I was listening to Doug TenNapel on YouTube, and he was reacting to the former Google CEO talking about how San Francisco GenAI people believe that GenAI will soon be able to write all programs and we will no longer need programmers.

I couldn't believe how idiotic his statements were. It's almost like he never really understood what programming is.

Let me walk you through why such statements are ridiculous.

First step: What do you call a program that writes another program? Do we have a special word for such programs? Do we distinguish between programs that write programs and programs that do not?

Answer: There is no difference. ALL programs write other programs. That's the essence of programming. Ultimately, every program is a compiler of sorts: it takes input in one form and translates it into output in another form. The inputs and outputs aren't limited to text or numbers -- they can be programs as well.
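The "program whose output is a program" pattern above can be shown in a few lines. This is a minimal illustration of my own (not from any particular library): one program emits Python source as plain text, then loads and runs it.

```python
# A program that writes another program: the generator below emits Python
# source code as a string, then "compiles" and runs it with exec().

def make_adder_source(n: int) -> str:
    """Generate the source code of a function that adds n to its input."""
    return f"def add_{n}(x):\n    return x + {n}\n"

source = make_adder_source(5)    # the output of this program is a program
namespace = {}
exec(source, namespace)          # load the generated program
print(namespace["add_5"](10))    # -> 15
```

Swap `exec` for writing the string to a `.py` file and the point is the same: the boundary between "data" and "program" is wherever you decide to put it.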

The idea of a program that writes another program is one of the oldest ideas in computing. It's absurd to think that GenAI writing programs is in any way innovative or new. If you know how GenAI actually works, you'd know that it is already a program that writes a program.

Second step: People think that "AI" has already beaten humans. I don't know why they think this, because I cannot think of a single instance where "AI" outperforms humans at anything. Remember, our brain does all of its thinking on roughly 20 W of power. Find me an AI that runs at that wattage, and let's compare an average 100-IQ individual against it.

Now, SOME programs are indeed better than humans. For instance, Stockfish is widely regarded as the strongest chess engine in existence. Some people might call something like Stockfish "AI," but it has nothing to do with GenAI or anything like that.

The way Stockfish works is rather simple. Humans came up with rules that tell the computer how to judge a position, how to calculate whether a move is good, and so on. Some of these rules are very simple, like "a rook is worth 5 pawns" or "if the king is in check and there is no move to get out of check, the game is lost." Others are more subtle, such as how to weigh a bishop against a knight in the opening versus the endgame. Stockfish takes these rules and runs them billions of times, searching deep into the tree of possible moves. After calculating the relative value of each move, it suggests the best one, even telling the user how many pawns that move is worth compared to the alternatives.
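The two ingredients described above -- a hand-written evaluation rule and a search that applies it to candidate moves -- can be sketched in miniature. This is a hedged illustration of my own, not Stockfish's actual code: only the classic material rule is shown, and the "search" is a single ply, where a real engine combines hundreds of tuned terms over billions of nodes.

```python
# A toy rule-based evaluator: the "a rook is worth 5 pawns" rule, plus a
# one-ply search that picks the move leading to the best-scoring position.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # in pawn units

def material(position):
    """Evaluate a position as our material minus the opponent's."""
    return (sum(PIECE_VALUES[p] for p in position["white"])
            - sum(PIECE_VALUES[p] for p in position["black"]))

def best_move(candidates):
    """One-ply search: `candidates` maps a move name to the position it
    leads to; pick the move whose resulting position scores highest."""
    return max(candidates, key=lambda m: material(candidates[m]))

before = {"white": ["R", "R", "B", "P", "P"], "black": ["Q", "P", "P", "P"]}
after_capture = {"white": ["R", "R", "B", "P", "P"], "black": ["P", "P", "P"]}
print(best_move({"RxQ": after_capture, "Rb1": before}))  # -> RxQ
```

Stacking this kind of evaluation under a deep recursive search (alpha-beta pruning, quiescence search, and so on) is, at its core, what the classical engine does.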

Now, Google's DeepMind developed some "AI" type stuff that beat top human players at Go. They called it "AlphaGo," and they were super excited about it because it trained by playing itself millions of times, learning what works as it went. AlphaGo did beat the world's best professionals, but the approach turned out to be brittle: researchers later found adversarial strategies that let even amateur humans beat its open-source successors (such as KataGo) at their own game. The game of Go is special because purely rule-based systems don't work well; there are just too many possible moves.
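The self-play idea can be caricatured with a toy game. This is an assumption-laden sketch of my own, not DeepMind's method: the program plays Nim (21 stones, take 1-3 per turn, whoever takes the last stone wins) against itself and learns a numeric value for each position purely from game outcomes.

```python
import random

random.seed(0)
values = {0: -1.0}  # facing 0 stones means you already lost

def value(n):
    return values.get(n, 0.0)

def learned_move(n):
    """Greedy policy: leave the opponent the worst-valued position."""
    return min(range(1, min(3, n) + 1), key=lambda t: value(n - t))

def self_play(games=5000, alpha=0.2, eps=0.3):
    for _ in range(games):
        stones, history, mover = 21, [], 0
        while stones > 0:
            if random.random() < eps:                 # explore randomly
                take = random.randint(1, min(3, stones))
            else:                                     # exploit learned values
                take = learned_move(stones)
            history.append((stones, mover))
            stones -= take
            mover ^= 1
        winner = mover ^ 1  # the player who took the last stone
        # Nudge every visited position toward the game's outcome.
        for pos, who in history:
            target = 1.0 if who == winner else -1.0
            values[pos] = value(pos) + alpha * (target - value(pos))

self_play()
```

No human wrote a rule like "leave the opponent a multiple of four"; the value table just drifts toward it from win/loss signals. That is the entire trick, scaled up with a neural network instead of a lookup table.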

DeepMind later applied the same idea to chess as "AlphaZero." In its 2017 showcase it beat the Stockfish of the day, though under match conditions critics disputed; since then, Stockfish (which now folds a small neural network, NNUE, into its hand-built search) has returned to the top of the engine rankings and consistently beats self-play engines like Leela Chess Zero in computer championships. It certainly beats GenAI chatbots, which often can't even follow the rules of chess.

Anyway, my point is this: humans will always be better at writing programs (that write programs that write programs) than "AI" ever will be. Our programs are sometimes better than humans at a task, but in reality, that's just humans doing human things very, very fast and accurately.

"AI" is neither artificial nor intelligent. It is built up by the very rule-based systems that people have developed over the years, thus making it not artificial, but designed by humans. (I'm thinking of "artificial" here as the "AI" writes itself. It doesn't.) And it has no semblance of intelligence at all.

At best, all of GenAI can be summed up as a way to guess what a human might say. It has no rational thought. It does nothing except count how many times humans have said X versus Y in different contexts. That's it. Even a slightly below-average human will be better at a specific task than "the sum total of humanity's knowledge" ever will be.
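The "counting X versus Y in context" claim can be made concrete with a deliberately tiny caricature of my own: a bigram model that, given a word, predicts whichever word most often followed it in the training text. Real GenAI uses neural networks over vast corpora, but the training objective -- predict the likely next token -- has the same shape.

```python
from collections import Counter, defaultdict

# Count, for each word, which words followed it and how often.
corpus = "the cat sat on the mat and the cat ran".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation seen in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" followed "the" twice, "mat" once -> "cat"
```

There is no model of what a cat or a mat *is* in there, only frequencies; chain `predict` on its own output and you get fluent-looking text with the same absence of understanding.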

GenAI will not replace your job. It has been thoroughly examined and found to be marketing fluff designed to get investors to part with their money. If your company laid you off, it was losing money and needed an excuse to fire you, or it wanted to replace you with cheaper labor and needed cover to do it.

3DTV was a better concept than GenAI.

Source: I used to work in AI/ML at a mega corporation. I actually learned how it all really works, and I found some of its fundamental assumptions to be very incorrect. I worked with top researchers in the field and made them blush when I pointed out their logical errors. Some of them couldn't do basic calculus or even read simple papers and understand them. At that point I knew it was a bubble (and some of the researchers confirmed it to be so) and decided that cows and sheep were a better use of my time than AI/ML. Corporate America could go screw itself; I wasn't going to save it from itself anymore.

Go ahead and let China dominate in AI. In fact, let's tell them it's the future and that they'll defeat America by sinking all of their resources into it. And while we're at it, let's watch the investors who pour their fortunes into it end up penniless.
SicilianOmega on scored.co
3 days ago 0 points (+0 / -0 )
> That's where you're wrong.

> Perceptrons never worked.

And modern AI doesn't work. What would you say the difference is?