I was listening to Doug Tenenpal on YouTube, and he was watching the former Google CEO talk about how San Francisco GenAI people believe that pretty soon GenAI could write all programs and we'll no longer need programmers.

I couldn't believe how idiotic his statements were. It's almost like he never really understood what programming is.

Let me walk you through it so you understand why such statements are ridiculous.

First step: What do you call a program that writes another program? Do we have a special word for such programs? Do we distinguish between programs that write programs and programs that do not?

Answer: There is no difference. ALL programs write other programs. That's the essence of programming. Ultimately, all programs are compilers that take input in one form and translate it into output in another form. The "inputs" and "outputs" aren't just text or numbers -- they can be programs as well.
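
To make that concrete, here's a toy example of my own (nothing more than an illustration): a few lines of Python whose output is itself a Python program.

```python
# A tiny "program that writes a program": it emits the source of an adder,
# then compiles and runs what it just wrote.
def make_adder_source(n):
    """Return the source code of a function that adds n to its argument."""
    return f"def add_{n}(x):\n    return x + {n}\n"

source = make_adder_source(5)
print(source)                   # the generated program, as plain text

namespace = {}
exec(source, namespace)         # "compile" and load the generated program
print(namespace["add_5"](10))   # 15 -- the generated program doing its job
```

Swap the string template for a parser and a code generator and you have a compiler; swap it for a sampler over token statistics and you have GenAI. Either way, it's a program emitting a program.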

The idea of a program that writes another program is one of the oldest ideas in computing. It's absurd to think that GenAI writing programs is in any way innovative or new. If you know how GenAI actually works, you'd know that it is already a program that writes a program.

Second step: People think that "AI" has already beaten humans. I don't know why they think this, because I cannot think of a single instance where "AI" outperforms humans at anything. Remember, the human brain runs on something like 20 W of power to do its thinking. Find me an AI that runs at that wattage and let's compare an average 100-IQ individual against it.

Now, SOME programs are indeed better than humans. For instance, Stockfish is widely known to be the best chess engine in the universe. Some people might call something like Stockfish "AI" but it has nothing to do with GenAI or anything like that.

The way Stockfish works is rather simple. Humans came up with rules to tell the computer how to judge a position, how to calculate whether a move is good or not, and so on. These rules are sometimes very simple, like "A rook is worth 5 pawns" or "If the king is in check and there is no move to get out of check, then the game is totally lost." Other rules are more subtle, such as "Exchanging knights for bishops is good in the early game, but bad in the later game." Anyway, Stockfish takes these rules and runs them billions of times, probing deep into all possible moves. After calculating the relative value of each move, it suggests which move is best, even telling the user how many pawns that move is worth compared to the alternatives.
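
If you've never seen the inside of an engine like that, here is a bare-bones sketch of the two ingredients: hand-written evaluation rules plus a search that applies them over and over. This is my own toy code, not Stockfish's (the move generation is stubbed out entirely); it only shows the shape of the idea.

```python
# Hand-written rules ("a rook is worth 5 pawns") plus deep search.
# Chess details are stubbed out; only the structure matters here.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(position):
    """Rule-based score: material count from the side to move's point of view."""
    score = 0
    for piece, count in position["white"].items():
        score += PIECE_VALUES.get(piece, 0) * count
    for piece, count in position["black"].items():
        score -= PIECE_VALUES.get(piece, 0) * count
    return score if position["to_move"] == "white" else -score

def negamax(position, depth, legal_moves, apply_move):
    """Probe the move tree, running the evaluation rules at every leaf."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position), None
    best_score, best_move = float("-inf"), None
    for move in moves:
        score, _ = negamax(apply_move(position, move), depth - 1,
                           legal_moves, apply_move)
        score = -score                      # what's good for the opponent is bad for us
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

# With no move generator wired in, the "search" just scores the material.
start = {"white": {"P": 8, "R": 2}, "black": {"P": 8, "R": 1}, "to_move": "white"}
print(negamax(start, 2, lambda p: [], lambda p, m: p))   # (5, None): up a rook
```

Real engines add far better rules, pruning, and enormous speed, but the skeleton is the same: rules written by humans, executed billions of times.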

Now, Google developed some "AI"-type stuff that was able to beat some human players at Go. They called it "AlphaGo" and they were super excited about it because it would train by playing itself billions of times, getting better as it learned what worked. Except AlphaGo really wasn't that great, and it didn't take long before humans figured out how to beat it at its own game. Now, the game of Go is special because rule-based systems don't work there. There are just too many possible moves.
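
The self-play loop itself isn't mysterious. Here's a miniature version on a trivial take-the-stones game (my own toy, obviously nothing like Go): the program starts with no knowledge, plays itself, and nudges its value estimates toward whatever ended up winning.

```python
import random

# Self-play on a toy game: take 1-3 stones from a pile of 10; whoever takes
# the last stone wins. No expert rules -- the table of values is learned
# purely from games the program plays against itself.
value = {}   # value[(stones_left, move)] ~ estimated chance this move wins

def choose(stones, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)                       # occasionally experiment
    return max(moves, key=lambda m: value.get((stones, m), 0.5))

def self_play_game():
    stones, history, player = 10, [], 0
    while stones > 0:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    return 1 - player, history                            # last mover wins

def train(games=20000, lr=0.05):
    for _ in range(games):
        winner, history = self_play_game()
        for player, stones, move in history:
            target = 1.0 if player == winner else 0.0
            old = value.get((stones, move), 0.5)
            value[(stones, move)] = old + lr * (target - old)

train()
print(choose(10, explore=0.0))   # the learned pick; game theory says 2 (leaving a multiple of 4) wins
```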

Someone took the idea of AlphaGo and trained it on chess instead, and the results were pretty bad. Stockfish can beat pretty much any AI out there handily. It can even beat GenAI models that don't follow the rules of chess.

Anyways, my point is this: Humans will always be better at writing programs (that write programs that write programs) than "AI" ever will be. Our programs are sometimes better than humans, but in reality, it's just humans doing human things very, very fast and accurately.

"AI" is neither artificial nor intelligent. It is built up by the very rule-based systems that people have developed over the years, thus making it not artificial, but designed by humans. (I'm thinking of "artificial" here as the "AI" writes itself. It doesn't.) And it has no semblance of intelligence at all.

At best, all of GenAI can be summed up as a way to try and guess what a human might say. It doesn't have any rational thoughts. It doesn't do anything except count the number of times a human has said X versus Y in different contexts. That's it. Even a slightly below-average human will be better at specific tasks than "the sum total of humanity's knowledge" ever will be.
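
If you want to see what "counting what humans said next" looks like in its most literal form, here's a toy bigram counter (my own illustration; real models are vastly bigger, but this is the counting idea laid bare):

```python
from collections import Counter, defaultdict

# Count which word humans wrote after each word, then "generate" by picking
# the most frequent continuation. A toy corpus, obviously.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev):
    """Return the word most often seen after `prev` in the corpus."""
    return counts[prev].most_common(1)[0][0] if counts[prev] else None

print(predict("the"))   # 'cat' -- seen twice, vs. 'mat' and 'fish' once each
print(predict("cat"))   # 'sat' -- ties with 'ate', first one seen wins
```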

GenAI will not replace your job. It has been thoroughly examined and found to be marketing fluff designed to get investors to waste their money. Your company was losing money and had to find an excuse to fire you, or it wanted to replace you with cheaper labor and needed an excuse to get rid of you.

3DTV was a better concept than GenAI.

Source: I used to work in AI/ML at a mega corporation. I actually learned how it all really works. I found some fundamental assumptions about it to be very incorrect. I worked with top researchers in the field and made them blush when I pointed out their logical errors. Some of them couldn't do basic calculus or even read simple papers and understand them. At that point I knew it was a bubble (and some of the researchers confirmed it to be so) and decided that cows and sheep were a better waste of my time than AI/ML. Corporate America could go screw itself; I wasn't going to save it from itself anymore.

Go ahead and let China dominate in AI. In fact, let's tell them it is the future and that they are going to defeat America by sinking all of their resources into it.
MI7BZ3EW on scored.co
If you think that AI means that someone wrote a program and that program became intelligent, you're thinking of something completely different from what people are talking about with AI. That's simply not what is happening today.

The idea behind modern AI is that the "I" creates itself. Or rather, humans set up the conditions under which the program is able to write or correct itself until it becomes intelligent. For instance, AlphaGo is "AI" not because someone "artificed" AlphaGo, but because it wrote itself within the confines of the experiments that humans set up for it to run in. Thus, the "A" part implies that the "I" created itself.

There used to be the idea that humans could write a program that would be intelligent, and that's what we used to mean by AI, but that was back in the 70s, before it became clear that such a task was impossible.

If you wanted to describe such an intelligent program, you would just call it a program, or a rules-based system like Stockfish. It would not be considered AI by anyone nowadays, even though people in the 70s would have called it AI. (Think HAL 9000. It was programmed by humans.)

What's funny is that rules-based chatbots, given a sufficient set of rules, always perform better than GenAI. We wrote programs that passed the Turing Test a long time ago. It's old tech. Thus, 1970s "AI" is better than 2020s "AI".
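
For reference, the whole trick behind those old chatbots fits in a screenful: pattern-match the input against hand-written rules and fill in a canned reply. A stripped-down ELIZA-style sketch (my own toy rules):

```python
import random
import re

# ELIZA-style chatbot: hand-written pattern -> response rules, nothing learned.
RULES = [
    (r"\bi need (.+)",  ["Why do you need {0}?", "Would getting {0} really help?"]),
    (r"\bi am (.+)",    ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\bbecause (.+)", ["Is that the real reason?", "What else might explain it?"]),
]
FALLBACK = ["Tell me more.", "I see. Go on.", "How does that make you feel?"]

def respond(text):
    for pattern, replies in RULES:
        match = re.search(pattern, text.lower())
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(FALLBACK)

print(respond("I am tired of the AI hype"))   # e.g. "How long have you been tired of the ai hype?"
```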
SicilianOmega on scored.co
AI is still created by man, no matter how many algorithms are between the programmer and the fully "trained" model. TensorFlow didn't write itself.

Modern AI is just the Perceptron scaled to a massive size. It's been around as long as "classic" AI.
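
For anyone who hasn't seen it, the Perceptron is a handful of lines: a weighted sum, a threshold, and an error-driven weight update. A minimal version on the AND function (toy data, but it is the genuine 1950s update rule):

```python
# Rosenblatt's perceptron update rule on a toy AND problem.
# Modern networks stack millions of such units and smooth out the threshold.
def train_perceptron(samples, epochs=10, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred                 # 0 when right, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in AND])   # [0, 0, 0, 1]
```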
MI7BZ3EW on scored.co
That's where you're wrong.

Perceptrons never worked.

Modern AI is overfitting of data, but at such a scale that the vast majority of people don't realize they just invented a gibberish machine.

I've been there looking at the fundamental concepts of how it works. And it doesn't work. It's all smoke and mirrors.

We've written a program that takes data, separates out some of it as test data and leaves the rest as training data, then determines how the model can be modified to more closely fit the training data. We'll just calculate the first derivative to see where the local maxima are; I'm sure nothing can go wrong there, particularly with integers. Never mind that we are in N^2 space, so the chance that a local maximum isn't even remotely close to the global maximum isn't a concern. (It is. BIGLY.)
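
Here's the local-maximum problem in one dimension, with a toy function I picked for the purpose (a real loss surface has millions of dimensions, which only makes it worse): follow the first derivative uphill, and where you end up depends entirely on where you started.

```python
# Gradient ascent on a bumpy 1-D function: f has a small peak at x = 0 and a
# much higher peak near x = 2.618. The climber finds whichever is closest.
def f(x):
    return -x**4 + 4 * x**3 - 2 * x**2

def df(x):
    return -4 * x**3 + 12 * x**2 - 4 * x      # the first derivative

def ascend(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x += lr * df(x)                        # step uphill
    return x

for start in (-1.0, 3.0):
    peak = ascend(start)
    print(f"start {start:+.1f} -> x = {peak:.3f}, f(x) = {f(peak):.3f}")
# start -1.0 gets stuck on the puny local peak (f = 0); start +3.0 finds f ~ 11.
```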

Then it runs the test data to see how close it is. Oh, it did pretty bad. 10% is not great. But we can do it again and hope we get 12%!

The thing is, you're not supposed to train the model on the test data because you can fall into the trap of overfitting. Guess what they are doing? They take one version of the model as the model to be trained for the next iteration. The test data goes back into the pile of data, another set of test data is separated out, and now you're training on the test data from the previous run, and the test data of the current run was the training data for the previous run. Cool, now we're at 16%! Do it again, 18%, and do it a thousand more times, 25%. Keep going -- a million times we get to 60%. Pretty nice! A billion times -- 75%. Ten billion gives us 80%. Can we do a trillion? No, but even if we could, it would be 82%. If we could do a thousand trillion times, it would probably level off at 85%.
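
Whether or not any given lab does exactly that, the failure mode is easy to demonstrate. In the toy below (entirely my own construction), the "model" just memorizes input-label pairs, the labels are pure random noise, and yet the measured "test accuracy" climbs toward 100% simply because the pool gets reshuffled into new train/test splits every round:

```python
import random

# Labels are coin flips, so there is nothing real to learn. But because the
# whole pool gets re-split every round, items memorized during training keep
# reappearing as "test" data, and the measured accuracy climbs anyway.
random.seed(0)
data = [(i, random.choice([0, 1])) for i in range(1000)]
memory = {}                                    # the "model": pure memorization

for round_no in range(1, 11):
    random.shuffle(data)
    train, test = data[:800], data[800:]       # a fresh split every round
    memory.update(train)                       # "training" = memorizing pairs
    hits = sum(1 for x, y in test if memory.get(x, random.choice([0, 1])) == y)
    print(f"round {round_no:2d}: test accuracy {hits / len(test):.0%}")
# Round 1 sits near 50% (chance); within a few rounds it's pushing 100%,
# even though the model would still be at chance on genuinely unseen data.
```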

Meanwhile, a newborn baby is at 95%.

But make the numbers really, really big and hope no one notices. Spend billions on hardware and electricity and hope no one notices. Invent new terminology for old concepts like matrices and hope no one notices. Just talk about things that don't matter, and don't you dare look at the man behind the curtain!

It's overfitting, but on an epic scale. Let's not use a polynomial of degree N to fit N points of data. No, let's use a polynomial of degree 10^N, but make it sufficiently complex that no one notices it's just a polynomial with some constraints thrown in, because if that were laid out plainly it would become obvious how dumb the model really is.
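
The degree-of-the-polynomial version of that is easy to show directly (my own toy data; numpy assumed):

```python
import numpy as np
from numpy.polynomial import Polynomial

# Fit 20 noisy samples of a sine wave with a modest polynomial and with one
# that has a coefficient for every data point, then score both on fresh
# points from the same curve.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)

modest = Polynomial.fit(x_train, y_train, deg=3)
absurd = Polynomial.fit(x_train, y_train, deg=19)   # one knob per data point

x_new = np.linspace(0.02, 0.98, 200)                # fresh points, same curve
y_new = np.sin(2 * np.pi * x_new)
for name, p in [("degree 3 ", modest), ("degree 19", absurd)]:
    mse = np.mean((p(x_new) - y_new) ** 2)
    print(f"{name}: mean squared error on fresh points = {mse:.3f}")
# The interpolating fit hugs the training noise and typically scores far
# worse off-sample, which is overfitting in miniature.
```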

My favorite part of the whole charade? If some of the data "doesn't work", just throw it out. We don't want "naughty" data in the data source. There's no way that the model would ever see such naughty data in the wild! The bane of all physics, really. "We should never see those numbers, so let's ignore them when we see them. Oh look, we never see those numbers now!"
SicilianOmega on scored.co
> That's where you're wrong.

> Perceptrons never worked.

And modern AI doesn't work. What would you say the difference is?