I was listening to Doug TenNapel on YouTube, and he was reacting to the former Google CEO talking about how San Francisco GenAI people believe that pretty soon GenAI will write all programs and we'll no longer need programmers.

I couldn't believe how idiotic his statements were. It's almost like he never really understood what programming is.

Let me walk you through it so you understand why such statements are ridiculous.

First step: What do you call a program that writes another program? Do we have a special word for such programs? Do we distinguish between programs that write programs and programs that do not?

Answer: There is no difference. ALL programs write other programs. That's the essence of programming. Ultimately, all programs are compilers that take input in one form and translate it to output in another form. The "inputs" and "outputs" aren't just text or numbers -- they can be programs as well.
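Here's a minimal sketch in Python of what that means (the generated "doubling" program and the file name are just illustrative choices):

```python
# A program whose output is another program: it emits Python source,
# saves it to a file, and then runs it.
source = """
def double(x):
    return 2 * x

print(double(21))
"""

with open("generated.py", "w") as f:
    f.write(source)

# Running the program we just wrote prints 42.
exec(compile(source, "generated.py", "exec"))
```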

The idea of a program that writes another program is one of the oldest ideas ever. It's absurd to think that GenAI writing programs is in any way innovative or new. If you know how GenAI actually works, you'd know that it is already a program that writes a program.

Second step: People think that "AI" has already beaten humans. I don't know why they think this, because I cannot think of a single instance where "AI" outperforms humans at anything. Remember, our brains run on something like 20 W of power to do their thinking. Find me an AI that runs at that wattage and let's compare an average 100-IQ individual against it.

Now, SOME programs are indeed better than humans. For instance, Stockfish is widely known to be the best chess engine in the universe. Some people might call something like Stockfish "AI" but it has nothing to do with GenAI or anything like that.

The way Stockfish works is rather simple. Humans came up with rules to tell the computer how to judge a position, how to calculate whether a move is good or not, etc... These rules are sometimes very simple, like "A rook is worth 5 pawns" or "If the king is in check and there is no move to get out of check, then the game is totally lost." Other rules are more subtle, such as "Exchanging knights for bishops is good in the early game, but bad in the later game." Anyway, Stockfish takes these rules and runs them billions of times, probing deep into all possible moves to find the best moves. After calculating the relative value of each move, it gives a suggestion on which move is best, even telling the user how many pawns the move is worth versus the other moves.
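To give you a feel for what those hand-written rules look like in code, here's a toy material-counting evaluator in Python. The piece encoding is something I made up for the sketch, and real Stockfish evaluation is vastly more elaborate:

```python
# Classic material values: "a rook is worth 5 pawns", etc.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(position):
    """Score a position in pawns: positive favors White, negative Black.

    `position` is an iterable of piece letters, uppercase for White and
    lowercase for Black -- a made-up encoding, not a real engine's.
    """
    score = 0
    for piece in position:
        value = PIECE_VALUES.get(piece.upper(), 0)  # kings have no exchange value
        score += value if piece.isupper() else -value
    return score

# White has an extra rook, so the evaluation is +5 pawns.
print(evaluate("RNBQKP" + "nbqkp"))  # 5
```

An engine then runs a judgment like this over billions of candidate positions and reports the best line it found.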

Now, Google developed some "AI" type stuff that was able to beat some human players at Go. They called it "AlphaGo" and they were super excited about it because it would train by playing itself billions of times, getting better and better as it learned what works. Except AlphaGo really wasn't that great, and it didn't take long before humans figured out how to beat it at its own game. Now, the game of Go is special because rule-based systems don't work. There are just too many possible moves.

Someone took the idea of AlphaGo and trained it on chess instead, and the results were pretty bad. Stockfish can beat pretty much any AI out there handily. It can even beat GenAI models that don't even follow the rules of chess.

Anyways, my point is this: Humans will always be better at writing programs (that write programs that write programs) than "AI" ever will be. Our programs are sometimes better than humans, but in reality, it's just humans doing human things very, very fast and accurately.

"AI" is neither artificial nor intelligent. It is built up by the very rule-based systems that people have developed over the years, thus making it not artificial, but designed by humans. (I'm thinking of "artificial" here as the "AI" writes itself. It doesn't.) And it has no semblance of intelligence at all.

At best, all of GenAI can be summed up as a way to try to guess what a human might say. It doesn't have any rational thoughts. It doesn't do anything except count the number of times a human has said X versus Y in different contexts. That's it. Even slightly below-average humans will be better at specific tasks than "all the sum total knowledge of humanity" ever will be.
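If you want to see the "counting" idea in its most stripped-down form, here's a bigram sketch in Python. Real GenAI is enormously bigger and fancier, but this is the flavor of next-word guessing I'm describing:

```python
from collections import Counter, defaultdict

# Count how often each word follows each other word in a toy corpus,
# then "generate" by picking the most frequent follower.
corpus = "the cat sat on the mat the cat ran".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def guess_next(word):
    """Return the word that most often followed `word`, or None."""
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(guess_next("the"))  # "cat" -- seen twice, versus "mat" once
```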

GenAI will not replace your job. It has been thoroughly examined and found to be marketing fluff designed to get investors to waste their money. Your company was losing money and had to find an excuse to fire you, or it wanted cheaper replacements and needed an excuse to get rid of you.

3DTV was a better concept than GenAI.

Source: I used to work in AI/ML at a mega corporation. I actually learned how it all really works. I found some fundamental assumptions about it to be very incorrect. I worked with top researchers in the field and made them blush when I pointed out their logical errors. Some of them couldn't do basic calculus or even read simple papers and understand them. At that point I knew it was a bubble (and some of the researchers confirmed it to be so) and decided that cows and sheep were a better waste of my time than AI/ML. Corporate America could go screw itself; I wasn't going to save it from itself anymore.

Go ahead and let China dominate in AI. In fact, let's tell them it is the future and that they are going to defeat America by sinking all of their resources into it. And while we're at it, let's let the true believers pour their fortunes into it and laugh when they end up penniless.
15 comments:
justaddwater on scored.co
4 days ago · 1 point
So the advice is: don't use the tool that's making everyone more productive and profitable, let others invest in it, and let China own it.

Bullshit wall of text lol
MI7BZ3EW on scored.co
3 days ago · 0 points
Reading hard.

Grog not understand.

Me bang rock and spark make fire.
justaddwater on scored.co
3 days ago · -1 points
You are obviously super smart and far superior to all. Lolol
SicilianOmega on scored.co
4 days ago · 1 point
"Artificial" means "made by man," so AI is indeed artificial.
MI7BZ3EW on scored.co
4 days ago · 0 points
If you think that AI means someone wrote a program and that program became intelligent, you're thinking of something completely different from what people are talking about with AI. That's simply not what is happening today.

The idea behind modern AI is that the "I" creates itself. Or rather, humans set up the conditions where the program is able to write or correct itself until it becomes intelligent. For instance, AlphaGo is "AI" not because someone "artificed" AlphaGo, but because it wrote itself within the confines of the experiments that humans set it to run in. Thus, the "A" part implies that the "I" created itself.

There used to be the idea that humans could write a program that would be intelligent, and that's what we used to mean by AI, but that was back in the '70s, before it became clear that such a task was impossible.

If you wanted to describe such an intelligent program, we would just call it a program, or a rules-based system like Stockfish. It would not be considered AI by anyone nowadays, even though people in the '70s would have called it AI. (Think HAL 9000. It was programmed by humans.)

What's funny is that rules-based chatbots always perform better than GenAI given a sufficient set of rules. We wrote programs that passed the Turing Test a long time ago. It's old tech. Thus, '70s "AI" is better than 2020s "AI".
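For reference, those rules-based chatbots work roughly like this ELIZA-style sketch (the patterns here are invented for illustration; the real 1966 ELIZA script was far richer):

```python
import re

# Pattern -> response templates, tried in order. Pure rules, no statistics.
RULES = [
    (r"i need (.+)", "Why do you need {0}?"),
    (r"i am (.+)", "How long have you been {0}?"),
    (r".*", "Tell me more."),
]

def respond(text):
    for pattern, template in RULES:
        match = re.match(pattern, text.lower())
        if match:
            return template.format(*match.groups())

print(respond("I am tired of AI hype"))
# -> "How long have you been tired of ai hype?"
```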
SicilianOmega on scored.co
4 days ago · 1 point
AI is still created by man, no matter how many algorithms are between the programmer and the fully "trained" model. TensorFlow didn't write itself.

Modern AI is just the Perceptron scaled to a massive size. It's been around as long as "classic" AI.
MI7BZ3EW on scored.co
3 days ago · 0 points
That's where you're wrong.

Perceptrons never worked.

Modern AI is overfitting of data, but on such a scale that the vast majority of people don't realize they've just invented a gibberish machine.

I've been there looking at the fundamental concepts of how it works. And it doesn't work. It's all smoke and mirrors.

We've written a program that takes data, separates out some of it as test data and leaves the rest as training data, then determines how the model can be modified to more closely fit the training data. We'll just calculate the first derivative to see where the local maxima are; I'm sure nothing can go wrong there, particularly with integers. Never mind that we are in N^2 space, so the chance that a local maximum isn't even remotely close to the global maximum isn't a concern. (It is. BIGLY.)
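Here's what that local-maximum trap looks like in a toy Python sketch; the function, start points, and learning rate are all invented, but the failure mode is the one I mean:

```python
# Gradient ascent: follow the first derivative uphill. Start on the wrong
# side of the valley and you settle on a lower peak, not the global one.
def f(x):
    return -(x * x - 1) ** 2 + 0.5 * x  # two peaks; the right one is higher

def df(x):
    return -4 * x * (x * x - 1) + 0.5   # first derivative of f

def ascend(x, lr=0.01, steps=10_000):
    for _ in range(steps):
        x += lr * df(x)
    return x

print(ascend(-2.0))  # ~ -0.94: stuck on the lower peak
print(ascend(0.0))   # ~ +1.06: the higher peak
```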

Then it runs the test data to see how close it is. Oh, it did pretty bad. 10% is not great. But we can do it again and hope we get 12%!

The thing is, you're not supposed to train the model on the test data because you can fall into the trap of overfitting. Guess what they are doing? They take one version of the model as the model to be trained for the next iteration. The test data goes back into the pile of data, another set of test data is separated out, and now you're training on the test data from the previous run, and the test data of the current run was the training data for the previous run. Cool, now we're at 16%! Do it again, 18%, and do it a thousand more times, 25%. Keep going -- a million times we get to 60%. Pretty nice! A billion times -- 75%. Ten billion gives us 80%. Can we do a trillion? No, but even if we could, it would be 82%. If we could do a thousand trillion times, it would probably level off at 85%.
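In sketch form, the re-splitting pattern I'm describing looks like this (deliberately wrong on purpose; the fix is one fixed held-out test set that is never trained on):

```python
import random

data = list(range(100))   # stand-in for a labeled dataset
seen_in_training = set()

for round_number in range(5):
    random.shuffle(data)                # resplit every round...
    train, test = data[:80], data[80:]
    seen_in_training.update(train)
    # ...so this round's "test" examples have mostly been trained on already
    leaked = [x for x in test if x in seen_in_training]
    print(f"round {round_number}: {len(leaked)}/20 'test' examples "
          f"have already been trained on")
```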

Meanwhile, a newborn baby is at 95%.

But make the numbers really, really big and hope no one notices. Spend billions on hardware and electricity and hope no one notices. Invent new terminology for old concepts like matrices and hope no one notices. Just talk about things that don't matter, and don't you dare look at the man behind the curtain!

It's overfitting, but on an epic scale. Let's not use a polynomial of degree N to fit N points of data. No, let's use a degree-10^N polynomial, but make it sufficiently complex that no one notices it's just a polynomial with some constraints thrown in, because if the constraints were laid bare it would become obvious how dumb the model really is.
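Here's the polynomial version of the joke in runnable form (degree 9 against 10 points stands in for the rhetorical 10^N):

```python
import numpy as np

# Ten noisy points from a straight line. Degree 1 recovers the trend;
# degree 9 threads every point and falls apart outside the training range.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + rng.normal(0.0, 0.1, 10)  # true trend: y = 2x, plus noise

sane = np.polynomial.Polynomial.fit(x, y, deg=1)
overfit = np.polynomial.Polynomial.fit(x, y, deg=9)

x_new = 1.5  # outside the training range; the true value is about 3
print(sane(x_new))     # close to 3
print(overfit(x_new))  # typically far off: the curve wiggles between points
```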

My favorite part of the whole charade? If some of the data "doesn't work", just throw it out. We don't want "naughty" data in the data source. There's no way that the model would ever see such naughty data in the wild! The bane of all physics, really. "We should never see those numbers, so let's ignore them when we see them. Oh look, we never see those numbers now!"
SicilianOmega on scored.co
3 days ago · 0 points
> That's where you're wrong.

> Perceptrons never worked.

And modern AI doesn't work. What would you say the difference is?
SaltyJollyRoger on scored.co
3 days ago · 1 point
This reminds me of a story that came out a few months ago that tried to convince people an LLM had tried to "escape" by copying itself onto another computer.

It's difficult to get people to understand they're dealing with Cleverbot 2.0, and not a computer sentience.
ScallionPancake on scored.co
4 days ago · 0 points
I don't know, man... there are certain tasks it does pretty well. Like generating a ton of text (the ultimate plagiarism engine) in a short amount of time. I can totally see a team of paperwork monkeys being replaced by one guy armed with a chatbot.
MI7BZ3EW on scored.co
3 days ago · 0 points
I don't see a problem with it if the text is going to be read by no one.

I see a big problem if people are expected to read AI-generated text.

We read not to check off a box that we completed a random task but to communicate, mind-to-mind. Take the AI out of the equation so we can fully connect our minds together.
the-new-style on scored.co
4 days ago · 0 points
You're right but for all the wrong reasons.

AlphaGo and Stockfish are reinforcement learning, not generative.

The thing about programming is that the hardest part of the job _is not generating the code_.

The hardest part is requirements gathering and thinking of what to generate.

Mechanical Diggers put Guys With Spades out of business. But we still need mechanical digger operators.

CNC put manual machinists out of business but we still have CNC operators / programmers.

I've been coding for nigh on 45 years and I use a Coding Assistant and I think it's great.
MI7BZ3EW on scored.co
3 days ago · 0 points
GenAI is built with reinforcement techniques. They are not mutually exclusive.

> the hardest part is requirements gathering and thinking of what to generate

True

> mechanical diggers

The thing about software that people don't understand is that we're like Green Lantern. If we can imagine it, we can do it. We have mechanical digger factories that spawn mechanical diggers specifically suited for the digging task at hand. We have factories that make factories based on the size of the task at hand. We have factories that make factories that make factories that make...

The biggest challenge we run into is that people abstract away key details and then wonder why nothing works anymore. E.g., a helper that is really O(n) per call gets used where the whole thing looks like it should be O(n log n) or something, and the next thing you know, we've introduced an O(n^2) without realizing it (see the sketch below). Seems fine in testing, until we put it under load, and then it completely stops working.
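Here's a sketch of that trap, with Python list membership playing the hidden O(n) (the function names are made up):

```python
# Two versions of the same "simple" dedupe. The slow one hides an O(n)
# list scan inside its O(n) loop: quadratic overall. Fine on a 100-item
# test fixture, hopeless on a few million rows under load.
def dedupe_slow(items):
    seen, out = [], []
    for item in items:
        if item not in seen:   # `in` on a list is an O(n) scan
            seen.append(item)
            out.append(item)
    return out

def dedupe_fast(items):
    seen, out = set(), []
    for item in items:
        if item not in seen:   # `in` on a set is O(1) on average
            seen.add(item)
            out.append(item)
    return out

print(dedupe_slow([3, 1, 3, 2, 1]))  # [3, 1, 2]
print(dedupe_fast([3, 1, 3, 2, 1]))  # [3, 1, 2]
```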

> CNC

The old machinists who knew how to "feel" whether the tool speed was right are still around, and are still "feeling" the CNC machine. They can hear the chatter and know whether it's bad or not. I don't think there is a CNC machine in the world that doesn't have a machinist hovering over the "STOP!" button, or at least close by reading a tool catalogue.

> coding assistant

Heretic.

I'm still using Vim and I write Bash scripts to automate stuff. Oh, and I read the manuals. For fun.
BillboDickens on scored.co
3 days ago · 0 points
I don't know about most of this post, but the general idea of AI taking our jobs, especially programming jobs, is far in the future. Having used Claude, GPT, DeepSeek, and Gemini LLMs integrated into a code IDE, these are far from ready to replace humans.

I hear some CEOs are refusing to hire entry-level engineers, thinking that these LLMs can replace them. I frankly think that is a big mistake that will bite them hard in the future.

The code produced by these agents is riddled with compiler errors and bugs. And worse, if working with an existing code base, even a small one, LLMs struggle to modify and add to the code base without injecting many bugs and errors. No matter the size of the context window, they seem to struggle.

They’re best used as a tool to help aid in software engineering, not replace the engineer altogether.
MI7BZ3EW on scored.co
3 days ago · 0 points
> far in the future

So, the idea that we provide some sort of input and a computer program translates that to a computer program is nothing new, agreed?

The question then is whether GenAI can be the program that creates programs. The answer is obviously no.

I can write a program that can create programs way better than any GenAI can. We have tons of examples of these programs out in the wild right now.

The idea that you can provide limited input, with contradictory or vague directions about what sort of program you want, and get what you need is never going to succeed. At best, you can have a program that basically creates programs that have already been created, or slightly modifies them to supposedly suit your needs.

GenAI writing our software is a joke. It can't even write English or any other language. All it does is copy what it's seen before. I guess if you want more programs that have already been written, go ahead and use GenAI, but don't expect anything new with it.

> CEOs refusing to hire entry level engineers

Tale as old as time.

* Be rich man
* Want something
* Hire someone who knows how to get it
* Get the thing you want
* Want something cheaper this time
* Hire someone who doesn't know how to get it
* Don't get the thing you want
* Shake your fist at God and curse him!

There are plenty of engineers out there who know how to create the programs that people need, and do it as cheaply as possible. They charge $300k minimum per year, if not more, and expect to own a part of the company. We know that these guys can work miracles, just like plumbers can fix your leaky toilet and electricians can wire up a house.

Hire cheap labor and get more expensive problems.

> They're best used as a tool

No they're not.

What sort of idiot would trust it to even assist them in writing code?

If I could simplify the task of writing code, I would package it as a function in code. The reason why code is complex is because the job is complex.

I find it hilarious that people think they can do their job better if they skip reading the "boring" documentation and ask someone else to summarize it for them. Hey, idiot, if we could summarize it we would've done so already. There's a reason why your programming manual has 600 pages of detailed instructions on how each little function actually works. YOU HAVE TO KNOW THESE THINGS TO PROGRAM. If we could make a shortcut we would!

I made my bread and butter explaining to people with sub-average IQs why we can't just buy more computers and solve all the problems they're having, why you need to change how everything works once you reach a certain point in performance, why you can't predict how a system of sufficient complexity will behave, and why we need to monitor key elements of that system to try to understand its behavior.

And then I made a bunch of money teaching above-average IQ people how to make those things actually happen and how to keep their system running even when things start to fail.

If I could've packaged all that tribal knowledge into a product, I could've made billions. But it can't be done, because the nature of the problem is of the sort where general solutions will not work.

There is no substitute for humans understanding how something works and figuring out how the pieces can fit together. GenAI, or any AI, will never achieve anything like that.