I was listening to Doug Tenenpal on YouTube, and he was reacting to the former Google CEO talking about how San Francisco GenAI people believe that pretty soon GenAI could write all programs and we're no longer going to need programmers.

I couldn't believe how idiotic his statements were. It's almost like he never really understood what programming is.

Let me walk you through so you understand why such statements are ridiculous.

First step: What do you call a program that writes another program? Do we have a special word for such programs? Do we distinguish between programs that write programs and programs that do not?

Answer: There is no difference. ALL programs write other programs. That's the essence of programming. Ultimately, all programs are compilers that take input in one form and translate it to output in another form. The "inputs" and "outputs" aren't just text or numbers -- they can be programs as well.
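
To see how mundane the idea is, here is a minimal sketch of a program that writes and then runs another program. The function names are made up for illustration:

```python
# A minimal sketch of a program that writes, then runs, another program.
# The function and variable names here are made up for illustration.

def make_adder_source(n: int) -> str:
    """Generate the source code of a brand-new function that adds n."""
    return f"def add_{n}(x):\n    return x + {n}\n"

source = make_adder_source(5)
print(source)                   # the generated program, as plain text

namespace = {}
exec(source, namespace)         # "compile and load" the generated program
print(namespace["add_5"](10))   # -> 15
```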

The idea of a program that writes another program is one of the oldest ideas in the field. It's absurd to think that GenAI writing programs is in any way innovative or new. If you know how GenAI actually works, you'd know that it is already a program that writes a program.

Second step: People think that "AI" has already beaten humans. I don't know why they think this, because I cannot think of a single instance where "AI" outperforms humans at anything. Remember, the human brain runs on roughly 20 W of power. Find me an AI that runs at that wattage and let's compare an average 100 IQ individual against it.

Now, SOME programs are indeed better than humans. For instance, Stockfish is widely known to be the best chess engine in the universe. Some people might call something like Stockfish "AI" but it has nothing to do with GenAI or anything like that.

The way Stockfish works is rather simple. Humans came up with rules to tell the computer how to judge a position, how to calculate whether a move is good or not, and so on. These rules are sometimes very simple, like "A rook is worth 5 pawns" or "If the king is in check and there is no move to get out of check, then the game is totally lost." Other rules are more subtle, such as "Exchanging knights for bishops is good in the early game, but bad in the later game." Anyway, Stockfish takes these rules and applies them billions of times, probing deep into the tree of possible moves. After calculating the relative value of each move, it suggests the best one, even telling the user how many pawns' worth of advantage it gives compared to the alternatives.
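
To make the "rules plus brute-force search" idea concrete, here is a toy sketch in Python using the python-chess library. This is nothing like the real Stockfish, which is vastly more sophisticated; it just shows the shape of the approach: a hand-written material rule plus an exhaustive search.

```python
# Toy "rules + search" engine sketch. This is nowhere near Stockfish;
# it only shows the shape of the approach. Requires: pip install chess
import chess

# Hand-written rule: classic material values, measured in pawns.
VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
          chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Material balance from the point of view of the side to move."""
    score = 0
    for piece in board.piece_map().values():
        value = VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def negamax(board: chess.Board, depth: int) -> int:
    """Probe every line `depth` plies deep, applying the rules at the leaves."""
    if board.is_checkmate():
        return -1000            # rule: no way out of check = totally lost
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -10**9
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

def best_move(board: chess.Board, depth: int = 3) -> chess.Move:
    """Score every legal move and suggest the highest-valued one."""
    best, best_score = None, -10**9
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_score:
            best, best_score = move, score
    return best

print(best_move(chess.Board()))  # a (mediocre) opening move suggestion
```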

Now, Google developed some "AI" type stuff that was able to beat top human players at Go. They called it "AlphaGo" and they were super excited about it because it would train by playing itself millions of times, getting better and better as it learned what works. Except Go engines turned out not to be invincible: researchers later found adversarial strategies that let even amateur humans beat top Go AIs at their own game. Now, the game of Go is special because simple rule-based systems don't work. There are just too many possible moves.

DeepMind later took the AlphaGo approach and trained it on chess (AlphaZero), and an open-source descendant, Leela Chess Zero, followed. In recent head-to-head competitions, Stockfish still comes out on top. It can certainly beat LLM-style GenAI, which often can't even follow the rules of chess.

Anyways, my point is this: Humans will always be better at writing programs (that write programs that write programs) than "AI" ever will be. Our programs are sometimes better than humans, but in reality, it's just humans doing human things very, very fast and accurately.

"AI" is neither artificial nor intelligent. It is built up by the very rule-based systems that people have developed over the years, thus making it not artificial, but designed by humans. (I'm thinking of "artificial" here as the "AI" writes itself. It doesn't.) And it has no semblance of intelligence at all.

At best, all of GenAI can be summed up as a way to guess what a human might say. It doesn't have any rational thoughts. It doesn't do anything except count the number of times a human has said X versus Y in different contexts. That's it. Even slightly below-average humans will be better at specific tasks than "the sum total knowledge of humanity" ever will be.
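
As a crude illustration of the "counting what humans said in context" idea, here is a minimal bigram model. (Real LLMs replace the count table with a neural network that generalizes, but next-token prediction is the shared core.)

```python
# Minimal bigram "language model": literally count which word follows
# which, then predict the most frequent continuation. The corpus here
# is made up for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1      # "how often did a human say nxt after prev?"

def predict(word: str) -> str:
    """Return the word seen most often after `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))           # -> "cat" (seen twice, vs "mat"/"rat" once)
```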

GenAI will not replace your job. It has been thoroughly examined and found to be marketing fluff designed to get investors to waste their money. Your company was losing money and had to find an excuse to fire you, or it wanted to replace you with cheaper workers and needed an excuse to get rid of you.

3DTV was a better concept than GenAI.

Source: I used to work in AI/ML at a mega corporation. I actually learned how it all really works, and I found some fundamental assumptions about it to be very incorrect. I worked with top researchers in the field and made them blush when I pointed out their logical errors. Some of them couldn't do basic calculus or even read simple papers and understand them. At that point I knew it was a bubble (some of the researchers confirmed as much) and decided that cows and sheep were a better waste of my time than AI/ML. Corporate America could go screw itself; I wasn't going to save it from itself anymore.

Go ahead and let China dominate in AI. In fact, let's tell them it is the future and that they are going to defeat America by sinking all of their resources into it. And while we're at it, let's let the true believers pour their fortunes into it and laugh when they end up penniless.
BillboDickens on scored.co
3 days ago
I don't know about most of this post, but the day when AI takes our jobs, especially programming jobs, is far in the future. Having used Claude, GPT, DeepSeek, and Gemini LLMs integrated into a code IDE, I can say these are far from ready to replace humans.

I hear some CEOs are refusing to hire entry-level engineers, thinking that these LLMs can replace them. I frankly think that is a big mistake that will bite them hard in the future.

The code produced by these agents is riddled with compiler errors and bugs. Worse, when working with an existing code base, even a small one, LLMs struggle to modify and extend it without injecting many bugs and errors. No matter the size of the context window, they seem to struggle.

They’re best used as a tool to help aid in software engineering, not replace the engineer altogether.
MI7BZ3EW on scored.co
3 days ago
> far in the future

So, the idea that we provide some sort of input and a computer program translates that to a computer program is nothing new, agreed?

The question then is whether GenAI can be the program that creates programs. The answer is obviously no.

I can write a program that creates programs far better than any GenAI can. We have tons of examples of these programs out in the wild right now: compilers, parser generators, scaffolding and code-generation tools.
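
For instance, here is a sketch of the kind of deterministic code generator that has existed for decades. The schema and the generated class are invented for illustration; real-world examples of the pattern include compilers, yacc/bison, protobuf, and ORM generators:

```python
# Deterministic code generator: turn a tiny schema into Python source.
# The schema and the generated class are invented for illustration.
schema = {"name": "User", "fields": [("id", "int"), ("email", "str")]}

def generate_class(spec: dict) -> str:
    """Emit the source code of a class, one field per constructor arg."""
    lines = [f"class {spec['name']}:"]
    args = ", ".join(f"{f}: {t}" for f, t in spec["fields"])
    lines.append(f"    def __init__(self, {args}):")
    for field, _ in spec["fields"]:
        lines.append(f"        self.{field} = {field}")
    return "\n".join(lines) + "\n"

source = generate_class(schema)
print(source)                   # valid Python, identical on every run

namespace = {}
exec(source, namespace)
user = namespace["User"](1, "a@example.com")
print(user.email)               # -> a@example.com
```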

The idea that you can provide limited input, with contradictory or vague directions about what sort of program you want, and get that program is never going to succeed. At best, you can have a program that recreates programs that have already been written, or slightly modifies them to supposedly suit your needs.

GenAI writing our software is a joke. It can't even write English or any other language on its own; all it does is copy what it's seen before. I guess if you want more programs that have already been written, go ahead and use GenAI, but don't expect anything new from it.

> CEOs refusing to hire entry level engineers

Tale as old as time.

* Be rich man
* Want something
* Hire someone who knows how to get it
* Get the thing you want
* Want something cheaper this time
* Hire someone who doesn't know how to get it
* Don't get the thing you want
* Shake your fist at God and curse him!

There are plenty of engineers out there who know how to create the programs that people need, and do it as cheaply as possible. They charge $300k minimum per year, if not more, and expect to own a part of the company. We know that these guys can work miracles, just like plumbers can fix your leaky toilet and electricians can wire up a house.

Hire cheap labor and get more expensive problems.

> They're best used as a tool

No they're not.

What sort of idiot would trust it to even assist them in writing code?

If I could simplify the task of writing code, I would package that simplification as a function. The reason code is complex is that the job is complex.

I find it hilarious that people think they can do their job better if they skip reading the "boring" documentation and ask someone else to summarize it for them. Hey, idiot, if we could summarize it we would've done so already. There's a reason why your programming manual has 600 pages of detailed instructions on how each little function actually works. YOU HAVE TO KNOW THESE THINGS TO PROGRAM. If we could make a shortcut we would!

I made my bread and butter explaining to people with sub-average IQs why we can't just buy more computers and solve all the problems they are having, why you need to change how everything works once you reach a certain point in performance, and why you can't predict how a sufficiently complex system will behave, which is why we need to monitor key elements of that system to try to understand its behavior.

And then I made a bunch of money teaching above-average IQ people how to make those things actually happen and how to keep their system running even when things start to fail.

If I could've packaged all that tribal knowledge into a product, I could've made billions. But it can't be done, because the nature of the problem is one where general solutions will not work.

There is no substitute for humans understanding how something works and figuring out how the pieces can fit together. GenAI, or any AI, will never achieve anything like that.