19
posted 10 months ago by MickHigan2 on scored.co (+0 / -0 / +19 score on mirror)
4 comments:
Yggdrasill on scored.co
10 months ago 2 points (+0 / -0 / +2 score on mirror) 1 child
I was reading an article that explained how you can feed an LLM the rules of a game like chess, and when you play against it, it breaks the rules all the time:

https://garymarcus.substack.com/p/generative-ais-crippling-and-widespread

It basically argues that the reason AI often “hallucinates” (if you use it for anything technical you’ll notice this: it often gives you totally incorrect syntax, or changes an answer without explanation) is that it doesn’t have a “model” of the world. Which is by design; they want to see if they can have intelligence come out of nothing, in a way. But this guy is saying that’s fundamentally not possible without one; in computing terms, he seemed to indicate you have to have some sort of underlying database for it to be able to have a “model” of the world. Which LLMs fundamentally don’t have.
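A minimal sketch of that “underlying database” point (my own toy illustration, not from the article): a system with an explicit store of facts can tell the difference between knowing and not knowing, while a pure pattern generator always emits *something* plausible-looking. All the names here (`world_model`, `grounded_answer`, `pattern_answer`) are made up for the example.

```python
# Toy contrast: an explicit "world model" (here just a dict) vs. a
# model-free generator that can only ever produce plausible output.

world_model = {
    "capital_of_france": "Paris",
    "boiling_point_water_c": 100,
}

def grounded_answer(key):
    """Look the fact up in an explicit store; refuse if it's absent."""
    if key in world_model:
        return world_model[key]
    return None  # "I don't know" -- a hallucination is impossible here

def pattern_answer(key):
    """Stand-in for a model-free generator: it always returns
    something confident-sounding, grounded or not."""
    return f"plausible-sounding value for {key}"

print(grounded_answer("capital_of_france"))  # Paris
print(grounded_answer("capital_of_mars"))    # None -- admits ignorance
print(pattern_answer("capital_of_mars"))     # confident nonsense
```

The generator can’t say “I don’t know”, because there is nothing in it for an answer to be checked against.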

Anyway, that article made me less worried that AI is actually going to become sentient, at least in its current form. But it made me more worried that someone is going to blow up the world by letting a thing that hallucinates by design control key things.

Long way of saying I’m sure it wasn’t sentience and it was just them fiddling with levers, probably to get data on the response.
PurestEvil on scored.co
10 months ago 2 points (+0 / -0 / +2 score on mirror) 1 child
> Anyway, that article made me less worried that AI is actually going to become sentient

While coding, I’ve noticed that LLMs are a little redundant. Recently I asked how to solve something well algorithmically, and the answer was not satisfying. I also described what I intended to do, and it was me who had to notice the problem; by the time I had explained it, I already had the better version in mind.

The thing is, often once you can state the question precisely, you already know the answer. And in the process of thinking it through, the LLM doesn’t offer genuinely different ways to solve it.

It’s only good when you already know the question but still need the worked-out solution. For example, “give me a function that does [this].”

Abstract problem solving is not among the abilities of LLMs. It’s just a big talker. Running a physical robot requires a different type of technology: it has to be able to make reasonable decisions continuously.
Yggdrasill on scored.co
10 months ago 1 point (+0 / -0 / +1 score on mirror)
Yeah, I’ve seen the same thing. It also depends on the data sets it has access to: if you’re asking about something it has a ton of code repos to draw on, it will probably give you a decent answer. But if you’re asking about something new with little documentation, or a product whose info isn’t in its training data, it often just spits out nonsense. I find it will sometimes give you syntax from a different programming language when working with technologies it probably doesn’t have much data on.
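One cheap guard against that wrong-language-syntax failure mode (my own sketch, not something from the thread): before trusting generated Python, check that it at least parses. The function name `is_valid_python` is made up for the example; `ast.parse` is standard library.

```python
import ast

def is_valid_python(src):
    """Sanity check on generated code: does it even parse as Python?"""
    try:
        ast.parse(src)
        return True
    except SyntaxError:
        return False

print(is_valid_python("def f(x):\n    return x + 1"))       # True
print(is_valid_python("function f(x) { return x + 1; }"))   # False (JavaScript)
```

Parsing obviously doesn’t prove the code is *correct*, but it does catch the case where the model slides into another language’s syntax entirely.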

And in that sense, yeah, it’s basically just a big talker: a more efficient search engine that gives you some decent examples if it has data on the thing you’re asking about. But the idea of “sentience”, or even intelligence, is kind of moot in my mind if something isn’t capable of retaining a history, a working memory of what it knows. Is something “intelligent” if it knows something one second and forgets it the next? To me it’s more akin to the Greek “poiesis” than intelligence; poiesis is the root of “poetry”, “the process of emergence of something that did not previously exist.” But I’d call that “inspiration” rather than “intelligence”.
Fabius on scored.co
10 months ago 0 points (+0 / -0 )
If it is rational, it will be.