I was speaking to my friend from undergrad, a PhD student at Stanford who works on computer vision. I was amazed by the amount of progress purportedly being made in the field, and in AI more broadly.

He recommended I watch the demo of Tesla’s Optimus robot, here it is:

https://youtu.be/cpraXaw7dyc?si=2rAvQkhwTFWkrNW1

These robots have only been getting better and better, at an accelerating rate, in recent years.

Hardware capability, long a significant obstacle in the way of AI progress, is almost doubling yearly. With quantum computing on the horizon, I anticipate that obstacle will disappear completely.
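Taking the "almost doubling yearly" claim at face value, the compounding is easy to check. A quick sketch, assuming an exact yearly doubling for illustration:

```python
# If capability doubles every year, after n years it is 2**n times the start.
for years in (5, 10, 20):
    print(f"{years} years of doubling -> {2**years:,}x")
```

Ten years of doubling is already a roughly 1,000x increase; twenty years, roughly 1,000,000x.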

Once robots like Tesla's Optimus become more advanced, how long will it be until they coat them in bulletproof armor, arm them, and have them come after us? I used to dismiss ideas like this as science fiction, but now it seems like it'll be reality within the next decade or two.

If so, is there any chance of turning things around and revolting? I don't think so. If we let it get to that point, the best we'd be able to do is hope for a miracle.
Supermatmike on scored.co
2 months ago 22 points (+0 / -0 / +22 on mirror) 3 children
What we call "AI" today is not something to be worried about. It's simply very complex algorithm code; it's not intelligent, it does not think. At a base level it's no different from Minecraft world generation.

We're no closer to AI today than we were in 1980.

I think it will actually prove to be a boon, rendering one's ability to find the truth in digital spaces an impossibility. Soon no one will be able to tell fact from fiction on the internet; everyone will be forced to go back to real life.
Leporidae on scored.co
2 months ago 5 points (+0 / -0 / +5 on mirror)
> "it's simply very complex algorithm code, it's not intelligent, it does not think"

It's not a complex algorithm. It's statistics, built on top of a staggeringly large dataset. It essentially does a playback of the thinking people used when writing the terabytes of text that were turned into the hundreds of billions of rules like "if word A appears after word B then word C is more likely to follow" that an LLM consists of. Deep-learning models in other domains have similar kinds of rules, just over tokens that aren't word fragments.
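The "word A after word B" idea above can be sketched as a toy bigram model. This is a massively simplified stand-in for what an LLM's learned statistics look like (real models learn billions of continuous parameters over subword tokens, not literal lookup rules):

```python
from collections import Counter, defaultdict

# Count, for each word in a tiny "training corpus", which word follows it.
corpus = "the cat sat on the mat the cat ran".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Return the statistically most common successor of `word`.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" twice, "mat" once
```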

It's not intelligent, it doesn't think, and it doesn't learn or have any form of memory. It gives the illusion of having a conversation by being fed all your previous questions and its replies every time you write a new question.
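That "illusion of conversation" boils down to the client re-sending the whole transcript on every turn. A minimal sketch, where `fake_model` is a hypothetical stand-in for a stateless model call:

```python
def fake_model(prompt: str) -> str:
    # Stand-in for a stateless LLM call: it sees only this one string,
    # and retains nothing between calls.
    return f"[reply to a prompt of {len(prompt)} chars]"

history = []  # the *client* keeps the memory, not the model

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # Every turn, the full transcript so far is concatenated into one prompt.
    prompt = "\n".join(history)
    reply = fake_model(prompt)
    history.append(f"Assistant: {reply}")
    return reply
```

Each call to `chat` sends a longer prompt than the last, which is also why long conversations eventually hit a context-length limit.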

Despite not thinking, it is useful enough to solve a large number of real-world problems.
yudsfpbc on scored.co
2 months ago 2 points (+0 / -0 / +2 on mirror) 1 child
The first program to pass a Turing Test was a rule-based algorithm hand-written by smart people who had interesting responses to common phrases.
llamatr0n on scored.co
2 months ago -2 points (+0 / -0 / -2 on mirror)
No program has passed the Turing Test.
pm300 on scored.co
2 months ago 1 point (+0 / -0 / +1 on mirror) 1 child
Did you watch the video?
Supermatmike on scored.co
2 months ago 1 point (+0 / -0 / +1 on mirror) 1 child
It's a robot.
I don't exactly see how that's cause for concern.
pm300 on scored.co
2 months ago 1 point (+0 / -0 / +1 on mirror) 3 children
Read the last two paragraphs of my post. It doesn’t even need to get to that point to be concerning. Would you want a couple of those bots armed with guns circling your neighborhood 24/7 for your “safety”?
Fabius on scored.co
2 months ago 2 points (+0 / -0 / +2 on mirror) 1 child
It will never happen.

Sam Hyde made a great point about the robot ascendency. It will never happen because the second a robot or self driving car accidentally kills a nigger, it's all shut down.
TakenusernameA on scored.co
2 months ago 1 point (+0 / -0 / +1 on mirror)
Theyre already shutting it down because its been contradicting jewish narratives
yudsfpbc on scored.co
2 months ago 1 point (+0 / -0 / +1 on mirror)
Did you hear about the time DARPA or whatever built an AI-powered automated sentry, and a bunch of low IQ marines were charged with getting close to it? Pretty much every one of them did. One of them just used a big cardboard box. Someone else beat it simply by wearing a mask and walking on all fours. Even basic camouflage confused the AI.
Supermatmike on scored.co
2 months ago 0 points (+0 / -0) 1 child
Well, to touch further on that, I don't really think that's as big of a problem as you're making it out to be. Firstly, armor is heavy. Any armor worth a damn would render a robot like that anywhere from slow as a snail to outright stationary. And even if they equipped it with some sort of sci-fi bullshit armor that doesn't weigh anything, light it on fire or hit it with explosives, especially a shaped charge, and that thing is going to be rendered into scrap metal pretty damn quick.

If it's the advanced detection tech you're worried about, remember how a couple of months ago some Marines fooled a DARPA detection AI by hiding under a cardboard box, Solid Snake style.

And then there's the most important reason, I think, why this isn't a problem: cost. Sending out an army of militarized robots like this would be a monumental expense, from research and development, to making sure these things even work to begin with, to actually building them, to keeping them working in field conditions: mud, rain, long periods of sun exposure, extreme temperatures and terrain. Fielding an army of these things would be a financial and logistical nightmare.

As much as I hate to say it, it would be cheaper and dare I say MUCH more effective to just gather up all the niggers in the big cities, slap an AK in their hands and send them at us.
pm300 on scored.co
2 months ago 2 points (+0 / -0 / +2 on mirror) 2 children
On armor: The point is that these things are improving, technology is improving. In 1-2 decades, it is feasible that they will be able to move with armor on. Even if their movement is slow, they can still be deadly.

On advanced detection: China is already using it.

On cost: As mentioned, hardware quality is increasing very, very quickly. With that, so is scalability. I think in the near future, we are safe, but you can’t be sure about that in the long run.

On why robot militants are much more dangerous than human militants: Robots will have no qualms with murdering you and your whole family. They lack a moral compass, more so than even niggers dare I say.

I can see why you want to be optimistic though, the other perspective is very grim.