I was speaking to a friend from undergrad who’s a PhD student at Stanford working on computer vision. I was amazed by the amount of progress purportedly being made in the field, and in AI more broadly.
He recommended I watch the demo of Tesla’s Optimus robot, here it is:
https://youtu.be/cpraXaw7dyc?si=2rAvQkhwTFWkrNW1
These robots have been getting better and better, at an accelerating rate in recent years.
Hardware capability, long a significant obstacle in the way of AI progress, is almost doubling yearly. With quantum computing on the horizon, I expect that obstacle to disappear completely.
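To put that claim in perspective, here's the compounding math. The yearly-doubling rate is the claim above, not an established figure; this just shows what it would imply:

```python
# If hardware capability roughly doubled each year (the claim above, not
# an established figure), the growth factor after n years would be 2**n.
for years in (1, 5, 10, 20):
    print(years, 2 ** years)
# A decade of yearly doubling is a ~1000x factor (2**10 == 1024).
```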
Once robots like Tesla’s Optimus become more advanced, how long will it be until they coat them in bulletproof armor, arm them, and have them come after us? I used to dismiss ideas like this as science fiction, but now it seems like it’ll be reality within the next decade or two.
If so, is there any chance of turning things around and revolting? I don’t think so. If we let it get to that point, the best we’d be able to do is hope for a miracle.
We're no closer to AI today than we were in 1980.
I think it will actually prove to be a boon: it will make finding the truth in digital spaces impossible. Soon no one will be able to tell fact from fiction on the internet, and everyone will be forced to go back to real life.
It's not a complex algorithm. It's statistics, built on top of a staggeringly large dataset. It essentially plays back the thinking people used when writing the terabytes of text that were turned into the hundreds of billions of rules, like "if word A appears after word B, then word C is more likely to follow," that an LLM consists of. Deep-learning models in other domains have similar kinds of rules, just over tokens that aren't word fragments.
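The "word C is more likely to follow" idea can be sketched as a toy bigram model. This is a drastic simplification of an LLM (which learns weights over long contexts, not literal counts), but the statistical flavor is the same:

```python
import random
from collections import defaultdict

# Toy next-token statistics: count which word follows which in a corpus,
# then sample continuations in proportion to those counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    words = list(followers)
    weights = [followers[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # one of: cat, mat, fish
```

An LLM's "rules" are billions of learned parameters rather than a lookup table, and its context is far longer than one word, but generation is still this: pick a likely next token, append, repeat.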
It's not intelligent, doesn't think, and doesn't learn or have any form of memory. It gives the illusion of a conversation by being fed all your previous questions and its replies every time you write a new question.
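That "illusion of memory" mechanism can be sketched in a few lines. `call_model` here is a hypothetical stand-in for a real LLM API call; the point is that all the "memory" lives in the client, which replays the whole transcript on every turn:

```python
# The model itself is stateless: the client resends the entire transcript
# with every new question. `call_model` is a hypothetical placeholder.
def call_model(prompt):
    # A real model would generate a reply from the prompt text.
    return f"(reply to a prompt of {len(prompt)} characters)"

history = []  # the only "memory" lives here, outside the model

def ask(question):
    history.append(f"User: {question}")
    prompt = "\n".join(history)   # every prior turn is replayed
    reply = call_model(prompt)
    history.append(f"Assistant: {reply}")
    return reply

ask("What is a transformer?")
ask("And how big are they?")      # context comes only from `history`
print(len(history))  # 4 entries: two questions, two replies
```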
Even though it isn't thinking, it's useful enough to solve a large number of real-world problems.
I don't exactly see how that's cause for concern.
Sam Hyde made a point about the robot ascendancy: it will never happen, because the second a robot or self-driving car accidentally kills someone, the whole thing gets shut down.
If it's the advanced detection tech you're worried about, remember that a few months ago a couple of Marines fooled a DARPA detection AI by hiding under a cardboard box, Solid Snake style.
And then there's the most important reason I think this isn't a problem: cost. Sending out an army of militarized robots would be a monumental expense, from research and development just to make sure these things work at all, to actually building them, to keeping them working in field conditions: mud, rain, long periods of sun exposure, extreme temperatures and terrain. Fielding an army of these things would be a financial and logistical nightmare.
As much as I hate to say it, it would be cheaper and dare I say MUCH more effective to just hand people AKs and send them at us.
On advanced detection: China is already using it.
On cost: As mentioned, hardware quality is improving very, very quickly, and with it scalability. In the near future I think we're safe, but you can't be sure of that in the long run.
On why robot militants are much more dangerous than human militants: robots will have no qualms about murdering you and your whole family. They lack a moral compass entirely.
I can see why you'd want to be optimistic, though; the other perspective is very grim.