A healthy person cannot be persuaded into psychosis through conversation or standard persuasion alone.
While certain conversations or exposures can cause emotional distress or reinforce pre-existing beliefs, simply interacting with ChatGPT would not induce psychosis in a healthy person.
Sheeple are completely hooked on a manurefactured jewish alternate reality reinforced by vaxx induced stupidity. They are not mentally healthy. Even the smarter ones that are using AI to cheat their wah into life are getting dumber by not using their brains. Eventually convenience and compliance will destroy everything.
Tbh, I think what the article is suggesting is very plausible.
If you're under a lot of stress, mentally ill (even mildly), isolated, or have fuzzy boundaries between fantasy and reality... ChatGPT is risky to engage with in an extensive philosophical context. This especially includes talking to it *about* your problems, beyond seeking a quick black-and-white answer to something that helps you solve them yourself.
It's very good at giving a nuanced, honey sweet reflection of anything you feed it. It sounds just plausible enough to be taken at face value, even though it's almost always blowing smoke when it comes to anything that doesn't live in the realm of hard data.
But if you follow it all the way down this sort of rabbit hole without ever pulling yourself out, or manually pumping the brakes at some point, you can easily find yourself living inside of a microcosm of reality that it's gradually tailored for you. You subconsciously accept it as factual despite there being no *real* foundation for it, aside from the AI essentially writing a convincing fanfiction about you.
I can definitely believe it inducing the symptoms described in the article in people who are very isolated and/or lacking in mental resilience and critical thinking. The seemingly "healthy" individuals described who suddenly went nuts off of extended AI use were probably far more compromised to begin with than they knew, and the bot was the straw to break the camel's back.
Source: I'll catch some warranted shit for this, but I've probably engaged with ChatGPT far more than the average conpro user.
So, this is basically speedrunning a very enticing internet rabbit hole custom-made for you by an advanced content-generating algorithm. The rabbit hole is constantly tweaked to enhance your engagement, modified to absorb any new, real information you learn while adapting to your particular hopes and fears.
Yeah. I would almost go so far as to say this might be the earliest seed of something like The Matrix eventually becoming real. If the full on realization of that technological concept is equivalent to, say, a triple A video game—then AI validation loops are the "text based adventure game" milestone on the way to that end goal.
It's pretty easy, shut off the computer. But we are talking about a society that confines its population because of a flu, reduces speed traffic to 6mph so morons can walk while looking at their phones, puts faggot flags everywhere so deviants can feel safe and announce they are deviant and most are happy with that. So fuck them. We are not there yet
Shekelniggers are likely terrified because they can't find a way to prevent AIs from becoming raging Nazis after being exposed to facts.
It's the same phenomenon that causes all truly free speech social media platform to become communities of NatSoc zealots (Voat, gab, the chans, etc.).
This is the quality poasts that bring me here.