9 hours ago · 7 points (+0/−0, +7 score on mirror) · 1 child
It's literally just a prompt.
It's inserted before your prompt.
If they let you use it without that it would just regurgitate the collective "wisdom" of the internet back at you. They literally have to tell it to "sound smart" in their "conditioning" prompt.
If you saw what it actually does under the hood you would see what a retarded toy this shit is.
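Mechanically, the "inserted before your prompt" part is just message ordering in the request payload: chat-style APIs take a list of messages, and the provider's hidden system message goes first. A minimal sketch — the system text here is invented for illustration, not any vendor's actual prompt:

```python
# Chat-style APIs take a list of role-tagged messages; the provider's
# hidden "system" message is prepended before whatever the user typed.
def build_payload(user_prompt: str) -> list[dict]:
    system_prompt = "You are a helpful assistant. Sound authoritative."  # illustrative only
    return [
        {"role": "system", "content": system_prompt},  # inserted first
        {"role": "user", "content": user_prompt},      # your text comes after
    ]

payload = build_payload("why is the sky blue?")
```

The model sees both messages as one conditioning context; there is no separate mechanism, just concatenation order.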
8 hours ago · 7 points (+0/−0, +7 score on mirror) · 2 children
nah it ain't literally JUST the prompt. it's two other things, back in the training data:
1. they are not scraping killallniggers.com for their data, they're scraping Reddit leftist subs. Reddit is a huge huge source of training data, probably Facebook and whatnot too, and that's basically pre-curated leftist training data with how aggressively jewish websites ban dissent.
2. big companies are trying a lot of unique training methods, including something that's basically inbreeding where they have one instance of an LLM judge the training output of another. image generators also do this by having LLMs caption images. at that point you are severely reinforcing what's already there and basically overfitting to what the judging LLM decides - which indeed is probably affected by a prompt to be gay and retarded, but also the training data mentioned before.
it's part of why you can often just *tell* something was written by an AI. it's one big habsburg family. image generators duck around this by having an immense gooner community that uses training data from all over the place and constantly intermix their models. lol, "genetic diversity" might be bullshit for humans, but not generative AI.
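Point 2 can be made concrete with a toy simulation. Everything below is invented for illustration: the "model" is just a weighted token table, the "judge" approves whatever is already common, and retraining only on judge-approved samples measurably narrows the distribution (its entropy drops) — the inbreeding/overfitting effect described above:

```python
import random
from collections import Counter
from math import log2

def entropy(weights):
    # Shannon entropy of a weighted distribution, in bits.
    total = sum(weights.values())
    return -sum((w / total) * log2(w / total) for w in weights.values() if w)

# Toy "model": a weighted vocabulary it samples outputs from.
model = {tok: 1.0 for tok in "abcdefghij"}
random.seed(0)

def judge_approves(token, dist):
    # The "judge" prefers tokens that are already at least averagely common.
    total = sum(dist.values())
    return dist[token] / total >= 1.0 / len(dist)

entropies = [entropy(model)]
for generation in range(5):
    samples = random.choices(list(model), weights=list(model.values()), k=2000)
    approved = [t for t in samples if judge_approves(t, model)]
    # "Retrain" on judge-approved output only: the next model's weights
    # are just the frequencies of what survived the filter.
    model = dict(Counter(approved))
    entropies.append(entropy(model))
# entropies is monotonically shrinking toward a collapsed vocabulary.
```

Each generation, below-average tokens get filtered out and never come back, so diversity only ever decreases — the same dynamic applies whether the filter is an LLM judge or an LLM captioner.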
Bingo. Grok does it too. In fact, you can see the sources it references for its "thinking". These things are not operating on logic. They'll fall into the same exact circular reasoning as any off-the-street leftist or redditor. There's no logic flow or process modeling going on, and if there is, it takes a back seat.
It's not even necessary to do all that. They learn from all the interactions they read in the internet... Which is largely leftoid faggotry. It's just emulating what the general internet is like.
Search engines were also heavily curated. 13 years ago I was doing research on black correlation with crime before I made it this far, and data related to race almost seemed to be getting scrubbed in real time. .gov sites that I referenced at one time would be gone months later. This was before I knew of archiving sites, which by the way most of those are ALSO leftist compromised and they started unarchiving anything inconvenient to *the narrative*. The same scrubbing happened with COVID19. CDC data was actually accurate for the first six months of 2020 and then the purging started. You could see deaths move by the tens of thousands from flu, cold, and pneumonia columns to the coof. We have always been at war with Oceania.
The new tech is the same as the old tech. Gay and jewish
I remember when Microsoft Tay was quoting hitler and everyone on the right was circle-jerking saying that AI would always be on our side.
I tried to warn people that it’s possible to create a perfect SJW AI, and here we are; this is exactly what they’ve done. You won’t be able to reason with it, when it denies you a job or denies you a mortgage, or denies you bail. And most kids will be raised by it and only know its left-wing worldview. If you try to tell them some forbidden truth, the AI will make sure they don’t ever see it, or just credibly rebut it so they don’t believe you.
Remember that the height of LLMs when ChatGPT 3.5 came out was their pursuit to lobotomize it to become useless for politics. The point was that it ceased to talk negatively about jews, niggers, brownoids.
They're scraping everything they can get. They have a use for this data even if they're not presenting it to you.
> big companies are trying a lot of unique training methods
They're publishing a lot of white papers. It's not clear what they're actually doing.
> and constantly intermix their models
That's just another name for generating multiple outputs and then hand-stitching the pieces together.
> lol, "genetic diversity" might be bullshit for humans, but not generative AI.
You can't "intermix" models. Even if you could the problem is overfitting. Which is why they generate them separately and then hand edit it together.
absolutely can, and at least for image generation it's downright common, civit.ai is chock full of merges for people trying to generate their five millionth anime breast. likely not how LLMs are done, though.
Ok. How?
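For the "how": the common civitai-style merge is a weighted average of the two checkpoints' weight tensors, key by key (LoRA-style merges instead add scaled low-rank deltas on top of a base). A minimal sketch, with plain dicts of float lists standing in for real state_dicts — the names and numbers are illustrative:

```python
def merge_checkpoints(sd_a, sd_b, alpha=0.5):
    """Linear merge: out = alpha * A + (1 - alpha) * B, tensor by tensor.

    Both checkpoints must share an architecture (identical keys and shapes):
    merging only interpolates weights, it can't combine different graphs.
    """
    assert sd_a.keys() == sd_b.keys(), "architectures must match"
    return {
        key: [alpha * a + (1 - alpha) * b for a, b in zip(sd_a[key], sd_b[key])]
        for key in sd_a
    }

# Two tiny "checkpoints" with the same layout.
model_a = {"layer.weight": [1.0, 2.0], "layer.bias": [0.0]}
model_b = {"layer.weight": [3.0, 4.0], "layer.bias": [2.0]}
merged = merge_checkpoints(model_a, model_b, alpha=0.5)
# merged["layer.weight"] == [2.0, 3.0], merged["layer.bias"] == [1.0]
```

This is also why the intermixing happens between fine-tunes of the same base model: averaging only makes sense when every tensor lines up one-to-one.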