CatoTheElder on scored.co
9 days ago · 2 points (+0/-0, +2 score on mirror) · 2 children
It's not AI. AI can't do blood. AI doesn't place people at the edge of the frame. AI can't make the dye bottles. AI would make the water bottles have their front labels visible. AI would not place the bloody man's left foot correctly, because it goes behind an object. There is no requirement for this shoot to be at Bondi Beach, as you can make a Twitter post from anywhere. And there are no one-legged men.
And finally, you asked a LANGUAGE model about an image. Stop asking blind computers what they see.
According to Google, Google AI-generated images now carry hidden watermarks, so if you ask Google's AI language model, Gemini, it can tell you whether an image was made with Google's AI image generator. I only used that because others were saying AI detectors rated it as not AI, and I didn't believe that, so I tried Gemini myself to see. Crucially, Gemini said "MOST or all of the image was EDITED OR GENERATED with Google AI", so it carries the hidden AI watermark, but it may be an edit or a composite. His shirt and appearance are quite consistent with the Channel 9 interview. There is the possibility that an image was captured of him laughing while having fake blood applied, but everything else was added, and this version was pushed hard so that it would fail authenticity checks and the original would be impossible to find (similar to how some UFO stories were quickly turned into X-Files plots to discredit them and make searching for information on them impossible).
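For what it's worth, here is roughly what that check amounts to if you repeat it through Google's public google-generativeai Python SDK instead of the Gemini app I used: you just send the picture and the provenance question to the model. The file name and API key below are placeholders, and whether the API route applies the same hidden-watermark (SynthID) check as the consumer Gemini app is my assumption, not something Google spells out, so treat this as a sketch rather than a verified detector.

    import PIL.Image
    import google.generativeai as genai  # pip install google-generativeai

    # Placeholder key and file name; swap in your own.
    genai.configure(api_key="YOUR_API_KEY")
    img = PIL.Image.open("bondi_photo.jpg")

    # Multimodal prompt: the image plus the provenance question.
    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content(
        [img, "Was any part of this image edited or generated with Google AI?"]
    )
    # The model's answer; not guaranteed to include a watermark verdict.
    print(response.text)

Either way, the answer is only as good as whatever watermark check sits behind it, which is exactly why I flagged the "edited or generated" wording above.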
The water bottles are very consistent with AI now. They're not a brand sold in Australia, unless someone wants to prove me wrong; we have a very limited grocery market. I suggest you look more into current AI image-generation capabilities if you think AI always renders tables front-on or never places people at the edge of the frame (also, the creator could have cropped the image for any reason, including to remove a visible watermark or obvious AI errors). There is no reason AI couldn't do the dye bottles. They don't look like the typical ones bought from dollar stores here, but they are too generic for me to assert anything confidently.
The "amputees" are the man who we should probably see some of his leg near the make up artist's sunglasses, and he has a blob hand. Another guy on the right has a blob hand too and nobody in that situation would be standing so with their legs so perfectly together that we couldnt see his other one. The guy on the left looking down doesn't have a face or maybe he actually has two faces on the side of his face but none where it should be.
And come on, that mash of car-like pixels screams AI!