I never understood how they were useful in the first place. But that's kind of beside the point. I assume this is referencing AI, but since you've only posted one photo out of apparently four, I don't really have any idea what you're posting about.
The point of verification photos is to ensure that nsfw subreddits are only posting with consent. Many posts were just random nudes someone found, in which the subject was not ok with having them posted.
The verification photos show an intention to upload to the sub. A former partner wanting to upload revenge porn would not have access to a verification photo. They often require the paper be crumpled to make it infeasible to photoshop.
If an AI can generate a photorealistic verification picture, it cannot be used to verify anything.
I didn't realize they originated with verifying nsfw content. I'd only ever seen them in otherwise text-based contexts. It seemed to me the person in the photo didn't necessarily represent the account owner just because they were holding up a piece of paper showing the username. But if you're matching the verification against other photos, that makes more sense.
On a side note, they are also used all the time for online selling and trading, as a means to verify that the seller is a real person who is in fact in possession of the object they wish to sell.
How does traditional photo verification - as in, before AI - know the image was not manipulated? In this post the paper is super flat, and I've seen many others like that.
I had some trouble figuring out what exactly was going on as well, but the Stable Diffusion subreddit gave away that it was at least AI related, as that's one of the popular AI programs. It wasn't until I saw the tag, though, that I really understood: Workflow Included. Meaning the person included the steps they used to create the photo in question. Which means the person in the photo was created with the AI program and is fake.
The implications of this sort of stuff are massive too. How long until people are using AI to generate incriminating evidence to get people arrested on false charges, or the opposite - creating false evidence to get away with murder?
Pretty sure it started because nsfw subreddit mods realized they could demand naked pictures of women that nobody else had access to, and it made their little mod become a big mod.
They were used extensively on 4chan, because they were the only way to prove that a person posting was in fact that person. And yes, it was mostly people posting nudes, but it was more that they wanted credit.
The reason it carried over to Reddit was that people were using the accounts to advertise Patreon and OnlyFans, and mods mostly wanted the people making money off the pictures to be the people who took those pictures.
Also it was useful for AMA posts and other threads where a celebrity was involved.
AI video still looks like fever dreams. The AI can't keep consistent details, especially in the background, from frame to frame. There are always parts that morph and look like they were conjured up by Van Gogh during a manic delirium. Maybe in a couple of years, and with some human grooming in the middle.
If somebody is going to go to all the trouble of fooling a human, they probably aren’t going to just start spamming random pictures on the community for an instant moderator ban.
It's gotten a lot better with teeth. Last I looked at that site they were very misaligned. It was very Uncanny Valley.
Edit: ok, this one's a bit whack:
Image: Close-up of a man's mouth. The teeth look 2D and continue endlessly in a straight row behind his lips; there is too little curvature to indicate they are connected to a jawbone.
A genAI-made image of a verification post. The point, I guess, is that with genAI photos anyone can easily make a fake verification post, making them less useful as a means to verify identity.
Even before Stable Diffusion or other publicly available AI generators, there was https://www.thispersondoesnotexist.com which generates a random photo of a human every time you reload the page.
Isn't there a trick where you can ask someone to do a specific hand gesture to get photos verified? That'll still work, especially because AI makes fingers look wonky.
AI has been able to do fingers for months now. It's moving very rapidly so it's hard to keep up. It doesn't do them perfectly 100% of the time, but that doesn't matter since you can just regenerate it until it gets it right.
You could probably just set up a time for the person to send a photo, and then give them a keyword to write on the paper, and they must send it within a very short time. Combine that with a weird gesture and it's going to be hard to get a convincing AI replica. Add another layer of difficulty and require photos from multiple angles doing the same things.
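A minimal sketch of that time-limited keyword flow, in Python. To be clear, this is illustration, not an existing system: the window length, wordlist, and function names are all made up.

```python
import secrets
import time

CHALLENGE_WINDOW_SECONDS = 120  # assumption: long enough to snap a photo, too short to fake one
WORDLIST = ["cauliflower", "paperclip", "marker", "whipped", "orange"]  # placeholder words

# pending challenges keyed by username: (keyword, issued_at)
pending: dict[str, tuple[str, float]] = {}

def issue_challenge(username: str) -> str:
    """Hand the user a random keyword to write on the paper, and start the clock."""
    # random suffix so the keyword can't be guessed or pre-generated
    keyword = f"{secrets.choice(WORDLIST)}-{secrets.token_hex(3)}"
    pending[username] = (keyword, time.time())
    return keyword

def verify_submission(username: str, keyword_in_photo: str) -> bool:
    """Accept only if the keyword matches and the photo arrived inside the window.

    Reading the keyword off the paper is still a human mod's (or OCR's) job;
    the tight deadline is what makes pre-generating an AI image impractical.
    """
    entry = pending.pop(username, None)
    if entry is None:
        return False
    keyword, issued_at = entry
    on_time = (time.time() - issued_at) <= CHALLENGE_WINDOW_SECONDS
    return on_time and keyword_in_photo == keyword
```

The deadline matters more than the keyword itself: producing a convincing image with the right text, gesture, and angles within a couple of minutes is a much taller order than doing it offline.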
"For your verification please close left eye and run two fingers through your hair while eating a cauliflower with whipped cream. Attach a paperclip to your left ear and write your username on your forehead using an orange marker."
Some AI models have already nailed the fingers, so this won't do anything. We need something that we can verify without having to trust the other person. I hate to say it, but the blockchain might be one of the best ways to authenticate users to avoid bots.
Blockchains aren't exactly the best at proof of personhood. Usually all they can do is make masquerading as multiple people (a Sybil attack) more expensive.
That's not to say interesting approaches haven't come out of blockchain-adjacent work, like https://passport.gitcoin.co/.
My Discord friends had some easy ways to defeat this.
You could require multiple photos; it's pretty hard to get AI to consistently generate photos that are 100% perfect. There are bound to be things wrong when trying to get AI to generate multiple photos of the same (non-celeb) person, which would make it obvious it's fake - one way to automate that check is sketched below.
Another idea was to make it a short video instead of a still photo. For now, at least, AI absolutely sucks balls at making video.
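For the multiple-photos idea, a rough way to automate the consistency check is to embed the face in each photo and see whether the embeddings agree. This sketch leans on the face_recognition package; the 0.6 threshold is that library's conventional default, and the function name is my own.

```python
# pip install face_recognition  (wraps dlib's 128-d face embeddings)
import itertools
import face_recognition

def photos_look_like_same_person(paths: list[str], threshold: float = 0.6) -> bool:
    """Embed the face in each photo and require every pair to be within the threshold.

    A generator asked for several shots of the same (non-celeb) person tends to
    drift between images, so inconsistent embeddings are a red flag for a fake.
    """
    encodings = []
    for path in paths:
        image = face_recognition.load_image_file(path)
        faces = face_recognition.face_encodings(image)
        if len(faces) != 1:  # zero or multiple faces: can't verify anything
            return False
        encodings.append(faces[0])
    # every pair of photos must agree
    return all(
        face_recognition.face_distance([a], b)[0] <= threshold
        for a, b in itertools.combinations(encodings, 2)
    )
```

This cuts both ways, of course: it only flags fakes that drift, and newer models drift less.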
What do you see? I'm not super observant, but she looks completely normal to me. I'm not sure what the black box is behind her, but maybe it's a fridge...
I'm confused. How would that help? The whole point of a verification post is that the username in the image matches the username posting the image. If you're just talking about Photoshop, then let's be clear about that. Otherwise, taking photos off social media is no different than someone just Photoshopping any other verification image, even of themselves.
Freud's mom is unlikely to raise any boner now. I'm a man of the widest standards when it comes to that, but even I don't find a corpse that's been rotting for a century to be fap material. Even in lingerie.
Maybe it should just be a default policy to purposely alter the people in NSFW content that’s not associated with a professional who actively claims credit for that content. It’s all the same for the NSFW punters if the people in the material are unknown, but at least this way nonconsensual content is altered to protect someone’s privacy.
The idea of using a picture upload for automated verification is completely unviable. A much more commonly used system is something like telling you to perform a random gesture on camera on the spot, like "turn your head slowly" or "open your mouth slowly", which would be trivial for a human to perform but near impossible for AI generators.
But then how will I astroturf (I mean, organically market) my current and future movies, like Golden Globe-winning summer blockbuster Barbie, now available on Blu-ray and select streaming services, here if I get verified?
I don't know if I just have really good eyes for a 38-year-old, but I can tell at first glance, within seconds, that this photo is AI generated. It's all about the lack of humanity in the subject's eyes.
Or, and this is just a long shot, maybe you viewed the photo knowing it was AI generated and then worked backwards to create your own internal justification as to why you're uniquely gifted at detecting "humanity" in the eyes of webcam selfie photos.
There are still legitimate tells, but they're not obvious if you're not aware of the mistakes these models are prone to making. There are fewer mistakes left each generation, though.
I would definitely be deceived by this picture. I would not be able to tell at first glance, but I get what you mean about the lack of humanity around the eyes and ears.
It's far more than her eyes; she is bilaterally asymmetrical. With real people you can generally take a reflection of one side and it will look fairly close to the other. This woman has so much asymmetry it is off-putting. Her eyes are different heights and shapes, her cheekbones are different, the outer parts of her nostrils are at different heights, the sides of her lips are shaped differently, her jawlines are different, and her suprasternal notch (the divot at the base of the neck) is WILDLY different. The easiest thing to spot is her different skin tones. At first you'll want to chalk it up to shading, but the light source isn't to her side; it's in front and to the upper right, which doesn't allow for such a radical change if you look at her forehead.
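That mirror test is easy to rough out in code. Here's a toy version with OpenCV and NumPy; it assumes a roughly centered face crop, and the score is only a hint, since lighting alone can inflate it.

```python
import cv2
import numpy as np

def asymmetry_score(face_crop_path: str) -> float:
    """Compare the left half of a centered face crop to the mirrored right half.

    Returns the mean absolute pixel difference: higher means more asymmetric.
    Lighting and pose inflate the score too, so treat it as a hint, not a detector.
    """
    img = cv2.imread(face_crop_path, cv2.IMREAD_GRAYSCALE)
    _, w = img.shape
    half = w // 2
    left = img[:, :half].astype(np.float32)
    # flip the right half horizontally so it lines up with the left half
    right_mirrored = cv2.flip(img[:, w - half:], 1).astype(np.float32)
    return float(np.mean(np.abs(left - right_mirrored)))
```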
Have you never played with that TikTok filter that creates symmetrical faces? It's fun and surprising. We're all rather asymmetrical, and she doesn't look artificially so to me.
AI notes: make face and body images more symmetrical, but not 100%. Got it.
The only reason it hasn't done that yet is that it isn't really AI, but a large probabilistic model with training feedback, and so far the feedback has enforced the "close enough" aspect. The next versions will cross the lines that still let us sense something isn't quite right.