Europol has supported authorities from 19 countries in a large-scale hit against child sexual exploitation that has led to 25 arrests worldwide. The suspects were part of a criminal group whose members were engaged in the distribution of images of minors fully generated by artificial intelligence (A...
Not going to read the article, but I will say that I understand making hyper-realistic fictional CP illegal, because allowing it would make limiting actual CP impossible.
As long as it's clearly fictional though, let people get off to whatever imaginary stuff they want to. We might find it disgusting, but there are plenty of sexual genres that most people would find disgusting yet shouldn't be illegal.
That's exactly how they work. According to many articles I've seen in the past, one of the most common models used for this purpose is Stable Diffusion. For all we know, this model was never fed with any CSAM materials, but it seems to be good enough for people to get off - which is exactly what matters.
If you train a model on 1,000,000 images of dogs and 1,000,000 images of cats, your output isn't going to be a 50/50 split of purely dogs and purely cats; it's going to be (on average) somewhere between a cat and a dog. At no point did you have to feed in pictures of dog-cat hybrids to end up with that model.
You could probably make some semi-realistic drawings and feed those in, and then re-train the model with those same images over and over until the model is biased to use the child-like properties of the drawings but the realism of the adult pictures. You could also feed the most CP-looking images from a partially trained model as the training data of another model, which over time would make the outputs approach the desired result.
It doesn't matter if it's accurate or not as long as pedos can get off to it, so just keep going until they can. According to our definition of what a pedophile is, though, it would likely be accurate.
As with most of modern AI, it's able to train without much human intervention.
My point is, even if the results aren't perfectly accurate and don't quite resemble a child's body, they work. They are widely used, in fact so widely that Europol made a giant issue out of it. People get off to whatever it manages to produce, and that's what matters.
I do not care about how accurate it is, because it's not me who consumes this content. I care about how effective it is at curbing worse desires in pedophiles, because I care about the safety of children.
i'm not, no. but i'm also well-enough versed in stable diffusion and loras that i know that even a model with no training on a particular topic can be made to produce it with enough tweaking, and if the results are bad you can plug in an extra model trained on at minimum 10-50 images to significantly improve them.
That said, there's a decent chance that existing models use real images, and that is what we should be fighting against. The user of a model has plausible deniability because there's a good chance they don't understand how these models work, but the creators of the model should absolutely know where they're getting the source data from.
Prove that the models use illegal material and go after the model creators for that, because that's an actual crime. Don't go after people using the models who are providing alternatives to abusive material.
I think all of them are unethical, and yes, any service offering them should be shut down.
I never said prosecute the users.
I said you can't make it ethically, because at some point someone is using or creating original art, and the odds of human exploitation somewhere in the chain are just too high.
the odds of human exploitation somewhere in the chain are just too high
We don't punish people based on odds. At least in the US, the standard is that they're guilty "beyond a reasonable doubt." As in, there's virtually no possibility that they didn't commit the crime. If there's a 90% chance someone is guilty, but a 10% chance they're completely innocent, most would agree that there's reasonable doubt, so they shouldn't be convicted.
If you can't prove that they made it unethically, and there are methods to make it ethically, then you have reasonable doubt. All the defense needs to do is demonstrate one such method of producing it ethically, and that creates reasonable doubt.
Services should only be shut down if they're doing something illegal. Prove that the images were generated using CSAM as source material, then shut down any service that refuses to remove them or that can be proven "beyond a reasonable doubt" to have known it was committing a crime. That's how the law works: you only punish people you can prove "beyond a reasonable doubt" were committing a crime.
Let's say you manually edit a bunch of legal pictures and feed that into a model to generate new images. Or maybe you pull some legal images from other regions (e.g. topless children), and label some young-looking adults as children for the rest.
I don't know; I'm not an expert. But just because I don't know of something doesn't mean it doesn't exist; it means I need to consult experts.
It can’t.
Then prove it. That's how things are done in courts of law. Each side provides experts to try to convince the judge/jury that something did or did not happen.
My point is merely that an image that looks like CSAM is only CSAM if it actually involves the abuse of a child. It's not CSAM if it's generated some other way, such as by hand-drawing (e.g. hentai) or by a model that doesn't use CSAM in its training data.
You can't prove a negative. That's not how proving things works.
You also assume legal images. But that puts limits on what's actually legal globally. What if someone wants a 5 year old? How are there legal photos of that?
You can show how existing solutions work and demonstrate that the solution used works like those other solutions. That takes a lot more work than "see, it looks like a child therefore it's CSAM," but it's necessary to protect innocent people.
You assume it can; prove that it can.
That's guilty until proven innocent. There's a reason courts operate on the assumption of innocence and force the prosecution to prove guilt. I am not interested in reversing that.
You'd better believe that when the cops come knocking, the burden of proving you acted ethically is wholly on you.
All existing solutions are based on real-life images. There's no ethical way to acquire thousands upon thousands of images of naked children to produce anything resembling the real thing.
When the cops come knocking, your best bet is to comply under duress (be clear that it's under duress). Fighting the police will just add more charges, the right place to fight is in the courts. If your country's justice system is corrupt, then I guess you might as well fight the police, but in most developed countries, the courts are much more reasonable than the police.
how can it be done ethically?
The burden of proof is on showing that it was done unethically, not that it was done ethically. Force the prosecution to actually do their job, don't just assume someone is guilty because the thing they made looks illegal.