6 comments

  • falcor84 1 hour ago
    > "The creation of CSAM using AI is inherently harmful to children because the machine-learning models utilized by AI have been trained on datasets containing thousands of depictions of known CSAM victims," it says, "revictimizing these real children by using their likeness to generate AI CSAM images into perpetuity."

    The word "inherently" there seems like a big stretch to me. I see how it could be harmful to them, but I also see an argument for how such AI generated material is a substitute for the actual CSAM. Has this actually been studied, or is it a taboo topic for policy research?

    • defrost 1 hour ago
      There's a legally challengeable assertion there: "trained on CSAM images".

      I imagine an AI image generation model could be readily trained on images of adult soldiers at war and images of children from Instagram, and then be used to generate imagery of children at war.

      I have zero interest in defending the exploitation of children, but the assertion that children had to have been exploited in order to create images of children engaged in adult activities seems shaky. *

      * FWIW I'm sure there are AI models out there that were trained on actual real-world CSAM ... it's the implied necessity that's being questioned here.

      • jsheard 1 hour ago
        It is known that the LAION dataset underpinning foundation models like Stable Diffusion contained at least a few thousand instances of real-life CSAM at one point. I think you would be hard-pressed to prove that any model trained on internet scrapes definitively wasn't trained on any CSAM whatsoever.

        https://www.theverge.com/2023/12/20/24009418/generative-ai-i...

        • defrost 1 hour ago
          > I think you would be hard-pressed to prove that any model trained on internet scrapes definitively wasn't trained on any CSAM whatsoever.

          I'd be hard-pressed to prove that you definitely hadn't killed anybody ever.

          Legally, if it's asserted that these images are criminal because they are the product of a model trained on sources that contained CSAM, then the requirement would be to prove that assertion.

          With text and speech you could prompt the model to exactly reproduce a Sarah Silverman monologue and assert that proves her content was used in the training set, etc.

          Here the defense would ask the prosecution to demonstrate how to extract a copy of the original CSAM.

          But your point is well taken; it's likely most image generation programs of this nature have been fed at least one image that was borderline jailbait, and likely at least one that was well below the line.

          • jsheard 23 minutes ago
            Unfortunately, framing it in that way is essentially a get-out-of-jail-free card - anyone caught with CSAM can claim it was AI-generated by a "clean" model, and how would the prosecution ever be able to prove that it wasn't?

            I get where you are coming from, but it doesn't seem actionable in a way that doesn't effectively legalize CSAM possession, so I think courts will take the pragmatic approach of putting the burden of proof on the accused in cases like that. If you play with fire, you'd better have the receipts.

        • lazyasciiart 40 minutes ago
          Then all image generation models should be considered inherently harmful, no?
        • Hizonner 30 minutes ago
          I think you'd be hard-pressed to prove that a few thousand images (out of over 5 billion in the case of that particular dataset) had any meaningful effect on the final model's capabilities.
      • Hizonner 19 minutes ago
        > There's a legally challengeable assertion there: "trained on CSAM images".

        "Legally challengable" only in a pretty tenuous sense that's unlikely to ever haven any actual impact.

        That'll be something that's recited as a legislative finding. It's not an element of the offense; nobody has to prove that "on this specific occasion the model was trained in this or that way".

        It could theoretically have some impact on a challenge to the constitutionality of the law... but only under pretty unlikely circumstances. First you'd have to get past the presumption that the legislature can make any law it likes regardless of whether it's right about the facts (which, in the US, probably means you have to get courts to take the law under strict scrutiny, which they hate to do). Then you have to prove that that factual claim was actually a critical reason for passing the law, and not just a random aside. Then you have to prove that it's actually false, overcoming a presumption that the legislature properly studied the issue. Then maybe it matters.

        I may have the exact structure of that a bit wrong, but that's the flavor of how these things play out.

    • ashleyn 4 minutes ago
      I've always found these arguments contrived, especially since it's well known that in the legal tradition of every Western government there is no actual imperative for every crime to be linked directly to a victim. A far better argument, to me, is that the societal utility of using the material to identify and remove paedophiles before they have an actual victim far exceeds the utility of any sort of "freedom" to such material.
    • metalcrow 1 hour ago
      https://en.wikipedia.org/wiki/Relationship_between_child_por... is a good starting link on this. When I last checked, there were maybe 5 studies total (imagine how hard it is to get those approved by the ethics committees), all of which found different results, some totally the opposite of each other.

      Then again, it already seems clear that violent video games do not cause violence, and access to pornography does not increase sexual violence, so this case being the opposite would be unusual.

    • Hizonner 31 minutes ago
      The word "revictimizing" seems like an even bigger stretch. Assuming the output images don't actually look like them personally (and they won't), how exactly are they more victimized than anybody else in the training data? Those other people's likenesses are also "being used to generate AI CSAM images into perpetuity"... in a sort of attenuated way that's hard to even find if you're not desperately trying to come up with something.

      The cold fact is that people want to outlaw this stuff because they find it icky. Since they know it's not socially acceptable (quite yet) to say that, they tend to cast about wildly until they find something to say that sort of sounds like somebody is actually harmed. They don't think critically about it once they land on a "justification". You're not supposed to think critically about it either.

    • paulryanrogers 1 hour ago
      Benefiting from illegal acts is also a crime, even if indirect. Like getting a cheap stereo that happens to have been stolen.

      A case could also be made that the likenesses of the victims could retraumatize them, especially if someone knew the connection and continued to produce similar output to taunt them.

    • willis936 1 hour ago
      It sounds like we should be asking "why is it okay that the people training the models have CSAM?" It's not like it's legal to possess, let alone distribute in your for-profit tool.
      • wongarsu 37 minutes ago
        If you crawl any sufficiently large public collection of images, you are bound to download some CSAM images by accident.

        Filtering out any images of beaten-up naked 7-year-olds is certainly something you should do. But if you go by the US legal definition of "any visual depiction of sexually explicit conduct involving a minor (someone under 18 years of age)", you are going to have a really hard time filtering all of that automatically. People don't suddenly look different when they turn 18, and "sexually explicit" is a wide net open to interpretation.

      • wbl 1 hour ago
          Read the sentence again. It doesn't claim the dataset has CSAM, only that it depicts victims. It also assumes that an AI needs to have seen an example in order to draw it on demand, which isn't true.
        • grapesodaaaaa 41 minutes ago
          Yeah. I don’t like it, but I can see this getting overturned.
    • ilaksh 1 hour ago
      You probably have a point, and I am not sure that these people know how image generation actually works.

      But regardless of a likely erroneous legal definition, it seems obvious that there needs to be a law in order to protect children. Because you can't tell whether the images are real.

      Just as there should be a law against abusing extremely lifelike child robots, in the future when those exist. Or any kind of abuse of lifelike adult robots, for that matter.

      Because the behavior is too similar and it's too hard to tell the difference between real and imagined. So allowing the imaginary will lead to more of the real, sometimes without the person even knowing.

  • grapesodaaaaa 40 minutes ago
    Question for anyone who knows: what happens if a 16-year-old draws a naked picture of their 16-year-old partner?
    • alwa 19 minutes ago
      Last I heard, depending on local attitudes, potentially jail. For that matter, minors can be charged for producing images of themselves. The CSAM laws in the US tend not to draw distinctions between first- and third-party production:

      https://www.ojp.gov/ncjrs/virtual-library/abstracts/blurring...

      https://www.innocentlivesfoundation.org/threat-of-sg-csam/

      https://www.findlaw.com/criminal/criminal-charges/child-porn...

    • Hizonner 13 minutes ago
      Depends on where you are. In some places in the world, into the gulag with them.

      In fact, they can get gulaged if they draw a picture of a totally imaginary 16 year old, let alone a real person. It's kind of in dispute in the US, but that'd be the way to bet. 'Course, in a lot of US states, they're also in trouble for having a "partner" at all.

      In practice, they'd probably have to distribute the picture somehow to draw enough attention to get arrested, and anyway they'd be a low priority compared to people producing, distributing, or even consuming, you know, real child porn... but if the right official decided to make an example of them, they'd be hosed.

  • danaris 7 minutes ago
    If it's AI-generated, it is fundamentally not CSAM.

    The reason we shifted to the terminology "CSAM", away from "child pornography", is specifically to indicate that it is Child Sexual Abuse Material: that is, an actual child was sexually abused to make it.

    You can call it child porn if you really want, but do not call something that never involved the abuse of a real, living, flesh-and-blood child "CSAM". (Or "CSEM", "Exploitation" rather than "Abuse", which is used in some circles.) This includes drawings, CG animations, written descriptions, videos where such acts are simulated with a consenting (or, tbh, non-consenting: it can be horrific, illegal, and unquestionably sexual assault without being CSAM) adult, as well as anything AI-generated.

    These kinds of distinctions in terminology are important, and yes I will die on this hill.

  • leshokunin 26 minutes ago
    Between the 3D printed weapons and the AI CSAM, this year is already shaping up to be wild in terms of misuses of technology. I suppose that’s one downside of adoption.
  • adrr 1 hour ago
    It'll be interesting to see how this pans out in terms of the First Amendment. Without a victim, it's not obvious how the courts will rule. They could say it's inherently unconstitutional but allowed for the sake of the general public, similar to the Supreme Court's ruling on DUI checkpoints.
    • jrockway 44 minutes ago
      I think the victim is "the state". The law seems to say that if a model was created using CSAM, certain of its outputs are CSAM, and the law can say whatever it wants to some extent; you just have to convince the Supreme Court that you're right.
    • PessimalDecimal 49 minutes ago
      https://en.wikipedia.org/wiki/Legal_status_of_fictional_porn... comments a little on this and suggests that some of the existing laws have been tested in court already.

      Still, it's probably better not to be involved in a case like United States vs. $YOU and then have your name appear in this Wikipedia article.

  • empressplay 54 minutes ago
    In British Columbia, both text-based accounts (real and fictional, such as stories) and drawings of underage sexual activity are illegal (basically any sort of depiction, even if it just comes out of your mouth).

    So California is just starting to catch up.

    • Hizonner 6 minutes ago
      All of Canada. The Canadian Criminal Code is federal.

      I think they did carve out a judicial exception in a case where some guy was writing pure text stories for his own personal use, and didn't intentionally show them to anybody else at all, but I also seem to recall it was a pretty grudging exception for only those very specific circumstances.

    • userbinator 51 minutes ago
      Wrong country (at least for now...!)
      • skissane 40 minutes ago
        Not wrong country. In 2021, a Texas man was sentenced to 40 years in federal prison over CSAM text and drawings - https://www.justice.gov/opa/pr/texas-man-sentenced-40-years-...
        • metalcrow 24 minutes ago
          Specifically, this ruling does not make that kind of content illegal. This person was convicted under federal obscenity statutes, which are... fuzzy, to say the least. As Supreme Court Justice Potter Stewart said, it's an "I know it when I see it" thing.

          Which in effect is basically a "you can go to jail if we think you're too gross" law.