• AdrianTheFrog@lemmy.world

        Well, current law wasn’t written with AI in mind, so what it says about the legality of AI doesn’t reflect its morality or how we should regulate it in the future.

        • TotallynotJessica@lemmy.world

          The law actually does a better job than you’d think. While it says little about stealing work to train neural networks, it does say that the final result requires significant human input to be eligible for copyright. It’s the same precedent that prevented copyright of the selfie taken by a monkey: non-human intelligences can’t own works, and AI art isn’t made by a human intelligence, so right now it’s all public domain. It can’t be stolen unless someone has put significantly more work on top of it.

          • AdrianTheFrog@lemmy.world

            I was talking more about whether the existence of an image AI, regardless of the images it generates, breaks copyright law because of how it was trained on copyrighted images.

    • udon@lemmy.world

      EFF does some good stuff elsewhere, but I don’t buy this. You can’t break the problem down into small steps, show that each step is fine when considered in isolation, and ignore the overall effect. Here’s a simple example from a different area to make the case (I came up with this in two minutes, so it’s not perfect, but you could flesh it out):

      Step 1: Writing an exploit is not a problem, because security researchers, for example, need to be able to do that.

      Step 2: Sending a database request is not a problem, because if we forbid it, the whole internet will break.

      Step 3: Receiving freely available data from a database is not a problem, because otherwise the internet will break.

      Conclusion: We can’t say that hacking into someone else’s database is a problem.

      What is especially telling about the “AI” “art” case: the major companies in the field are massively restrictive about copyright elsewhere, as long as it’s the product of their own valuable time (or stuff they bought). But if it’s someone else’s work, apparently their take on copyright doesn’t matter, because it’s freely available online, so “it’s their own fault for uploading it lol”.

      Another issue is the chilling effect: I for one have become more cautious about sharing some of my work on the internet, specifically because I don’t want it to be fed into “AI”s. I want to share it with other humans, but not with exploitative corporations. Do you know a way to achieve this goal (sharing with humans but not “AI”) on today’s internet? I don’t see a solution currently. So the EFF’s take on this prevents people (me) from freely sharing their work with everyone, which is otherwise something they would encourage and I would like to do.