Those claiming AI training on copyrighted works is “theft” misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they’re extracting general patterns and concepts - the “Bob Dylan-ness” or “Hemingway-ness” - not copying specific text or images.

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in “vector space”. When generating new content, the AI isn’t recreating copyrighted works, but producing new expressions inspired by the concepts it’s learned.
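As a loose illustration of the “vector space” claim (a toy sketch, not how any production model actually works), a bag-of-words “embedding” keeps only aggregate word statistics and discards the original wording and order:

```python
from collections import Counter

# Toy "embedding": reduce a text to word-frequency statistics.
# The exact wording and word order are discarded; only aggregate
# patterns survive, loosely analogous to the claim above.
def embed(text, vocabulary):
    counts = Counter(text.lower().split())
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in vocabulary]

vocab = ["the", "sea", "old", "man"]
v = embed("The old man and the sea", vocab)
print(v)  # a list of frequencies, not the sentence itself
```

The vector can be compared to other texts’ vectors, but the original sentence cannot be reconstructed from it; whether real model weights are comparably lossy is exactly what the thread below disputes.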

This is fundamentally different from copying a book or song. It’s more like the long-standing artistic tradition of being influenced by others’ work. The law has always recognized that ideas themselves can’t be owned - only particular expressions of them.

Moreover, there’s precedent for this kind of use being considered “transformative” and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled to be fair use in Authors Guild v. Google (2015) despite protests from authors and publishers. AI training is arguably even more transformative.

While it’s understandable that creators feel uneasy about this new technology, labeling it “theft” is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn’t make the current use of copyrighted works for AI training illegal or unethical.

For those interested, this argument is nicely laid out by Damien Riehl in FLOSS Weekly episode 744. https://twit.tv/shows/floss-weekly/episodes/744

  • EldritchFeminity@lemmy.blahaj.zone · 13 days ago

    The argument that these models learn in a way that’s similar to how humans do is absolutely false, and the idea that they discard their training data and produce new content is demonstrably incorrect. These models can and do regurgitate their training data, including copyrighted characters.

    And these things don’t learn styles, techniques, or concepts. They effectively learn statistical averages and patterns and collage them together. I’ve gotten to the point where I can guess which model of image generator was used based on the same mistakes each one repeats every time.

    Take a look at any generated image and you won’t be able to identify where a light source is, because the shadows come from all different directions. These things don’t understand the concept of a shadow or lighting; they just know that, statistically, lighter pixels are followed by darker pixels of the same hue and that some places have collections of lighter pixels.

    I recently heard about an AI that scientists had trained to identify pictures of wolves, which was working with incredible accuracy. When they went in to figure out how it was distinguishing wolves from dogs like huskies so well, they found that it wasn’t looking at the wolves at all. 100% of the wolf images in its training data had snowy backgrounds, so it was simply searching for concentrations of white pixels (and therefore snow) to decide whether or not a picture was of wolves.
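The wolf-and-snow anecdote (a well-known cautionary tale from the machine-learning interpretability literature) is easy to reproduce in miniature. The dataset and numbers below are invented for illustration: a classifier trained on data where the label correlates with background brightness learns the background, never the animal.

```python
import random

random.seed(0)

# Synthetic dataset: each "image" is reduced to its mean background
# brightness. Wolves were photographed on snow (bright), huskies
# indoors (dark) -- a spurious correlation baked into the data.
def make_example(is_wolf):
    brightness = random.gauss(0.8 if is_wolf else 0.3, 0.05)
    return brightness, is_wolf

train = [make_example(i % 2 == 0) for i in range(200)]

# "Training": pick the brightness threshold that best separates the
# labels. No feature of the animal itself is ever consulted.
threshold = sum(b for b, _ in train) / len(train)
classify = lambda brightness: brightness > threshold

accuracy = sum(classify(b) == y for b, y in train) / len(train)
print(accuracy)  # near 1.0 -- yet it has only learned "snow"

# A wolf photographed on grass (dark background) is misclassified.
print(classify(0.3))  # False: "not a wolf"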

    • ricecake@sh.itjust.works · 13 days ago (edited)

      Basing your argument around how the model or training system works doesn’t seem like the best way to frame your point to me. It invites a lot of mucking about in the details of how the systems do or don’t work, how humans learn, and what “learning” and “knowledge” actually are.

      I’m a human as far as I know, and it’s trivial for me to regurgitate my training data. I regularly say things that are either directly references to things I’ve heard, or accidentally copy them, sometimes with errors.
      Would you argue that I’m just a statistical collage of the things I’ve experienced, seen or read? My brain has as many copies of my training data in it as the AI model, namely zero, but “Captain Picard of the USS Enterprise sat down for a rousing game of chess with his friend Sherlock Holmes, and then Shakespeare came in dressed like Mickey mouse and said ‘to be or not to be, that is the question, for tis nobler in the heart’ or something”. Direct copies of someone else’s work, as well as multiple copyright infringements.
      I’m also shit at drawing with perspective. It comes across like a drunk toddler trying their hand at cubism.

      Arguing about how the model works, or its deficiencies, to justify treating it differently just invites fixing those issues and repeating the same conversation later. What if we make one that does work the way humans do, in your opinion? Or one that properly extracts the information in a way that isn’t just statistically inferred patterns, whatever the distinction there is? Does that suddenly make it different?

      You don’t need to get bogged down in the muck of the technical to say that even if you concede every technical point, we can still say that a non-sentient machine learning system can be held to different standards with regard to copyright law than a sentient person. A person gets to buy a book, read it, and then carry around that information in their head and use it however they want. Not-A-Person does not get to read a book and hold that information without consent of the author.
      Arguing why it’s bad for society for machines to mechanise the production of works inspired by others is more to the point.

      Computers think the same way boats swim. Arguing about the difference between hands and propellers misses the point that you don’t want a shrimp boat in your swimming pool. I don’t care why they’re different, or that it technically did or didn’t violate the “free swim” policy, I care that it ruins the whole thing for the people it exists for in the first place.

      I think all the AI stuff is cool, fun, and interesting. I also think that letting it train on everything regardless of the creators’ wishes has too much opportunity to make everything garbage. Same for letting it produce content that isn’t labeled or cited.
      If they can find a way to do and use the cool stuff without making things worse, they should focus on that.

      • petrol_sniff_king@lemmy.blahaj.zone · 13 days ago (edited)

        Arguing why it’s bad for society for machines to mechanise the production of works inspired by others is more to the point.

        I agree, but the fact that shills for this technology are also wrong about it is at least interesting.

        Rhetorically speaking, I don’t know if that’s useless.

        I don’t care why they’re different, or that it technically did or didn’t violate the “free swim” policy,

        I do like this point a lot.

        If they can find a way to do and use the cool stuff without making things worse, they should focus on that.

        I do miss when the likes of cleverbot was just a fun novelty on the Internet.

    • Eatspancakes84@lemmy.world · 13 days ago

      I am also not really getting the argument. If I as a human want to learn a subject from a book, I buy it (or I go to a library that paid for it). If it’s similar to how humans learn, it should cost equally much.

      The issue is of course that it’s not at all similar to how humans learn. It needs VASTLY more data to produce something even remotely sensible. Develop AI that’s truly transformative, by making it as efficient as humans are in learning, and the cost of paying for copyright will be negligible.

      • stephen01king@lemmy.zip · 13 days ago

        If I as a human want to learn a subject from a book, I buy it (or I go to a library that paid for it). If it’s similar to how humans learn, it should cost equally much.

        You’re on Lemmy, where people casually say “piracy is morally the right thing to do”, so I’m not sure this argument works on this platform.

        • Eatspancakes84@lemmy.world · 13 days ago (edited)

          I know my way around the Jolly Roger myself. At the same time using copyrighted materials in a commercial setting (as OpenAI does) shouldn’t be free.

          • stephen01king@lemmy.zip · 13 days ago

            Only if they are selling the output. I see it more as selling access to the service on a server farm, since running ChatGPT is not cheap.

            • Hamartia@lemmy.world · 13 days ago

              The usual cycle of tech-bro capitalism would currently put them at the early acquire-market-saturation stage. So it’s unlikely that they are charging what they will once they are established and have displaced lots of necessary occupations.

    • Riccosuave@lemmy.world · 13 days ago

      Even if they learned exactly like humans do, like so fucking what, right!? Humans have to pay EXORBITANT fees for higher education in this country. Arguing that your bot gets socialized education before the people do is fucking absurd.

      • v_krishna@lemmy.ml · 13 days ago

        That seems more like an argument for free higher education rather than restricting what corpuses a deep learning model can train on

    • Dran@lemmy.world · 13 days ago

      Devil’s Advocate:

      How do we know that our brains don’t work the same way?

      Why would it matter that we learn differently than a program learns?

      Suppose someone has a photographic memory; should it be illegal for them to consume copyrighted works?

      • EldritchFeminity@lemmy.blahaj.zone · 13 days ago

        Because we’re talking pattern recognition levels of learning. At best, they’re the equivalent of parrots mimicking human speech. They take inputs and output data based on the statistical averages from their training sets - collaging pieces of their training into what they think is the right answer. And I use the word think here loosely, as this is the exact same process that the Gaussian blur tool in Photoshop uses.
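For what it’s worth, the Gaussian blur comparison can be made concrete: a blur really is nothing but a fixed, weighted statistical average of neighbouring values. A minimal 1-D sketch (illustrative only, not Photoshop’s actual implementation):

```python
import math

# 1-D Gaussian blur: each output value is a fixed weighted average
# of its neighbours -- no "understanding" of the signal involved.
def gaussian_kernel(radius, sigma):
    weights = [math.exp(-(i * i) / (2 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blur(signal, radius=1, sigma=1.0):
    k = gaussian_kernel(radius, sigma)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(k, start=-radius):
            # Clamp indices at the edges so the kernel stays in bounds.
            acc += w * signal[min(max(i + j, 0), len(signal) - 1)]
        out.append(acc)
    return out

print(blur([0, 0, 10, 0, 0]))  # the spike gets averaged into its neighbours
```

Whether an image generator is “the exact same process” is contested above, but this is what weighted statistical averaging of pixels looks like in code.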

        This matters in the context of the fact that these companies are trying to profit off of the output of these programs. If somebody with an eidetic memory tried to sell pieces of works that they’ve consumed as their own - or even somebody copy-pasting bits from CliffsNotes - they should get in trouble, the same as these companies.

        Given A and B, we can understand C. But an LLM will only be able to give you AB, A(b), and B(a). And they’ve even been caught spitting out A and B wholesale, proving that they retain their training data and will regurgitate entire copyrighted works.

  • nek0d3r@lemmy.world · 12 days ago

    Generative AI does not work like this. These models aren’t like humans at all; they regurgitate whatever input they receive, like how Google can’t stop Gemini from telling people to put glue in their pizza. If it really worked like that, there wouldn’t be these broad and extensive policies within tech companies about using it with sensitive company data, like data-protection compliance rules. The day that a health insurance company manager says, “sure, you can feed ChatGPT medical data” is the day I trust genAI.

  • lettruthout@lemmy.world · 13 days ago

    If they can base their business on stealing, then we can steal their AI services, right?

    • LibertyLizard@slrpnk.net · 13 days ago

      Pirating isn’t stealing but yes the collective works of humanity should belong to humanity, not some slimy cabal of venture capitalists.

      • General_Effort@lemmy.world · 13 days ago

        Yes, that’s exactly the point. It should belong to humanity, which means that anyone can use it to improve themselves. Or to create something nice for themselves or others. That’s exactly what AI companies are doing. And because it is not stealing, it is all still there for anyone else. Unless, of course, the copyrightists get their way.

        • ProstheticBrain@sh.itjust.works · 13 days ago (edited)

          Ingredients to a recipe may well be subject to copyright, which is why food writers make sure their recipes are “unique” in some small way - different enough to avoid accusations of direct plagiarism.

          E: removed unnecessary snark

          • oxomoxo@lemmy.world · 12 days ago

            I think there is some confusion here between copyright and patent - similar in concept, but legally distinct. A person can copyright the order and selection of words used to express a recipe, but the recipe itself cannot be copyrighted; it can, however, fall under patent law if proven to be unique enough, which is difficult to prove.

            So you can technically own the patent to a recipe, keeping other companies from selling the product of that recipe; however, anyone who can acquire the recipe may make it themselves, so long as they don’t resell the result. And that recipe can be expressed in many different ways, each expression having its own copyright.

  • TommySoda@lemmy.world · 13 days ago (edited)

    Here’s an experiment for you to try at home. Ask an AI model a question, copy a sentence or two of what they give back, and paste it into a search engine. The results may surprise you.

    And stop comparing AI to humans but then giving AI models more freedom. If I wrote a paper I’d need to cite my sources. Where the fuck are your sources ChatGPT? Oh right, we’re not allowed to see that but you can take whatever you want from us. Sounds fair.

    • azuth@sh.itjust.works · 13 days ago

      It’s not a breach of copyright or other IP law not to cite sources on your paper.

      Getting your paper rejected for lacking sources is also not infringing on your freedom. Being forced to pay damages and delete your paper from any public space would be an infringement of your freedom.

      • explore_broaden@midwest.social · 13 days ago

        I’m pretty sure citing sources isn’t really relevant to copyright violation; either you are infringing or you’re not. Saying where you copied from doesn’t change anything, and if you are using some ideas with your own analysis and words, it isn’t a violation either way.

        • Eatspancakes84@lemmy.world · 13 days ago

          With music this often ends up in civil court. Pretty sure the same can in theory happen for written texts, but the commercial value of most written texts is not worth the cost of litigation.

      • TommySoda@lemmy.world · 13 days ago

        I mean, you’re not necessarily wrong. But that doesn’t change the fact that it’s still stealing, which was my point. Just because laws haven’t caught up to it yet doesn’t make it any less of a shitty thing to do.

        • azuth@sh.itjust.works · 13 days ago

          It’s not stealing; it’s not even “piracy”, which also is not stealing.

          Copyright laws need to be scaled back so they don’t criminalize socially accepted behavior, not expanded.

        • ContrarianTrail@lemm.ee · 13 days ago (edited)

          The original source material is still there. They just made a copy of it. If you think that’s stealing then online piracy is stealing as well.

          • TommySoda@lemmy.world · 12 days ago

            Well they make a profit off of it, so yes. I have nothing against piracy, but if you’re reselling it that’s a different story.

            • ContrarianTrail@lemm.ee · 12 days ago

              But piracy saves you money which is effectively the same as making a profit. Also, it’s not just that they’re selling other people’s work for profit. You’re also paying for the insane amount of computing power it takes to train and run the AI plus salaries of the workers etc.

        • Octopus1348@lemy.lol · 13 days ago

          When I analyze a melody I play on a piano, I see that it reflects the music I heard that day or sometimes, even music I heard and liked years ago.

          Having similar parts, or a part that is (coincidentally) identical to a part of another song, is not stealing and does not infringe upon any law.

          • takeda@lemmy.world · 12 days ago

            You guys are missing a fundamental point. Copyright was created to protect an author for a specific amount of time so that somebody else doesn’t profit from their work, essentially stealing their deserved revenue.

            LLM AI was created to do exactly that.

    • PixelProf@lemmy.ca · 12 days ago

      Not to fully argue against your point, but I do want to push back on the citations bit. Given the way an LLM is trained, it’s not really equivalent to me citing the papers researched for a paper. It would be more akin to asking me to cite every piece of written or verbal media I’ve ever encountered, as they all contributed in some small way to the way the words were formulated here.

      Now, if specific data were injected into the prompt, or maybe if it was fine-tuned on a small subset of highly specific data, I would agree those should be cited, as they are being accessed more verbatim. The whole “magic” of LLMs was that they needed to cross a threshold of data, combined with the attention mechanism, before the network was rather suddenly able to maintain coherent sentence structure. It was only with loads of varied data from many different sources that this really emerged.

    • fmstrat@lemmy.nowsci.com · 12 days ago

      This is the catch with OP’s entire statement about transformation. Their premise is flawed, because the next most likely token is usually the same word the author of a work chose.
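That claim is easy to demonstrate with a toy next-token model (a bigram counter with greedy decoding; the corpus here is just an illustration). Trained on a single sentence, the “most likely next token” is always exactly the word the author wrote, so generation reproduces the source verbatim:

```python
from collections import Counter, defaultdict

# Toy next-token predictor: bigram counts plus greedy decoding.
corpus = "it was the best of times it was the worst of times"
words = corpus.split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length):
    out = [start]
    for _ in range(length - 1):
        # Always pick the single most likely next token.
        out.append(bigrams[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(generate("it", 6))  # "it was the best of times"
```

Real LLMs train on vastly more data, so the most likely continuation is usually a blend of many sources; how often it still collapses to one author’s exact wording is the empirical question under dispute.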

      • TommySoda@lemmy.world · 12 days ago

        And that’s kinda my point. I understand that transformation is totally fine, but these LLMs literally copy and paste shit. And that’s still if you are comparing AI to people, which I think is completely ridiculous. If anything, these things are just more complicated search engines with half the usefulness. If I search online for how to change a tire, I can find some reliable sources to do so. If I ask AI how to change a tire, it will just spit something out that might not even be accurate, and I’d have to search again afterwards just to make sure what it told me was even accurate.

        It’s just a word calculator based on information stolen from people without their consent. It has no original thought process so it has no way to transform anything. All it can do is copy and paste in different combinations.

  • lightnsfw@reddthat.com · 11 days ago

    If ChatGPT was free I might see their point but it’s not so no. If you’re making money from someone’s work you should pay them.

    • Drewelite@lemmynsfw.com · 11 days ago

      You’re making an indie movie on your iPhone with friends. You sell one ticket. You now owe: Apple; Joseph Nicéphore Niépce’s estate (inventor of the camera); every cinematographer who first devised the types of shots you’re using; the writers since the beginning of time who created the story elements in the script; the mathematicians and scientists who developed lens technology; the car manufacturers whose vehicles transported you to the set; the guy whose YouTube tutorial you watched to figure out lighting; etc, etc, etc.

      Your black and white framing appears to provide a clear ethical framework until you dig a millimeter into it. The reality is that society only exists because of the work that all of the individuals within it produce. Things like copyright are an adapter to our capitalistic economy, there to ensure that work which can be copied is protected enough that its creator has the opportunity to make money off of it. It exists so somebody else can’t immediately turn around and sell the same book someone else wrote, or just change a few words and do as much. This protection was meant to last 15 to 20 years, after which the work would enter the public domain for anyone to copy and rewrite as they please.

      Current copyright is an utter bastardization of its intended use. Massive corporations are trying to act like they’re fighting for the little guy to own their IP forever. But they buy up all that IP for pennies compared to how they turn around and commoditize it. Then they own all of what society produces in perpetuity. They can sit on their dragon hoards and laugh as they gobble up any new creation that strays too close. And people wonder why everything is a sequel of a sequel of a sequel owned by massive corporations.

      • lightnsfw@reddthat.com · 11 days ago

        I was trying to keep it simple.

        I would have paid them by purchasing the iphone and whatever software I used. I paid for the car that transported me. I would have paid for my education. People can also give their work away for free if they want, or be compensated by ads as in the case of Youtube or FOSS.

        Current copyright is an utter bastardization of its intended use. Massive corporations are trying to act like they’re fighting for the little guy to own their IP forever. But they buy up all that IP for pennies compared to how they turn around and commoditize it. Then they own all of what society produces in perpetuity. They can sit on their dragon hoards and laugh as they gobble up any new creation that strays too close. And people wonder why everything is a sequel of a sequel of a sequel owned by massive corporations.

        What do you think ChatGPT is trying to do? It’s already being used to churn out shitloads of garbage content. They’re not making things better.

        • Drewelite@lemmynsfw.com · 11 days ago (edited)

          By that rationalization, OpenAI is paying their Internet bill, and for a copy of Dune, so they’re free to use any content they acquired to make their product better. Your original argument wasn’t akin to, “Shouldn’t someone using an iPhone pay for one?” It was “Shouldn’t Apple get a cut of everything made with the iPhone?”

          You could make the argument that people use ChatGPT to churn out garbage content, sure, but a lot of cinephiles would accuse your proverbial indie movie of being the same and blame Apple for creating the iPhone and enabling it. If you want to make that argument, go ahead. But don’t pretend it has anything to do with people getting paid fairly for what they made.

          ChatGPT is enabling people to make more things, easier, to get paid. And people, as always, are relying on everything that was created before them as a basis for their work. Same as when I go to school and the professor shows me lots of different works to learn from. The thousands of students in that class didn’t pay for any of that stuff. The professor distilled it and presented it and I paid him to do it.

          • lightnsfw@reddthat.com · 11 days ago

            The problem is that they didn’t pay for the content they’ve acquired and they’re selling it to others. The creators are not being compensated and may not want to participate in AI development at all. If the creators agree to it, then fine, but most do not. Just look at what’s happening with art: people are scraping all of an artist’s work to create AI pictures in their style and impersonate them. That’s not okay.

  • arin@lemmy.world · 13 days ago

    Kids pay for books; OpenAI should also pay for access to the material used for training.

    • FatCat@lemmy.world (OP) · 13 days ago

      OpenAI, like other AI companies, keeps its data sources confidential. But there are book services and commercial databases that are widely understood to be commonly used in the AI industry.

      • EddoWagt@feddit.nl · 13 days ago

        OpenAI like other AI companies keep their data sources confidential.

        “We trained on absolutely everything, but we won’t tell them that because it will get us in a lot of trouble”

  • MentalEdge@sopuli.xyz · 13 days ago (edited)

    The whole point of copyright in the first place, is to encourage creative expression, so we can have human culture and shit.

    The idea of a “teensy” exception so that we can “advance” into a dark age of creative pointlessness and regurgitated slop, where humans doing the fun part has been made “unnecessary” by the unstoppable progress of “thinking” machines, would be hilarious, if it weren’t depressing as fuck.

    • wagesj45@fedia.io · 13 days ago

      The whole point of copyright in the first place, is to encourage creative expression

      …within a capitalistic framework.

      Humans are creative creatures and will express themselves regardless of economic incentives. We don’t have to transmute ideas into capital just because they have “value”.

      • wizardbeard@lemmy.dbzer0.com · 13 days ago

        Sorry buddy, but that capitalistic framework is where we all have to exist for the foreseeable future.

        Giving corporations more power is not going to help us end that.

      • kibiz0r@midwest.social · 13 days ago

        That’s the reason we got copyright, but I don’t think that’s the only reason we could want copyright.

        Two good reasons to want copyright:

        1. Accurate attribution
        2. Faithful reproduction

        Accurate attribution:

        Open source thrives on the notion that: if there’s a new problem to be solved, and it requires a new way of thinking to solve it, someone will start a project whose goal is not just to build new tools to solve the problem but also to attract other people who want to think about the problem together.

        If anyone can take the codebase and pretend to be the original author, that will splinter the conversation and degrade the ability of everyone to find each other and collaborate.

        In the past, this was pretty much impossible because you could check a search engine or social media to find the truth. But with enshittification and bots at every turn, that looks less and less guaranteed.

        Faithful reproduction:

        If I write a book and make some controversial claims, yet it still provokes a lot of interest, people might be inclined to publish slightly different versions to advance their own opinions.

        Maybe a version where I seem to be making an abhorrent argument, in an effort to mitigate my influence. Maybe a version where I make an argument that the rogue publisher finds more palatable, to use my popularity to boost their own arguments.

        This actually happened during the early days of publishing, by the way! It’s part of the reason we got copyright in the first place.

        And again, it seems like this would be impossible to get away with now, buuut… I’m not so sure anymore.

        Personally:

        I favor piracy in the sense that I think everyone has a right to witness culture even if they can’t afford the price of admission.

        And I favor remixing, because the cultural conversation should be an active, read-write, two-way street, not just passive consumption.

        But I also favor some form of licensing, because I think we have a duty to respect the integrity of the work and the voice of the creator.

        I think AI training is very different from piracy. I’ve never downloaded a mega pack of songs and said to my friends “Listen to what I made!” I think anyone who compares OpenAI to pirates (favorably) is unwittingly helping the next set of feudal tech lords build a wall around the entirety of human creativity, and they won’t realize their mistake until the real toll booths open up.

        • EatATaco@lemm.ee · 13 days ago

          I think AI training is very different from piracy. I’ve never downloaded a mega pack of songs and said to my friends “Listen to what I made!”

          I’ve never done this. But I have taken lessons from people for instruments, listened to bands I like, and then created and played songs that certainly are influenced by all of that. I’ve also taken a lot of art classes, studied other people’s painting styles, and then created things from what I’ve learned, and said “look at what I made!” - which is far more akin to what AI is doing than what you are implying here.

            • EatATaco@lemm.ee
              link
              fedilink
              English
              arrow-up
              0
              ·
              13 days ago

              Because what they are describing is just straight up theft, while what I described is so much closer to how one trains an AI. I’m afraid that what comes out of this AI hysteria is that copyright gets more strict and even humans copying a style becomes illegal.

              • Rekorse@sh.itjust.works
                link
                fedilink
                English
                arrow-up
                1
                ·
                13 days ago

                Well that all doesn’t matter much. If AI is used to cause harm, it should be regulated. If that frustrates you then go get the laws changed that allow shitty companies to ruin good ideas.

              • kibiz0r@midwest.social
                link
                fedilink
                English
                arrow-up
                1
                ·
                13 days ago

                I’m sympathetic to the reflexive impulse to defend OpenAI out of a fear that this whole thing results in even worse copyright law.

                I, too, think copyright law is already smothering the cultural conversation and we’re potentially only a couple of legislative acts away from having “property of Disney” emblazoned on our eyeballs.

                But don’t fall into their trap of seeing everything through the lens of copyright!

                We have other laws!

                We can attack OpenAI on antitrust, likeness rights, libel, privacy, and labor laws.

                Being critical of OpenAI doesn’t have to mean siding with the big IP bosses. Don’t accept that framing.

      • ZILtoid1991@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        13 days ago

        I’d agree, but here’s one issue with that: we live in reality, not in a post-capitalist dreamworld.

        Creativity takes up a lot of time from the individual, while a lot of us are already working two or even three jobs, all on top of art. A lot of us have to heavily compromise on a lot of things, or even give up our dreams because we don’t have the time for that. Sure, you get the occasional “legendary metal guitarist practiced so much he even went to the toilet with a guitar”, but many are so tired from their main job, they instead just give up.

        Developing a game while holding a full-time job feels like crunching 24/7, even though only around 4 hours a day actually go towards that goal, including work done on my smartphone at my job. Others just outright give up. This shouldn’t be the norm for up-and-coming artists.

        • ClamDrinker@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          12 days ago

          Honestly, that’s why open source AI is such a good thing for small creatives. Hate it or love it, anyone wielding AI with the intention to make new expression will have a much safer and more efficient path to success, at least until they can grow big enough to hire a team of specialists. People often look at those at the top but ignore the things that can grow from the bottom and actually create more creative expression.

          • ZILtoid1991@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            12 days ago

            One issue is that many open source AI projects also try to ape whatever the big ones are doing at the moment, the most outrageous example being one that generates a timelapse for AI art.

            There are also tools that were created especially with artists in mind, but they’re less popular because the average person can’t use them as easily as the prompter machines, and they don’t promise the end of “people with fake jobs” (boomers like generative AI for this reason).

      • Captain Aggravated@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        0
        ·
        13 days ago

        Humans are indeed creative by nature, we like making things. What we don’t naturally do is publish, broadcast and preserve our work.

        Society is iterative. What we build today, we build mostly out of what those who came before us built. We tell our versions of our forefathers’ stories, we build new and improved versions of our forefather’s machines.

        A purely capitalistic society would have infinite copyright and patent durations: this idea is mine, it belongs to me, no one can ever have it, my family and only my family will profit from it forever. Nothing ever improves, because improving on an old idea devalues the old idea, and the landed gentry can’t allow that.

        A purely communist society immediately enters whatever anyone creates into the public domain. The guy who revolutionizes energy production making everyone’s lives better is paid the same as a janitor. So why go through all the effort? Just sweep the floors.

        At least as designed, our idea of copyright is a compromise. If you have an idea, we will grant you a limited time to exclusively profit from your idea. You may allow others to also profit at your discretion; you can grant licenses, but that’s up to you. After the time is up, your idea enters the public domain, and becomes the property and heritage of humanity, just like the Epic of Gilgamesh. Others are free to reproduce and iterate upon your ideas.

        • 31337@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          1
          ·
          13 days ago

          I think you have your janitor example backwards. Spending my time revolutionizing energy production sounds much more enjoyable than sweeping floors. Same with designing an effective floor-sweeping robot.

  • LANIK2000@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    13 days ago

    This process is akin to how humans learn…

    I’m so fucking sick of people saying that. We have no fucking clue how humans LEARN. Aka gather understanding, aka how cognition works or what it truly is. On the contrary, we can deduce that it probably isn’t very close to human memory/learning/cognition/sentience (any other buzzwords that are stand-ins for things we don’t understand yet), considering human memory is extremely lossy and tends to introduce its own bias, as opposed to LLMs, which do neither and religiously follow patterns to their own fault.

    It’s quite literally a text prediction machine that started its life as a translator (and still does amazingly at that task), it just happens to turn out that general human language is a very powerful tool all on its own.
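    To be concrete about what “text prediction machine” means, here is a deliberately tiny, purely illustrative sketch: a bigram frequency model that predicts the next word from observed counts alone. Real LLMs are vastly more sophisticated, but the training objective is the same kind of thing, predicting the next token:

    ```python
    from collections import Counter, defaultdict

    # Purely illustrative toy, NOT how an LLM works internally: a bigram
    # model that predicts the next word from observed frequencies alone.
    corpus = "the cat sat on the mat the cat ran".split()

    follow = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follow[prev][nxt] += 1

    def predict(word):
        # Most frequent word seen after `word` in the training text.
        return follow[word].most_common(1)[0][0]

    print(predict("the"))  # → "cat" ("cat" followed "the" twice, "mat" once)
    ```

    The model here has no understanding, just pattern frequencies; scale that idea up enormously and you get surprisingly capable text generation.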

    I could go on and on as I usually do on Lemmy about AI, but your argument is literally “a neural network is theoretically like the nervous system, therefore human,” so I have no faith in getting through to you people.

    • ZILtoid1991@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      13 days ago

      Even worse, in order to further humanize machine learning systems, they often give them human-like names.

  • auzy@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    12 days ago

    As others have said, it isn’t always inspiration; sometimes it literally just copies stuff.

    This feels like it was written by someone who invested their money in AI companies because they’re worried about their stocks

  • gcheliotis@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    edit-2
    13 days ago

    Though I am not a lawyer by training, I have been involved in such debates personally and professionally for many years. This post is unfortunately misguided. Copyright law makes concessions for education and creativity, including criticism and satire, because we recognize the value of such activities for human development. Debates over the excesses of copyright in the digital age were specifically about humans finding the application of copyright to the internet and all things digital too restrictive for their educational, creative, and yes, also their entertainment needs. So any anti-copyright arguments back then were in the spirit specifically of protecting the average person and public-interest non-profit institutions, such as digital archives and libraries, from big copyright owners who would sue and lobby for total control over every file in their catalogue, sometimes in the process severely limiting human potential.

    AI’s ingesting of text and other formats is “learning” in name only, a term borrowed by computer scientists to describe a purely computational process. It does not hold the same value socially or morally as the learning that humans require to function and progress individually and collectively.

    AI is not a person (unless we get definitive proof of a conscious AI, or are willing to grant every implementation of a statistical model personhood). Also, AI is not vital to human development, and as such one could argue it does not need special protections or special treatment to flourish. AI is a product, even more clearly so when it is proprietary and sold as a service.

    Unlike past debates over copyright, this is not about protecting the little guy or organizations with a social mission from big corporate interests. It is the opposite. It is about big corporate interests turning human knowledge and creativity into a product they can then use to sell services to - and often to replace in their jobs - the very humans whose content they have ingested.

    See, the tables are now turned and it is time to realize that copyright law, for all its faults, has never been only or primarily about protecting large copyright holders. It is also about protecting your average Joe from unauthorized uses of their work. More specifically uses that may cause damage, to the copyright owner or society at large. While a very imperfect mechanism, it is there for a reason, and its application need not be the end of AI. There’s a mechanism for individual copyright owners to grant rights to specific uses: it’s called licensing and should be mandatory in my view for the development of proprietary LLMs at least.

    TL;DR: AI is not human, it is a product, one that may augment some tasks productively, but is also often aimed at replacing humans in their jobs - this makes all the difference in how we should balance rights and protections in law.

  • helenslunch@feddit.nl
    link
    fedilink
    English
    arrow-up
    1
    ·
    13 days ago

    Those claiming AI training on copyrighted works is “theft” misunderstand key aspects of copyright law and AI technology.

    Or maybe they’re not talking about copyright law. They’re talking about basic concepts. Maybe copyright law needs to be brought into the 21st century?

  • makyo@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    13 days ago

    I thought the larger point was that they’re using plenty of sources that do not lie in the public domain. If I downloaded a textbook to read for a class instead of buying it, I could be prosecuted for stealing. And they’ve downloaded and read millions of books without paying for them.

  • mriormro@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    13 days ago

    You know, those obsessed with pushing AI would do a lot better if they dropped the patronizing tone in every single one of their comments defending them.

    It’s always fun reading “but you just don’t understand”.

    • FatCrab@lemmy.one
      link
      fedilink
      English
      arrow-up
      0
      arrow-down
      1
      ·
      13 days ago

      On the other hand, it’s hard to have a serious discussion with people who insist that building an LLM or diffusion model amounts to copying pieces of material into an obfuscated database. And then having to deal with the typical reply after an explanation is attempted, “that isn’t the point!”, but without any elaboration, strongly implies to me that some people just want to be pissy and don’t want to hear how they may have been manipulated into taking a pro-corporate, hyper-capitalist position on something.

      • mriormro@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        12 days ago

        I love that the collectivist ideal of sharing all that we’ve created for the betterment of humanity is being twisted into this disgusting display of corporate greed and overreach. OpenAI doesn’t need shit. They don’t have an inherent right to exist but must constantly make the case for their existence.

        The bottom line is that if corporations need data that they themselves cannot create in order to build and sell a service then they must pay for it. One way or another.

        I see this all as parallels with how aquifers and water rights have been handled and I’d argue we’ve fucked that up as well.

        • FatCrab@lemmy.one
          link
          fedilink
          English
          arrow-up
          0
          arrow-down
          1
          ·
          12 days ago

          Training data IS a massive industry already. You don’t see it because you probably don’t work in a field directly dealing with it. I work in medtech, and millions and millions of dollars are spent acquiring training data every year. Should some new, unique IP right be found in using otherwise legally rendered data to train AI, it is almost certainly going to be contracted away to hosting platforms via totally sound ToS and then further monetized, such that only large and well-funded corporate entities can utilize it.

          • Eccitaze@yiffit.net
            link
            fedilink
            English
            arrow-up
            1
            ·
            12 days ago

            unique

            “unique new IP right?” Bruh you’re talking about basic fucking intellectual property law. Just because someone posts something publicly on the internet doesn’t mean that it can be used for whatever anybody likes. This is so well-established, that every major art gallery and social media website has a clause in their terms of service stating that you are granting them a license to redistribute that content. And most websites also explicitly state that when you upload your work to their site that you still retain your copyright of that work.

            For example (emphasis mine):

            FurAffinity:

            4.1 When you upload content to Fur Affinity via our services, you grant us a non-exclusive, worldwide, royalty-free, sublicensable, transferable right and license to use, host, store, cache, reproduce, publish, display (publicly or otherwise), perform (publicly or otherwise), distribute, transmit, modify, adapt, and create derivative works of, that content. These permissions are purely for the limited purposes of allowing us to provide our services in accordance with their functionality (hosting and display), improve them, and develop new services. These permissions do not transfer the rights of your content or allow us to create any deviations of that content outside the aforementioned purposes.

            Inkbunny:

            Posting Content

            You keep copyright of any content posted to Inkbunny. For us to provide these services to you, you grant Inkbunny non-exclusive, royalty-free license to use and archive your artwork in accordance with this agreement.

            When you submit artwork or other content to Inkbunny, you represent and warrant that:

            * you own copyright to the content, or that you have permission to use the content, and that you have the right to display, reproduce and sell the content. You license Inkbunny to use the content in accordance with this agreement;

            DeviantArt:

            1. Copyright in Your Content

            DeviantArt does not claim ownership rights in Your Content. For the sole purpose of enabling us to make your Content available through the Service, you grant DeviantArt a non-exclusive, royalty-free license to reproduce, distribute, re-format, store, prepare derivative works based on, and publicly display and perform Your Content. Please note that when you upload Content, third parties will be able to copy, distribute and display your Content using readily available tools on their computers for this purpose although other than by linking to your Content on DeviantArt any use by a third party of your Content could violate paragraph 4 of these Terms and Conditions unless the third party receives permission from you by license.

            e621:

            When you upload content to e621 via our services, you grant us a non-exclusive, worldwide, royalty-free, sublicensable, transferable right and license to use, host, store, cache, reproduce, publish, display (publicly or otherwise), perform (publicly or otherwise), distribute, transmit, downsample, convert, adapt, and create derivative works of, that content. These permissions are purely for the limited purposes of allowing us to provide our services in accordance with their functionality (hosting and display), improve them, and develop new services. These permissions do not transfer the rights of your content or allow us to create any deviations of that content outside the aforementioned purposes.

            Xitter:

            Your Rights and Grant of Rights in the Content

            You retain your rights to any Content you submit, post or display on or through the Services. What’s yours is yours — you own your Content (and your incorporated audio, photos and videos are considered part of the Content).

            By submitting, posting or displaying Content on or through the Services, you grant us a worldwide, non-exclusive, royalty-free license (with the right to sublicense) to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute such Content in any and all media or distribution methods now known or later developed (for clarity, these rights include, for example, curating, transforming, and translating). This license authorizes us to make your Content available to the rest of the world and to let others do the same.

            Facebook:

            The permissions you give us We need certain permissions from you to provide our services:

            • Permission to use content you create and share: Some content that you share or upload, such as photos or videos, may be protected by intellectual property laws.

            • You retain ownership of the intellectual property rights (things like copyright or trademarks) in any such content that you create and share on Facebook and other Meta Company Products you use. Nothing in these Terms takes away the rights you have to your own content. You are free to share your content with anyone else, wherever you want.

            • However, to provide our services we need you to give us some legal permissions (known as a “license”) to use this content. This is solely for the purposes of providing and improving our Products and services as described in Section 1 above.

            • Specifically, when you share, post, or upload content that is covered by intellectual property rights on or in connection with our Products, you grant us a non-exclusive, transferable, sub-licensable, royalty-free, and worldwide license to host, use, distribute, modify, run, copy, publicly perform or display, translate, and create derivative works of your content (consistent with your privacy and application settings). This means, for example, that if you share a photo on Facebook, you give us permission to store, copy, and share it with others (again, consistent with your settings) such as Meta Products or service providers that support those products and services. This license will end when your content is deleted from our systems.

            I could go on, but I think I’ve made my point very clear: Every social media website and art gallery is built on an assumption that the person uploading art A) retains the copyright over the items they upload, B) that other people and organizations have NO rights to copyrighted works unless explicitly stated otherwise, and C) that 3rd parties accessing this material do not have any rights to uploaded works, since they never negotiated a license to use these works.

            • FatCrab@lemmy.one
              link
              fedilink
              English
              arrow-up
              0
              arrow-down
              1
              ·
              12 days ago

              You are misunderstanding what I’m getting at, and unfortunately, no, this isn’t just straightforward copyright law whatsoever. The training content does not need to be copied. It isn’t saved in a database somewhere (as part of the training; downloading pirated texts is a whole other issue, completely removed from the inherent process of training a model); relationships are extracted from the material, however it is presented. And the copyright extends to the right of displaying the material in the first place. If your initial display/access to the training content is non-infringing, the mere extraction of relationships between components is not itself making a copy, nor is it making a derivative work in any way we have historically considered it. Effectively, it’s the difference between looking at material and making intensive notes on how its different parts relate to each other, and looking at material and reproducing as much of it as possible for your own records.
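              To make the “intensive notes” analogy concrete, here is a purely illustrative toy: reduce a text to word co-occurrence vectors, then compare words by those vectors alone. Real models learn far richer relationships than this, but the point carries over: what is retained is statistics about how parts relate, not the text itself.

              ```python
              from collections import Counter
              import math

              # Toy sketch of "extracting relationships": keep only counts of
              # which words appear near which others, then discard the text.
              text = ("the court ruled the scanning was fair use "
                      "the judge ruled the copying was not fair").split()

              window = 2
              vecs = {w: Counter() for w in set(text)}
              for i, w in enumerate(text):
                  for j in range(max(0, i - window), min(len(text), i + window + 1)):
                      if j != i:
                          vecs[w][text[j]] += 1

              def cosine(a, b):
                  # Compare two words purely by their co-occurrence statistics.
                  num = sum(vecs[a][k] * vecs[b][k] for k in vecs[a])
                  den = math.sqrt(sum(v * v for v in vecs[a].values())) * \
                        math.sqrt(sum(v * v for v in vecs[b].values()))
                  return num / den

              del text  # the original wording is gone; only the "notes" remain

              # "ruled" and "was" occur in similar contexts, so they score closer
              # together than "ruled" and "scanning" do.
              print(cosine("ruled", "was") > cosine("ruled", "scanning"))  # → True
              ```

              Whether that process should carry legal weight is the open question, but mechanically it is note-taking about relationships, not archiving.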

              • Eccitaze@yiffit.net
                link
                fedilink
                English
                arrow-up
                1
                ·
                12 days ago

                FFS, the issue is not that the AI model “copies” the copyrighted works when it trains on them–I agree that after an AI model is trained, it does not meaningfully retain the copyrighted work. The problem is that the reproduction of the copyrighted work–i.e. downloading the work to the computer, and then using that reproduction as part of AI model training–is being done for a commercial purpose that infringes copyright.

                If I went to DeviantArt and downloaded a random piece of art to my hard drive for my own personal enjoyment, that is a non-infringing reproduction. If I then took that same piece of art, and uploaded it to a service that prints it on a T-shirt, the act of uploading it to the T-shirt printing service’s server would be infringing, since it is no longer being reproduced for personal enjoyment, but the unlawful reproduction of copyrighted material for commercial purpose. Similarly, if I downloaded a piece of art and used it to print my own T-shirts for sale, using all my own computers and equipment, that would also be infringing. This is straightforward, non-controversial copyright law.

                The exact same logic applies to AI training. You can try to camouflage the infringement with flowery language like “mere extraction of relationships between components,” but the purpose and intent behind AI companies reproducing copyrighted works via web scraping and downloading copyrighted data to their servers is to build and provide a commercial, for-profit service that is designed to replace the people whose work is being infringed. Full stop.

                • FatCrab@lemmy.one
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  arrow-down
                  1
                  ·
                  edit-2
                  12 days ago

                  No, this is mostly incorrect, sorry. The commercial aspect of the reproduction is not relevant to whether it is an infringement–it is simply a factor in damages and Fair Use defense (an affirmative defense that presupposes infringement).

                  What you are getting at when it applies to this particular type of AI is effectively whether it would be a fair use, presupposing there is copying amounting to copyright infringement. And what I am saying is that, ignoring certain stupid behavior like torrenting a shit ton of text to keep a local store of training data, there is no copying happening as a matter of necessity. There may be copying as a matter of stupidity, but it isn’t necessary to the way the technology works.

                  Now, I know, you’re raging and swearing right now because you think that downloading the data into cache constitutes an unlawful copying, but it presumably does not if it is accessed like any other content on the internet. Intent is not part of what makes that a lawful or unlawful copying, and once a lawful distribution is made, principles of exhaustion begin to kick in and we start getting into really nuanced areas of IP law that I don’t feel like delving into with my thumbs, but ultimately the point is that it isn’t “basic copyright law.” And if intent were determinative of whether there is copying in the first place, how does that jibe with an actor not making copies for themselves but rather accessing retained data in a third party’s cache, after that party grabbed the data for noncommercial purposes? Also, how does that make sense if the model is being trained for purely research purposes? And what if that model is then leveraged commercially after development? Your analysis, assuming it’s correct arguendo, leaves far too many outstanding substantive issues to be the ruling approach.

                  EDIT: also, if you download images from DeviantArt with the purpose of using them to make shirts or for some other commercial endeavor, that has no bearing on whether the download was infringing. Presumably, you downloaded via the tools provided by DA. The infringement happens when you reproduce the images for the commercial purpose (though any redistribution is actually infringing).

        • FatCrab@lemmy.one
          link
          fedilink
          English
          arrow-up
          0
          arrow-down
          1
          ·
          12 days ago

          I have no personal interest in the matter, tbh. But I want people to actually understand what they’re advocating for and what the downstream effects would inevitably be. Model training is not inherently infringing activity under current IP law. It just isn’t. Neither the law, legislative or judicial, nor the actual engineering and operations of these current models support at all a finding of infringement. Effectively, this means that new legislation needs to be made to handle the issue. Most are effectively advocating for an entirely new IP right in the form of a “right to learn from” which further assetizes ideas and intangibles such that we get further shuffled into endstage capitalism, which most advocates are also presumably against.

          • yamanii@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            12 days ago

            I’m pretty sure most people are just mad that this is basically “rules for thee but not for me”: why should a company be free to pirate when I can’t? Case in point: the Internet Archive losing their case against a publisher. That’s the crux of the issue.

            • FatCrab@lemmy.one
              link
              fedilink
              English
              arrow-up
              0
              arrow-down
              1
              ·
              12 days ago

              I get that that’s how it feels given how it’s being reported, but the reality is that due to the way this sort of ML works, what internet archive does and what an arbitrary GPT does are completely different, with the former being an explicit and straightforward copy relying on Fair Use defense and the latter being the industrialized version of intensive note taking into a notebook full of such notes while reading a book. That the outputs of such models are totally devoid of IP protections actually makes a pretty big difference imo in their usefulness to the entities we’re most concerned about, but that certainly doesn’t address the economic dilemma of putting an entire sector of labor at risk in narrow areas.

    • Womble@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      arrow-down
      1
      ·
      13 days ago

      Yep, it’s definitely not possible that nice small businesses like Universal and Sony would sue without an actual case in order to try and crush competitors with costs.

    • soul@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      arrow-down
      1
      ·
      13 days ago

      In the same way that a person can learn the material and also use that knowledge to potentially plagiarize it, though. It’s no different in that sense. What is different is the speed of learning and both the speed and capacity of recall. However, it doesn’t change the fundamental truths of OP’s explanation.

      Also, when you’re talking specifically about music, you’re talking about a very limited subset of note combinations that will sound pleasing to human ears. Additionally, even human composers commonly struggle to not simply accidentally reproduce others’ work, which is partly why the music industry is filled with constant copyright litigation.