“Suno’s training data includes essentially all music files of reasonable quality that are accessible on the open internet.”
“Rather than trying to argue that Suno was not trained on copyrighted songs, the company is instead making a Fair Use argument to say that the law should allow for AI training on copyrighted works without permission or compensation.”
Archived (also bypass paywall): https://archive.ph/ivTGs
There’s nothing stopping you from going to YouTube, listening to a bunch of hit country songs there, and using that inspiration to write a “hit country song about getting your balls caught in a screen door”. That music was free to access, and creating your own works inspired by it is entirely legal under copyright law.
So if that’s what the AI is doing, then it would be fully legal if it were a person. The question courts are trying to figure out is whether AI should be treated like a person when it comes to “learning” from existing works and creating new ones.
I think there are good arguments on both sides of that issue. The big advantage of ruling against AI having those rights is that record labels and other rights holders can get compensation when their content is used. The main disadvantage is that high cost barriers to training material will kill off open-source and small-company AI, guaranteeing that generative AI ends up fully controlled by tech giants like Google, Microsoft, and Adobe.
I think the best legal outcome is one that attempts to protect both: companies and individuals below a certain revenue threshold (or other scale metrics) can freely train on the open web, but are required to track what was used for training. As they grow, there will be different tiers where they’re required to start paying for the content their model was trained on. Obviously this solution needs a lot of work before being a viable option, but I think something similar to this is the best way to both have competition in the AI space and make sure people get compensated.
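To make the tiered idea a bit more concrete, here is a minimal sketch of how the lookup could work. Every number, rate, and file name below is a hypothetical placeholder invented purely for illustration, not a figure from this comment or from any law or proposal:

```python
# Hypothetical sketch of a tiered "train freely, then pay as you grow" scheme.
# All thresholds and rates are made-up placeholders for illustration only.

TIERS = [
    # (annual revenue ceiling in USD, royalty rate owed on training-data usage)
    (1_000_000, 0.00),     # small/open-source tier: train freely, but keep records
    (50_000_000, 0.02),    # mid-size tier: start paying a small share
    (float("inf"), 0.10),  # largest tier: pay the full negotiated rate
]

def royalty_rate(annual_revenue: float) -> float:
    """Return the (hypothetical) royalty rate for a company of a given size."""
    for ceiling, rate in TIERS:
        if annual_revenue <= ceiling:
            return rate
    raise ValueError("unreachable: the last tier has no ceiling")

# Every tier, including the free one, would still have to log what it trained on.
training_log = ["song_a.mp3", "song_b.mp3"]  # placeholder identifiers

print(royalty_rate(250_000))     # 0.0  -> free tier, record-keeping only
print(royalty_rate(10_000_000))  # 0.02 -> pays a small share
```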
I’m reminded of the Blue Man Group’s Complex Rock Tour. One of the major themes of the show is the contradiction in terms that is the “music industry”: we tend to think of music as an artistic, ethereal thing that requires talent and inspiration… and yet we churn out pop music the same way we churn out cars and smartphones.
Inspiration ≠ mathematically derived similarity.
These aren’t artists giving their own rendering; these are venture capitalists using shiny tools to steal other people’s hard work.
4 chords.
“4 chords” is a cool mashup but it’s not really a valid point in this conversation.
The songs in “4 chords” don’t literally use the same four chords, because they’re in different keys. You might say they use the same progression, but that’s not quite true either, because the chords aren’t always in the same order. So the best you can say is “it’s possible to perform pitch- and tempo-adjusted excerpts of these songs back-to-back”, which isn’t a very strong claim.
In fact there are a lot of things separating the songs in “4 chords”: structure, arrangement, rhythm, lyrics, production. It’s also perfectly possible to use these four chords in a way you’ve never heard before and would probably find bizarre – it’s a bit of a meme, but limitation really can breed creativity.
This isn’t to defend the lack of creativity in the big music industry. But there’s more to it than just saying “4 chords” to imply that all musicians do is follow an established grid.
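If you want the “same progression, different chords” point spelled out, here is a tiny sketch. The note names and the I–V–vi–IV pattern are standard music theory; nothing in it comes from the mashup itself:

```python
# Sketch: the "4 chords" songs share a progression (I-V-vi-IV), not the literal
# same four chords, because they sit in different keys.

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
# Scale degrees of I, V, vi, IV as semitone offsets above the tonic of a major key.
PROGRESSION = [(0, ""), (7, ""), (9, "m"), (5, "")]

def four_chords(key: str) -> list[str]:
    """Spell the I-V-vi-IV progression in a given major key."""
    root = NOTES.index(key)
    return [NOTES[(root + offset) % 12] + quality for offset, quality in PROGRESSION]

print(four_chords("C"))  # ['C', 'G', 'Am', 'F']
print(four_chords("E"))  # ['E', 'B', 'C#m', 'A']  -> same progression, different chords
```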
Actually, that music was funded by royalties and ad viewership. No one will pay for an AI to be exposed to an ad, or pay royalties for an AI to hear a song, or play a song for an AI on the chance that the AI buys merchandise or a concert ticket.
Taking other people’s creative works to create your own for-profit product is illegal in every way except when AI does it. AI is not a person watching videos; AI is a product using others’ content as its bricks and mortar. Thousands of hours of your work being used by someone else to turn a profit, maybe even in a way you vehemently disagree with, without giving you a dime, is unethical and needs regulation from that perspective.
wrong.
You’re literally on a piracy server. You know about the laws and how hard the corpos crack down on us. Why the fuck are you licking the boots now?
you’re wrong on the facts; this has nothing to do with supporting corporations.
Sure, Jan
i’m principled, so i know that copying content is good and stopping people from copying is bad.
No, actually it’s completely legal to consume content that was uploaded to the internet and then use it as inspiration to create your own works.
Algorithms don’t have “inspiration”.
What is “inspiration” in your opinion and how would that differ from machine learning algorithms?
Taking other people’s creative works to create your own productive work is allowed if you are making fair use of them. There’s a very good argument that a use such as training a model on a work would be a fair use under the current test: it’s a transformative use, it replicates practically no actual part of the original piece in the finished work, and it (arguably) does not serve as a replacement for that specific piece in the market.
Fair use is the cornerstone of remix art, of fan art, of huge swathes of musical genres. What we are witnessing is the birth of a new technique based on remixing and unfortunately this time around people are convinced that fighting on the side of big copyright is somehow the good thing for artists.
exactly how human culture progresses, and trying to stop it
That’s covered by Section 107 of US copyright law, and is actually fine and protected as fair use in most cases, as long as the work isn’t a direct copy and instead changes the result into something different.
All parody-type music is protected this way, whether it’s new lyrics to a song or something less “creative” like performing the lyrics of song A in the melody and style of song B.
Yeah, I think only big media corporations would profit from such a copyright rule. The average Joe’s creations will be scraped anyway, because he has no funding to prove it and sue those big AI corporations.
It should be fully legal because it’s still a person doing it. Like Cory Doctorow said in this article:
That’s all these models are: an analysis of how the pieces of training data relate to each other, not the data itself. I feel like this is where most people get tripped up. Understanding how these things work makes it all obvious.
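A toy sketch of that “analysis of the data, not the data itself” idea (deliberately tiny, and not a claim about how Suno or any real model is actually built):

```python
# Toy illustration: "training" keeps derived statistics about the data,
# not the data itself. Real models learn millions of weights instead of
# counting word pairs, but the principle being illustrated is the same.

from collections import Counter

training_text = "the cat sat on the mat the cat ran"  # stand-in for a corpus

# "Training": count how often each word follows each other word.
words = training_text.split()
model = Counter(zip(words, words[1:]))

print(model)
# Counter({('the', 'cat'): 2, ('cat', 'sat'): 1, ('sat', 'on'): 1, ...})
# The model holds relationships between pieces of the data (which word tends to
# follow which), not a verbatim copy of the text, and for a real corpus it is
# vastly smaller than the data it was fit on.
```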
I think AI should be allowed to use any available data, but the resulting model has to be made freely available in return, e.g. by making it downloadable on Hugging Face.
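For what “downloadable” looks like in practice, here is a minimal sketch using the huggingface_hub client; “gpt2” is just a well-known openly hosted repo picked as an example:

```python
# Minimal sketch: pulling openly published model weights from the Hugging Face Hub.
# Requires `pip install huggingface_hub`; "gpt2" is only an example of an open repo.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="gpt2")  # downloads config, tokenizer, weights
print(f"Model files are in: {local_dir}")
```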
I think the solution is just that anything AI generated should be public domain.
That’s the current status quo.
I don’t know why you’re being downvoted. You’re absolutely correct (at least, in the US). And it seems to be based on pretty solid reasoning, so I could see a lot of other copyright offices following the same idea.
Source: https://www.copyright.gov/ai/ai_policy_guidance.pdf (See header II. The Human Authorship Requirement)
TL;DR
the Office states that “to qualify as a work of ‘authorship’ a work must be created by a human being” and that it “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”
Yes. Uncopyrightable = public domain. Copyright is not the default
If you use a tool, let’s say Photoshop, to make an image, should it be in the public domain?
Even if the user’s effort here is just the prompt, it’s still a tool used by a user.
Well, the AI doesn’t do all the work: you use public-domain material (the AI output) to create your own copyright-protected product/art/thing, etc.
All you have to do is put some human work into the creation. I guess the value of the result still correlates with the amount of human work one puts into a project.
If you roll a set of dice, do you own the number?
I don’t think it is a tool in the same sense that image editing software is.
But if, for example, you use an LLM to write an outline for something and you heavily edit it, then that’s transformative, and it’s owned by you.
The raw output isn’t yours, even though the prompt and final edited version are.
If you snap a photo of something, you own the photo (at least in the US).
There’s a solid argument that someone doing complex AI image generation has done way more to create the final product than someone snapping a quick pic with their phone.
One could also say that building a camera from first principles is a lot more work than entering a prompt in DALL-E, but using false equivalences isn’t going to get us very far.
I think a fairer comparison in that case would be the difficulty of building a camera vs the difficulty of building and programming an AI capable computer.
That doesn’t really make sense either way though; no one is building their camera/computer from raw materials and then arguing that gives them stronger intellectual property rights.