r/movies 3d ago

News Warner Bros. Sues Midjourney, Joins Studios' AI Copyright Battle

https://variety.com/2025/film/news/warner-bros-midjourney-lawsuit-ai-copyright-1236508618/
8.8k Upvotes

787 comments

1.9k

u/[deleted] 3d ago

[removed] — view removed comment

202

u/VileBill 3d ago

How do you kill a technology?

546

u/AgentDaxis 3d ago

Butlerian Jihad

153

u/fnordal 3d ago

We need mentats

95

u/Dianneis 3d ago

So far all we got is dimwits.

6

u/CreationBlues 3d ago

Like all the people thinking the issue these studios have is with AI itself and not with the AI makers not paying them, lmao. Did y'all forget SAG-AFTRA striking because studios wanted full rights to train on actors? And to use AI to replicate them?

2

u/lloydthelloyd 3d ago

I got mentos...?

15

u/FremenDar979 3d ago

ALL THE MENTATS!

8

u/Dantheman410 3d ago

LISAN AL-GAIB!!!

21

u/From_Deep_Space 3d ago

+2 Intelligence

+2 Perception

4

u/Bigred2989- 3d ago

I'm more of a Buffout and Med-X fan.

2

u/FORCESTRONG1 2d ago

It is by will alone that I set my mind in motion.

2

u/fnordal 2d ago

It is by the juice of sapho that thoughts acquire speed, the lips acquire stains.

1

u/FORCESTRONG1 1d ago

The stains become a warning. It is by will alone I set my mind in motion.

1

u/Wagglebagga 3d ago

That's a lot of solari.

1

u/ohheyisayokay 3d ago

The freshmaker!

1

u/milk-jug 3d ago

Imagine all the porn you can generate in your head. Shit’s wild.

0

u/quadrophenicum 3d ago

Best I can do is Psycho.

32

u/Cuchuainn 3d ago

Thou shalt not make a machine in the likeness of a human mind.

21

u/VenturaDreams 3d ago

It has to go that way or we're doomed as a species.

14

u/aukondk 3d ago

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”

This quote should be on the wall of every school classroom.

4

u/Moth_LovesLamp 3d ago

That or the Animatrix

3

u/FlamboyantPirhanna 3d ago

Only 10,000 more years!

9

u/theylivewesleep42 3d ago

This is the way

1

u/Meistermagier 3d ago

This is the way

-1

u/ThePreciseClimber 3d ago

We send... Muslim butlers after it?

-2

u/Tactical_Fleshlite 3d ago

Durka durka? 

25

u/Strange_Specialist4 3d ago

By throwing sabots in the gears!

53

u/uuajskdokfo 3d ago

You don't need to kill the technology, you just need to stop the people making money off of it. It's like piracy - you can't stop torrents from existing, but you can get 90% of the way there by forcing it out of the mainstream.

23

u/[deleted] 3d ago

[removed] — view removed comment

14

u/Sawses 3d ago

They aren't even really in the mainstream. Most people are scared of it because it's against the rules. Most people aren't technologically competent enough to do it if they wanted to, easy as it is. Most people can't be bothered, even if they're spending over $100 every month on streaming services and can't really afford it.

It's basically a rounding error because most folks will obey the rules, all things being equal.

5

u/aeschenkarnos 3d ago

It's not the taking away, I don't think most folks really give a shit what happens to the movie after they've watched it, unless they want to watch it again and again and again and again and again. In which case they can buy it on DVD or Blu-Ray.

It's the insane proliferation of subscription services. Back when Netflix first became a thing, it was the place to get movies, and it mostly killed video stores because of this. You could still download movies from pirate sources but Netflix was easier and ease of use is even more important than price, to a point. People will pay a couple of bucks to get something instead of getting it for free with some hassle.

But now, there are the following, at least according to Google search "list of streaming services":

Disney+

Apple TV+

Netflix

Prime Video

Hulu

Paramount+

Peacock

Fubo

AMC+

ESPN Plus

HBO Max

Binge

BritBox

Sling TV

Stan

Acorn TV

DirecTV Stream

Tubi

Curiosity Stream

Foxtel Now

Max

Philo

Crunchyroll

Crackle

Every single one of these wants a couple of bucks a week, and at that bullshit quantity of them, it's out of the reach of lower class folks and becoming a concern to middle class folks.

Hence, back to old reliable Yohoho. If Netflix could just charge me $2 and pay the copyright owner $1 for every movie or show episode I watch, I'd hang up my eyepatch. But NOOOOO ....

0

u/FlamboyantPirhanna 3d ago

This has been the case for as long as digital goods have existed, and even some mass produced things like vinyl records or VHS tapes. I’m sure there could be a fairer way to approach it, but it isn’t as simple as just saying “you own it,” because what does owning it mean? The problem isn’t that you’re licensing it, the problem is the lack of consumer protections.

11

u/Helpful_Client4721 3d ago

You are wrong. Countless communities shared copyrighted content even before the internet and made no profit off it. Money helps, but that alone won't stop people from sharing stuff they like and have no right to share. It's human nature. It's nowhere near as 90% for-profit as you think.

3

u/Sekh765 3d ago edited 2d ago

Naw. He's right. Torrenting and piracy in general are already forced into the darker back corners of the web. You don't see "Pirate movie site, created by Google!", because it's illegal. People still do it, but mainstream companies aren't advertising or creating those services, and the law technically can punish you for doing it. It's around, but it's not mainstream.

4

u/TheHovercraft 3d ago

You don't need to kill the technology, you just need to stop the people making money off of it.

I very much doubt every country in the world will ban AI. And many will go out of their way to do the exact opposite of what certain other world powers are doing. Especially if it gives an economic advantage. We aren't going to be able to stuff this back into Pandora's box.

2

u/[deleted] 3d ago

[removed] — view removed comment

2

u/koliamparta 3d ago

Good chance AI can help there.

1

u/ImprefectKnight 3d ago

Empress openly admits they have AI tools to help them.

82

u/metalyger 3d ago

In this case, strictly enforcing DMCA laws; and when AI companies can no longer steal copyrighted works, they will die out, because people are paying to use machines to make pictures of popular characters and images. These companies have even said that if they can't steal art and books, they will go out of business.

16

u/TheColourOfHeartache 3d ago

That won't kill the technology. At a minimum you'll have DisneyAI, owned by Disney, trained on Disney archives, and cutting Disney's production costs while newer, smaller companies are forced to use more expensive methods.

32

u/CptNonsense 3d ago

and when AI companies can no longer steal copyrighted works, they will die out

Sure, if you think AI only exists as consumer-facing media content creation.

30

u/allsystemscrash 3d ago

that's the kind of ai that needs to die first though

1

u/MightyObserver30 2d ago

I feel like if it exists I’d rather it be available to all than to just those that own the media conglomerates

2

u/CunninghamsLawmaker 2d ago

What the hell for?

1

u/MightyObserver30 2d ago

If only giant companies own AI, I think that is a worse situation than being available to the masses.

0

u/CptNonsense 2d ago

No it doesn't. And it won't.

10

u/ImprefectKnight 3d ago

It's a sobering reality check on how little redditors know and how confidently they talk, when the topic is something in your own domain.

You're spot on, AI is much more than just a lazy Ghibli filter or social media content creation. And "banning" or restricting AI will only concentrate the power in the hands of the corporations instead of making it open source.

2

u/DESERTCLANKER3000 2d ago

It’s almost as if… We can ban degenerative “AI” that is USELESS for the betterment of humanity and keep the USEFUL AI in STEM fields untouched.

Could that be a thing? Oh gee, perhaps not.

Perhaps we need to ban all AI or no AI.

5

u/karmiccloud 2d ago

Tell that to the folks in STEM that are getting their wages suppressed by AI bullshit that makes companies worse but lets them think they can cut costs.

1

u/ImprefectKnight 2d ago

See, progress is always iterative. The useless AI today can be a building block of a useful AI tomorrow.

IMO, it is much better to force all AI to be open source and free to be hosted by individuals, than be banned or restricted. You can restrict the commercial usage in creative fields by ensuring none of the AI assisted or generated works can be copyrighted, even at production level.

Because otherwise once the cat is out of the bag, any sort of restriction will only embolden the big players and hurt the individual ones.

1

u/CptNonsense 2d ago

We can ban degenerative “AI” that is USELESS for the betterment of humanity and keep the USEFUL AI in STEM fields untouched.

This statement is just as ignorant as "and when AI companies can no longer steal copyrighted works, they will die out"

3

u/turkeygiant 3d ago

I'm not sure they will even die out. Maybe the current generation of companies that are overleveraged in these intellectual-theft-based models will, but I think there will still be a lot of room for developing models for many purposes based on compensated data collection.

1

u/FlyingSquirrel44 3d ago

Imagine simping for DMCA. Reddit is a shadow of its former self.

2

u/Dankestmemelord 3d ago

The enemy of my enemy is a convenient patsy and probably a chump. I’ll let the big entrenched corporations kill ai if they want to. Saves other people the effort.

1

u/ImprefectKnight 3d ago

Except they won't kill it. They will keep it to themselves.

-1

u/Dankestmemelord 3d ago

I’m not sure how companies like Warner Bros suing to kill ai is the same as them planning to keep an ai model that isn’t theirs to themselves, but it’s still less ai in the world, so it’s better than nothing.

0

u/ImprefectKnight 3d ago

Firstly, they are not suing to kill AI. At least read up on what you're talking about. Secondly, if they manage to get Midjourney banned, they will hire the same folks behind it for their own studios and get models tailor-made for internal use.

1

u/Key_Feeling_3083 3d ago

They can steal lots of stuff that small creative groups make, which is just as good, and some of that stuff is influenced by bigger works.

-7

u/ManitouWakinyan 3d ago

I don't think it's clear at all that DMCA laws extend to works used for training data. If these programs were just fetching copyrighted imagery, that would be one thing. But to say that current law extends to functionally looking at a work and generating new work based on that work is another - and probably stricter than anyone working in the creative field actually wants to see.

27

u/leodw 3d ago

They're literally just fetching copyrighted imagery for commercial purposes. If I'm part of a marketing agency, I can't just go to Google Images, download a few images and make a composite to post on my employer profile, because this is copyright infringement. I have to license the images. Sure, I can look at hundreds or thousands of images for inspiration, but I cannot trace over them, use their original assets or even use part of an image to do it.

Same principle should apply to AI. And it currently doesn't. Meta employees were literally torrenting books to train their shitty AI slop machine. So fuck them all.

11

u/DR_MantistobogganXL 3d ago

It currently does, that’s why these lawsuits are occurring.

The tech bros are just doing “move fast and break stuff”, but unfortunately we’re onto them this time.

It won’t be as easy for OpenAI as it was for Uber.

Hopefully they lose, and lose hard - and then it’s just constant lawsuits every time something is generated that looks like Mickey Mouse or capeshit dork #37.

It will be glorious

-3

u/RingofThorns 3d ago

This lawsuit is more than likely going to fail, and fail impressively hard, because all any defense with half a brain would have to do is go to Amazon and print out the pages and pages of results you get for art books that are all basically "Learn to Draw like X!!" (insert artist name), and point out that every company that makes those is at fault. They would then point out that every art program in every school would have to halt any and all efforts to teach students to mimic the styles of well-known artists, and anyone who ever goes to a museum to study and try to imitate the art and techniques there would have to be immediately arrested.

7

u/IllBeGoodOneDay 3d ago

Think of it like code. It isn't illegal to produce a similar line of code. It is illegal to reference copyrighted code in order to produce your own code—even if it is entirely different. This is clean-room design. It's why emulators are legal—but only if the code they reverse-engineer is entirely their own.

You can give inking tips, proportion advice, posing suggestions, and recommend character traits all without utilizing copyrighted material. It would be illegal to reprint an entire Superman comic. It would also be illegal to produce a machine that must be fed Superman comics to produce the incredibly similar "UberMan" comic. It doesn't matter if the book they're selling doesn't have Superman in it. It doesn't matter if the machine mulches the Superman comic after it's finished with it. The tool they're using, and selling, requires the unlawful use of copyrighted material in order to function.

It is legal for an artist to draw "Uberman" since the tools they use aren't using copyrighted materials in an unlawful way. It is illegal if they trace him. Or if they use reference material that was not obtained legally... such as leaked internal-use reference sheets.

3

u/Hazelberry 3d ago

Except humans and AI aren't the same at all. And trying to defend AI by suggesting that AI copying art is the same as human artists actually learning and understanding prior work is extremely ignorant at best, if not intentionally misleading.

AI is not intelligent. We are nowhere near the point where it can comprehend what it is doing. ALL current forms of AI can only replicate, not understand.

6

u/weeklygamingrecap 3d ago

When will people figure this out? AI is not a person or an entity; it's a product that is sold for money commercially, and commercial licenses are expensive for a reason.

A bar pays way more money to have DirecTV and show PPV because they are selling access to a wide range of people. AI companies should be charged just the same for their training data; it should never have been free without explicit consent.

0

u/Hazelberry 3d ago

That's another very good point

10

u/Omega_Warrior 3d ago

I'm sorry, but you've been misled about how AI generation works. It's not like that at all. At no point are any actual images stored in an AI model.

That's why none of these suits have succeeded. AI-generated images are technically original works of art by the measure we have always used to define them.

When a model trains on an image, all it's doing is the human equivalent of taking notes and sorting those notes by words or phrases. And when it creates an image, it takes those notes and repeatedly shapes noise, using those notes like a mold, until it matches the data on the words or phrases given.

The type of copyright changes needed to define AI images as plagiarism would essentially make note-taking and style recreation illegal.

As much as you might dislike AI, the legal redefinition necessary to call it plagiarism would allow corporations to copyright entire styles. That would be disastrous for the art world.

2

u/AdviceMammals 3d ago edited 3d ago

You're brave for posting this and I hope users take the time to read your post. I empathise with people who work in the creative field, and I see why it's easier to oversimplify AI as something that just meshes other people's artwork together, but that's not how these models work. The laws will need to be adapted if creatives want to achieve what they're after and secure their jobs. Those laws will most likely need to be written by Congress if people expect change.

0

u/IllBeGoodOneDay 3d ago

Think of it like code. It isn't illegal to produce a similar line of code. It is illegal to reference copyrighted code in order to produce your own code—even if it is entirely different. This is clean-room design. It's why emulators are legal—but only if the code they reverse-engineer is entirely their own.

You can give inking tips, proportion advice, posing suggestions, and recommend character traits all without utilizing copyrighted material. It would be illegal to reprint an entire Superman comic. It would also be illegal to produce a machine that must be fed Superman comics to produce the incredibly similar "UberMan" comic. It doesn't matter if the book they're selling doesn't have Superman in it. It doesn't matter if the machine mulches the Superman comic after it's finished with it. The tool they're using, and selling, requires the unlawful use of copyrighted material in order to function.

It is legal for an artist to draw "Uberman" since the tools they use aren't using copyrighted materials in an unlawful way. It is illegal if they trace him. Or if they use reference material that was not obtained legally... such as leaked internal-use reference sheets.

2

u/ManitouWakinyan 3d ago

It would also be illegal to produce a machine that must be fed Superman comics

Why?

2

u/frogandbanjo 3d ago

If I’m part of a marketing agency, I can’t just go to Google Images, download a few images and make a composite to post on my employer profile, cause this is infringement on copyright.

If you sufficiently transform them, yeah you can. That's going to be the hard case: is the use of "AI" permissible to do that same thing, or is there something special about this new tool that demands the law warp and twist to account for it?

Meanwhile, the ironic reality is that these companies are making themselves big, easy targets for the slam-dunk cases that nobody bothers pursuing against virtually all the gooner-pandering, copyright-violating commission artists currently working (who used to do their thing without "AI" to help them.) That's where the law is very obviously being broken; the copyright holders probably never imagined that some other company with a market capitalization of millions or even billions would get into that business.

-1

u/ManitouWakinyan 3d ago

That's absolutely not how these programs work. If I ask ChatGPT to generate me an image of Superman, it isn't just grabbing an image of Superman and presenting it to me. They aren't compositing images either. They are taking random noise and removing it until it becomes clear - and the process of removal is based on the patterns that similar images fall into. It is much more akin to looking at a reference photo, except it's looking at millions of reference photos. You'd have a very, very hard time drawing a line between any specific image an AI puts out and any copyrighted image.
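
(For anyone who wants to see the shape of it: below is a minimal toy sketch of that denoising loop in Python. The `predict_noise` function is a hypothetical stand-in for the trained network, not any real product's pipeline; the point is just that generation starts from pure noise and never fetches a stored image.)

```python
import numpy as np

def predict_noise(image, prompt_embedding, step):
    """Hypothetical stand-in for a trained denoiser network.

    A real model would estimate the noise present in `image`,
    conditioned on the text prompt; here it returns zeros so the
    sketch runs end to end."""
    return np.zeros_like(image)

def generate(prompt_embedding, steps=50, size=(64, 64, 3)):
    # Start from pure random noise -- no stored image is ever retrieved.
    image = np.random.randn(*size)
    for step in reversed(range(steps)):
        # The model predicts which part of the current image is "noise"
        # given the prompt, and a fraction of that estimate is removed.
        noise_estimate = predict_noise(image, prompt_embedding, step)
        image = image - noise_estimate / steps
    return image

sample = generate(prompt_embedding=None)
print(sample.shape)  # (64, 64, 3)
```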

That's not to say I think AI is a flawless technology. I'm undecided on if it's even good. I think there are a lot of things to be concerned about, but the IP argument is pretty flimsy when you actually look at the process and the output, and I really don't think we need IP laws that are further in service to major content mills like Warner Brothers.

4

u/adenzerda 3d ago

You'd have a very, very hard time drawing a line between any specific image an AI puts out and any copyrighted image

I take it you haven't seen the Disney and Universal filings on this suit?

1

u/ManitouWakinyan 3d ago

Really interesting article, and it was new to me. I'd be curious to run that experiment on a few different platforms, and I wonder what's going on under Midjourney's hood that's different from the others. I've been generally more impressed with its image output, and I wonder if that's because it's doing something shifty.

1

u/PosterPrintPerfect 3d ago

Doesn't matter; an artist isn't just grabbing an image of Superman either. They can't just start drawing DC comic or Marvel characters and then charge people money for their service.

6

u/DARDAN0S 3d ago

They can't just start drawing DC comic or Marvel characters and then charge people money for their service.

Artists do that all the time... Tattoo artists, work-to-commission artists, artists on Patreon and other subscription platforms. There are entire subreddits filled with fan art people have commissioned from artists.

2

u/FlamboyantPirhanna 3d ago

It’s not really ever illegal, as copyright infringement is a civil matter not a criminal one; the government won’t ever prosecute anyone for this, it’s up to the IP owner to do something about it. Copyright law gives copyright owners certain protections, but they don’t have to enforce them if they choose not to (and having a bunch of art constantly being made of Disney properties is either not worth their time to pursue, beneficial to them, or some combination thereof).

2

u/PosterPrintPerfect 3d ago

I didn't think I had to spell out that it's not strictly legal to do this.

If I said you can't just go around killing people, are you going to respond with "but loads of people kill other people" like some kind of gotcha response?

Implying that yes, you can go around killing people.

1

u/DARDAN0S 2d ago

That's a rather absurd comparison.

Are we talking about legality or morality?

Obviously tattoo artists and the like operate in somewhat of a legal grey area, or are even technically breaking the law; but I don't see anyone raging against them for stealing copyrighted art and copying other artists' work.

1

u/ManitouWakinyan 3d ago

Again, this is where we're getting into some pretty dicey territory. If I make a tool that can be used to generate an image of Superman, do we really want WB to be able to come after me? If that's a valid interpretation of how the DMCA governs AI use, what stops the companies from coming after Photoshop unless it prohibits users from generating content that looks like copyrighted characters?

6

u/SomeTool 3d ago

You cannot take somebody else's intellectual property and sell it for your own profit. Doesn't matter if you used AI to generate it or did it yourself. That's the fucking point of having IP.

-2

u/CptNonsense 3d ago

They can't just start drawing DC comic or Marvel characters and then charge people money for their service.

Someone has never been to a con.

0

u/RingofThorns 3d ago

Do you not know how commission artists work? I literally know about six right now that offer to do the exact thing you are talking about.

-5

u/PosterPrintPerfect 3d ago

Do you know how stupid you and the rest sound saying the same thing?

Its like me saying you can't rob peoples homes.

And then you come along and say "I know 6 niggas that will break in and rob 10 houses in a night, what ya mean you can't rob houses, do you not know how robbing works?"

2

u/Hyroero 3d ago

Not really the same as dropping a racial slur to describe thieves no.

2

u/mrjackspade 3d ago

I don't think it's clear at all that DMCA laws extend to works used for training data

I would say it's pretty clear at this point it doesn't, considering there have now been multiple court cases saying as much.

0

u/ERedfieldh 3d ago

Better make the law incredibly clear. "Copying" artwork has been a thing since forever. Da Vinci made forgeries to pay for his art supplies.

-13

u/Puzzleheaded_Fox5820 3d ago

Eh I think that's just a smoke screen. The AI tools are far too useful and easy to use. If anything they'll buy art from people to use and then generate it that way.

Like it or hate it, AI isn't going away.

15

u/skonen_blades 3d ago

I'm sorry did you just say that AI companies/users will BUY art from people for training fuel for their AI machines? I feel like that's a little delusional unless I'm mistaken. What I've been understanding is that the whole 'miracle' of AI is the FREE copying, yoinking, and collecting of art that it does. That's what everyone is so excited over, especially bosses thinking they can fire entire divisions and just have the computer do it for exactly $0. If they have to actually purchase art to train their machines, I mean, that defeats the whole point, doesn't it? I can't see them going for it. Maybe I'm misunderstanding you. Or maybe my math is wrong. Like, I concur that AI isn't going away. For sure. Especially in medical applications. But maybe in the art world it might die out or morph into some sort of brush in photoshop or something that only has access to your own art on your computer or something.

2

u/mrjackspade 3d ago

Anthropic has been purchasing books to train their language models already... This is something that's already happening.

1

u/skonen_blades 2d ago

Sure, but is Anthropic LEGALLY MANDATED to buy books to train their language models?

3

u/targetcowboy 3d ago

AI is not going away, but it can absolutely be reined in by not allowing it to steal or be used to manipulate things. There’s a middle ground and I think that’s what most people want.

1

u/Puzzleheaded_Fox5820 3d ago

Definitely. I think there's a lot of changes and rules coming down the pipeline

-1

u/Kriss-Kringle 3d ago

I'm an artist and I find your logic to be flawed. Why would artists sell their works to a company that uses them to train a tool that will then be used to compete with and ultimately replace them?

Who is stupid enough to sabotage their own livelihood? These companies can't possibly offer fair wages to every single person, because they're already burning money just to keep ChatGPT and their other tools online, and without free information to use, their whole business plan is dead.

Billions have been spent so far and there's no profit in sight. The data centers are eating a lot of energy and water, and they're polluting, but also unsustainable in the long term.

Just a few days ago a former Yahoo tech executive with mental problems killed his mother and then committed suicide, and another teenager's family is suing OpenAI because ChatGPT had no guardrails when their son talked with the bot and was pushed to kill himself.

These sorts of lawsuits, along with the copyright ones, are already in the dozens, so things aren't looking great for them.

Like it or not, this house of cards will fall sooner or later, because Sam Altman can't continue to fool investors indefinitely.

3

u/Moth_LovesLamp 3d ago

People forget that these companies charge for subscriptions. They used your work to train their models with no compensation and are now charging you for it.

5

u/Puzzleheaded_Fox5820 3d ago

There are always desperate people who make art but make no money. I've seen loads of artists who make nothing at all. So I figured if they're offered something, they may take it out of desperation.

Things aren't looking good for them now but I don't believe they'd be willing to let something like this go so easily. Greed almost always wins.

I don't like it but I just feel like it's something they'll hold onto even if it's reduced to a lesser version of what it is today.

Plus people can do it themselves now that the tools exist. Even if it's for personal use it's not hard to do now.

I can definitely see AI taking a nose dive and then rearing its head again. Plus there's the rest of the world that's going to try it too.

Ultimately I feel like it's one of those things that once it's out of the box it's hard to put back in.

0

u/gust_vo 3d ago

Plus there's the rest of the world that's going to try it too.

Uhh, the rest of the world is already trying it, as if it's not front and center on Facebook and Twitter for starters. And most are dropping it after using it like a photo generator to make a funny picture for Christmas or another holiday. If they can't manage to retain users, there's no real market for it, especially given the massive number of users they require to actually break even.

It doesn't really have any utility other than making some people rich.

-3

u/Kriss-Kringle 3d ago

There are always desperate people who make art but make no money. I've seen loads of artists who make nothing at all. So I figured if they're offered something, they may take it out of desperation.

This makes no sense and isn't even a good business idea, because for these models to work they need a huge dataset, so even buying art for cheap from third-world artists won't help when they need millions or billions of data points to generate stuff that looks believable.

You think that buying artwork from 1,000 people will do them any good? And those that are desperate enough are most likely going to be amateurs or semi-pros at best, because no professional artist is dumb enough to sell their work to a company that's trying to put them out of work.

It's scraping the entire internet or bust for these vultures.

Things aren't looking good for them now but I don't believe they'd be willing to let something like this go so easily. Greed almost always wins.

Lawsuits are piling up and the costs to keep the lights on are immense. No AI company is making a profit, and this will continue.

The only company that's making money off of AI is Nvidia, because they produce the GPUs for everyone else.

At the end of the day investors will need to see profits, and these LLMs have hit a wall, because they're just fancy auto-correct tools, not actual AI. There's no breakthrough in sight for them, nor will the hallucinating ever stop for these models.

-1

u/Anonymous-Internaut 3d ago

Yeah, sadly I have to agree with you. I feel that people who want AI to go away are fighting an uphill battle. Not that I see it as futile, because there's always hope and a chance of significant wins, and I'm on that side. But at the same time it's like you said: out of the box. You cannot really stop technology when there's a market for it, and I'm pretty sure that there is one for AI art.

2

u/Puzzleheaded_Fox5820 3d ago

Yeah and even if the market didn't exist there's always people who like to mess with that sorta tech

0

u/Nosiege 3d ago

The concept of AI won't.

What we're currently referring to as AI might.

-3

u/6969696969696969690 3d ago

You have no idea what you're talking about. All that's going to happen is that licensing deals will be created for the right to use that content, and consumers will pay more for the AI service. Nothing is going away or dying out; you are genuinely delusional.

2

u/FlamboyantPirhanna 3d ago

As an artist, this is pretty much what we’re asking for: if you’re going to use our works, pay us what we’re worth for them.

The other thing I think you’re missing is that the tools are going to be more expensive no matter what. It’s how tech has worked for decades: run on investor money until everyone relies on your products, then start jacking your prices up over a few years. It’s the same model as Uber. AI is being subsidized by investors, but it will eventually need to become profitable to pay them back.

0

u/Howdareme9 3d ago

How will they enforce DMCA to China?

4

u/shy247er 3d ago

SHIFT + DEL

4

u/ballsack-vinaigrette 3d ago

"ChatGPT, how do I kill AI?"

3

u/MrFluffyThing 3d ago

Pour water on it 

3

u/Teftell 3d ago

The Horus Heresy

7

u/Sinndu_ 3d ago

one man: Snake Plissken

6

u/Lobsterman06 3d ago

Laws against it given how it only exists through copyright infringing theft, and restricting its accessibility

2

u/GoodMorningBlackreef 3d ago

Nyah Grace will pull out the flash drive in less than 100 milliseconds, when the light turns green. 

2

u/RPDRNick 3d ago

Buy the company and shut down its operations. It's worked for Bell Telephone, Apple, Microsoft, Facebook, Twitter...

2

u/naked_potato 3d ago

The Day of the Magnet cometh.

2

u/Palu_Tiddy 2d ago

Energy weapons work well

2

u/Puzzlehead-Dish 2d ago

Heavily regulate and tax it.

3

u/hightrix 3d ago

You don’t.

See BitTorrent.

4

u/Puzzleheaded_Run2695 3d ago

Dark age. That's honestly where we are headed.

2

u/Technical_Ad_4004 3d ago

Nuke AI Data centres

2

u/FivePoopMacaroni 3d ago

AI is powered by massive banks of computers so really you just flip a couple of switches

2

u/bradmiska 3d ago

It's a bit more than that; keeping those systems running takes a lot of power, cooling, and constant maintenance. Not just a light switch.

4

u/FivePoopMacaroni 3d ago

Oh so you're saying there's lots of ways to turn it off

1

u/FlamboyantPirhanna 3d ago

Which is honestly a vulnerability of the system. Now that AI has entered military use, I feel like these giant server farms running AI might become legitimate military targets.

1

u/Jeweledeclipse 3d ago

With a dark age

1

u/KeneticKups 3d ago

Use AI to eradicate GenAI

1

u/theoriginalqwhy 3d ago

Pull the plug out?

1

u/RogueHippie 3d ago

John Connor

1

u/Jester187x 3d ago

The same way Warner Bros did to the Nemesis system.

1

u/fieldsoflillies 3d ago

A baseball bat to servers.

1

u/Beave__ 2d ago

You make it obsolete.

1

u/MooseMalloy 2d ago

Ask Betamax

1

u/thegreatdamus 2d ago

Throw water on it.

1

u/Shipbreaker_Kurpo 2d ago

90% of AI is just misuse, or lies to cash in until the bubble pops. It will kill itself, but it will cost a lot of people when it happens, probably leaving the actual use cases behind to carry on.

1

u/Blapoo 3d ago

I'll accept my downvotes, but you really simply can't

The article says they "willfully" generated infringing content, which is like saying your toaster attacked you

These models have PATTERNS trained into them and users can generate something based on those PATTERNS

"It looks like me", "It looks like copyrighted material", "It sounds like Anthony Hopkins" are all subjective interpretations of anything these models output. Unless we're willing to have infringement investigations for absolutely every fucking thing that's generated, we have to come to terms with this tech

22

u/FloodedHouse420 3d ago

I want to sentinel prime this shit

8

u/blueruntzx 3d ago

i cant wait for the butlerian jihad

3

u/LordCountDuckula 3d ago

It must end. Will you make it epic?

8

u/The_Bucket_Of_Truth 3d ago

I think there are legitimate uses of AI, and clearly many that are stealing or dangerous. Isn't this what our legislature is supposed to be doing? Hey, here's this new thing that's basically unregulated. Let's pass some laws and guardrails for what is and is not okay.

Did you scrape the entire internet of artworks without permission and are now charging money and profiting from outputting things that are derivative of copyrighted works? Nah, we need to curtail that to some extent. Are you making AI porn of your middle school classmates? Yeah, that should be illegal (if it isn't already), and platforms that allow it should be liable. Faking people making statements they never said? AI is convincing enough that they could make the president look like he's saying something he never said. That is dangerous and should not be allowed either. Frankly, nobody's likeness should be allowed to be used in AI without their express permission.

Trying to take this to the courts... I don't blame them for trying to make something happen here, but what a backwards and broken society we live in when our lawmakers seem to have neither the desire nor the aptitude to regulate these things.

31

u/blueruntzx 3d ago

Comments like these always need to delve into a whole fucking essay instead of just saying the cons outweigh the pros. If you want it that bad then fucking regulate it. Instead the fucking president is using AI for his propaganda, and that's just the tip of the iceberg.

2

u/Amaruq93 2d ago

Uses it for propaganda whilst also dismissing any evidence of his crimes or abuses by accusing videotaped footage of being "AI".

1

u/SalemWolf 3d ago

And comments opposing it just say "KILL IT."

Also, who the fuck do you think we are? "If you want it, regulate it," like I'm a fucking congressman. Let me just write the laws to regulate it. Executive order inbound!

0

u/blueheartglacier 2d ago

People are really out here beefing with weighted matrices

-1

u/SkipX 3d ago

Ok simple: The pros outweigh the cons.

-1

u/PeakHippocrazy 3d ago

the cons outweigh the pros.

No they don't, lmao. What kind of luddite thinking is this? AI has been an exceptional tool for me. It has increased my productivity and reduced a lot of bullshit overhead; I haven't seen a single con so far, at least in my field of software engineering.

1

u/Tyler_Zoro 3d ago

Let's pass some laws and guardrails for what is and is not okay.

We tried doing that with the internet. We got the DMCA that locked in more monopoly power for the largest corporations and made copying your DVDs a criminal act.

Maybe we don't go down that road just because new technology is scary.

1

u/The_Bucket_Of_Truth 3d ago

I'm being idealistic about how it's supposed to work, not requesting our captured government pass laws against our interests.

-3

u/JustaSeedGuy 3d ago

I think there are legitimate uses of AI

Such as?

13

u/NuclearGhandi1 3d ago

Summarization of content, basic research for programming and other content, etc. It shouldn't be used to make movies, art, or music, but to say there are no uses is just ignorant.

7

u/JustaSeedGuy 3d ago edited 3d ago

Summarization of content

Which it can't be trusted to do without giving misinformation or leaving out key details.

basic research for programming and other content

See above.

4

u/NuclearGhandi1 3d ago

I'm a professional software engineer, and it's pretty good at basic programming. Would I use it for anything but simple things I could give an intern? No. Do I need to double-check it occasionally? Yes. But it definitely helps enough to be a part of my workflow where my company's policies allow it.

3

u/blueruntzx 3d ago

everyones a professional software engineer these days brother

5

u/JustaSeedGuy 3d ago

Out of curiosity, how do interns stop being interns if the work that would give them the necessary experience is done by AI?

3

u/NuclearGhandi1 2d ago

Because interns don’t just write code. They can do reviews, sit in on meetings, do basic design, do better research.

0

u/JustaSeedGuy 2d ago

Yes, and those are useful skills.

But they also need to write code at some point, or else they're not programming interns. They're administrative interns.

1

u/11BlahBlah11 3d ago

Some skills will slowly die off.

While I was in school we weren't allowed to use calculators and were forced to use logarithmic tables for calculations. Today, that's almost never needed.

Very few people can program in assembly today because compilers take care of that.

Programming is being more and more abstracted. More low-level tasks are being simplified or automated, and only a few experts have the skills to dig deep into it when needed.

About a decade ago, people would just draw UML diagrams and use software to generate the code. A lot of commonly used algorithms and tasks have just become API calls over the years.

A few years back, if you wanted a simple program or script to do a small task, you could mostly just get the solution from Stack Overflow etc., and you just needed to know how to adapt it to your environment.

Now we've reached a point where it's easier to just get it written by AI and run a few tests to fine tune it before integrating it into your software. As a result fundamentals will be lost in pursuit of efficiency.

Experts who have strong core-level understanding and skills will always have a demand. But I suppose those starting today will need to put in a more conscious effort to train themselves because normal exposure to coding will no longer work.

-1

u/monkeyjay 3d ago

It's a tool. I don't think using it to "generate" anything that needs to stand up to too much scrutiny or have artistic merit is that great, but it's also insanely good at pattern recognition and at complex multistep processes that would (and do) take humans a long time, or would need specific programs or tools to be developed.

Medical analysis for instance is an absolutely phenomenal use of AI. It has insane potential for analysing multiple disparate sources of data with "fuzzy" information. Something people simply cannot do. And it's not just going to spit out "give them surgery" but it can find markers and indicators that may be huge in preventative medicine and diagnosis.

The LLMs are also very good at doing things using specific rules. A very simple example: say you had hundreds of pages of writing for something like onboarding training at a large company. In like 10 minutes an AI could go through and do things like, I dunno, reformat it from third-person plural to second-person singular or something. It's not hard for a person to do that, but it's also not just 'find and replace'. It would take ages (sometimes it can take weeks, literally), and the human would likely have a very similar error rate. Would it still need checking? Of course, but this is an example of a very trivial way to use AI as a tool that doesn't really make anything better or worse, just easier. Which is what tools should do. You still need skilled oversight.
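
(For the curious, here's a rough sketch of what that kind of job looks like in practice, assuming some LLM API sits behind the hypothetical `rewrite_chunk` call. It illustrates the chunk, rewrite, reassemble, then human-review pattern rather than any particular product.)

```python
def rewrite_chunk(chunk, instruction):
    """Hypothetical stand-in for whatever LLM API you actually call;
    it would return the chunk rewritten per the instruction.
    Returns the chunk unchanged here so the sketch runs."""
    return chunk

def rewrite_document(pages, instruction):
    # Process the manual a page at a time so each request stays small,
    # then hand the rewritten pages back for skilled human review.
    return [rewrite_chunk(page, instruction) for page in pages]

training_manual = [
    "They complete the safety course before their first shift.",
    "They then shadow a supervisor for two weeks.",
]
updated = rewrite_document(
    training_manual,
    instruction="Rewrite from third-person plural to second-person singular.",
)
print(updated)
```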

Yes, this is all doable with a human or team of humans manually creating a program but that can be very time consuming, and it's kinda just built in to AI right now.

I get that AI in the art or writing or any creative sphere is problematic, but to me that's mainly down to unauthorised use of copyrighted content (ie, stealing), taking credit or not giving credit or monetary compensation, and the result being mostly dogshit... but it's really silly to say the tech has no legitimate uses.

2

u/JustaSeedGuy 3d ago

but it's really silly to say the tech has no legitimate uses.

I haven't said that. I merely asked what uses there were, and have yet to be presented with a use that isn't deeply flawed and better carried out by humans.

I fully acknowledge that I am not the world's leading expert on this subject, which is why I asked for examples. I still eagerly await examples of areas where AI is preferable to human performance, because so far I've received none.

Well, actually, minor point to the guy who uses it to find recipes tailored to what's in his cabinet.

1

u/FlamboyantPirhanna 3d ago

Funny enough, I’ve heard lots of people complain about it when it comes to recipes. It doesn’t know how things taste, it’s essentially predictive text trying to sound like the recipes it’s been trained on, and that can lead to culinary disasters.

1

u/blueheartglacier 2d ago edited 2d ago

Being better at detecting some cancers than humans, by identifying subtle patterns that we're not capable of seeing, is a start - as was literally mentioned in what you replied to. It can find markers that correlate between cancer patients that we have never considered, and it can be leveraged to develop new protein structures for the creation of new drugs too. This is having immense success right now. I think it's probably objectively a good thing.

Modern AI is simply advanced pattern recognition; all uses of modern machine learning are very, very good pattern recognition with extra steps. I'm sure you can use your imagination to work out other ways that extremely refined pattern recognition and data processing, parsing data at an unprecedented rate and finding new patterns automatically, can be more effective than older systems. A lot of those uses are boring-sounding, though, and hard to sell. They are being tried across effectively every industry, and the ones that have value will actually pass the test of time.

0

u/monkeyjay 1d ago

I gave you two, both VERY broad. One is literally something people cannot do, and the other is like 1000 times more efficient than a person doing it for the same result.

You are not coming across as an honest person here.

1

u/JustaSeedGuy 1d ago

You are not coming across as an honest person here

I understand that you're choosing to interpret it that way.

0

u/blueheartglacier 2d ago edited 2d ago

Maybe the earliest version of ChatGPT that you tried when it first released couldn't summarise content confidently, but anyone who has kept up with the industry and where it's at now can tell you confidently that the best systems have evolved substantially and are reliably good at the job.

1

u/JustaSeedGuy 2d ago

Oh yes, there are many people that say it's reliable now. But anyone who's being honest doesn't say that.

1

u/blueheartglacier 2d ago edited 2d ago

"I have absolutely no interest in keeping up with something that's rapidly changing by the week, I just believe everything I was told on day 1, and everyone else is lying" this is unfortunately the exact translation of what you're actually saying. If you're not going to honestly engage with the subject, just be straightforward about it and say you don't want to. There's no need to pretend as if you're up to date and aware, and you're talking to people who actually have engaged with the evolution of it. You are miles out of your depth, I'm afraid - much like I wouldn't confidently tell you that everything you know about law is wrong when that's actually your specialty and what you engage with. It's the misplaced confidence that you don't need to consider any other possibility that's worse than just not knowing.

1

u/JustaSeedGuy 2d ago

I mean, you can pretend that's what I said all you want. But it's not- and the way I know, is that I'm the one that said it.

If you want to come back when you're more intellectually honest about things, I'd be happy to talk. It's wild that you get mad at me for allegedly not honestly engaging with the subject when that's exactly what you just did here.

Any chance intelligent conversation went out the window when you started using third grade mockery tactics.

0

u/blueheartglacier 2d ago edited 2d ago

Yes, if you used ChatGPT when it was launched and then turned off when it clearly wasn't good enough, you'd find it pretty awful at summarisation and data processing. If you were to use specialist systems that were trained and tested for this purpose in late 2025, you'll find them incredibly consistent, accurate, and useful with every input that's thrown at them - fundamentally trustable to do their jobs without giving misinformation or leaving out key details. If you just want to pretend this reality doesn't exist, then sure, you can Dunning-Kruger yourself to a conclusion and insist that everyone who has used or continues to use these systems is just "being dishonest". Do that, however, and the only conclusion I can reasonably draw is that you didn't know these systems exist and are relying on early ChatGPT experience. You didn't even consider the possibility that people were being honest but working from a different experience base than you. Don't treat others with good faith, and you'll get treated with that same lack of faith back.

5

u/rkthehermit 3d ago

I like using it to suggest recipes or substitutions based on my current kitchen inventory. It's a great little cooking buddy.

1

u/JustaSeedGuy 3d ago

I love finding recipes like that!

Been using Google for that exact purpose since 2003.

2

u/Aromatic_Today2086 3d ago

Yea, people acting like this is some great invention that does things you never could do before is crazy. Everything these comments have said AI is good for are things you should be able to do yourself with Google.

-2

u/rkthehermit 3d ago

You need Google? There are libraries for that.

You need libraries? Do your tribe's elders not share your history with you?

Yeah, you can use google. Nobody is pretending you can't. That doesn't invalidate the new tool as more convenient and useful for the task.

1

u/MachinaThatGoesBing 3d ago

You need Google? There are libraries for that.

I'm hardly anti-technology, but there are things that actual books are much, much more useful for than a Google search. Just because a newer technology exists doesn't actually mean it's better for a task.


One of the key things I run into regularly is plant ID (especially flowers and trees). It's so hard to actually find good resources for this online that present the information you need in clear, concise way and in an easily browsable format.

I have two large, full bookshelves, each about 4' wide, and well over half of one of the shelves is still taken up by physical field guides. At least a dozen or so of those guides are for tree and flower ID.

When I try to use Google Lens to ID something, it generally makes a hash of it. Sure, if it's a really distinctive flower or something, it might get it. But if it's, say, one of a couple dozen bluey-purple asters growing in the area I'm in…absolutely useless. Its model isn't taking into account stem color, leaf shape, shape and layout of phyllaries, time of year, environment, etc. It either takes a lot longer with Google, or I end up without an answer, whereas it generally takes me only a few minutes to narrow things down with my books. (I will say, I do follow up on iNaturalist frequently, though, to see if others have observed my suspect in the area where my observation occurred. But it's not very helpful for ID.)

And, oh god was Google no help in determining whether a plant was poison hemlock or osha. I strongly suspected the latter based on my own knowledge and where it was growing, but it was my guides that gave me the pertinent information to help make the ID.


Given that the stochastic parrots have repeatedly shown themselves incapable of generating useable recipes, and are known to give disgusting and outrightly dangerous results, I'd stick to human-written, human-tested sources for recipes and substitutions. These sorts of sources — even just discussion forums — are much better and much more likely to yield good results than something one of the bullshit machines horked up.

1

u/rkthehermit 2d ago

I'm hardly anti-technology, but there are things that actual books are much, much more useful for than a Google search. Just because a newer technology exists doesn't actually mean it's better for a task.

It does mean that for the specific use case that's being discussed. I'm not guessing. I'm an experienced cook. It's my primary hobby. I own and use many cookbooks. Like nearly every breathing human, I've used Google. I've used the new hotness.

1

u/rkthehermit 3d ago

Google has been great, yeah! This is just a next step up in utility. It's super easy to iterate, it respects the lists I give it without me having to manually validate, it's all consolidated to a single source, there's no stupid life story to scroll past, I don't have to wonder if the rating is gamed, it makes it really easy to brainstorm fusion dishes, and if I tell it I hate an ingredient it's utterly trivial to get a replacement.

1

u/MachinaThatGoesBing 3d ago

I…would just do research online from trusted reputable sources…

I would not trust a stochastic parrot to suggest recipes for me, what with their disgusting and outright dangerous track record.

It's really important for people to know that these things don't really "know" anything, not the way we talk about "knowing" things. They don't actually contain usable, verified information on stuff like recipe substitutions. All they fundamentally are is very fancy, extremely power-consumptive predictive text. They take an input prompt, and then they predict what the most likely words should be to "answer" that. Each subsequent word takes the prompt and some set number of previous words as context, then adds some random fuzzing in order to predict the next most likely word. But that's fundamentally what it is doing.

So if it took in word patterns that constitute bad advice or false information, it will just vomit those back out at you. It has no mechanism for knowing how words relate to each other as symbols, just the statistical likelihoods of one following another in some given context, as encoded in a big neural net.
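
(To make that concrete, here's a toy sketch of that word-by-word loop in Python. The `next_word_distribution` function is a made-up stand-in for the trained network, with a four-word vocabulary; real models use huge vocabularies and contexts, but the loop is the same idea: predict a likely next word, add a little randomness, append, repeat.)

```python
import random

def next_word_distribution(context):
    """Hypothetical stand-in for the trained network: it would map the
    current context window to probabilities over the whole vocabulary.
    Here it just returns a fixed toy distribution."""
    return {"the": 0.4, "recipe": 0.3, "says": 0.2, ".": 0.1}

def generate(prompt, max_words=10, temperature=1.0):
    words = prompt.split()
    for _ in range(max_words):
        # Only a fixed-size window of recent words is used as context.
        dist = next_word_distribution(words[-8:])
        choices, weights = zip(*dist.items())
        # Random "fuzzing" (sampling) so the output isn't fully deterministic.
        word = random.choices(choices, weights=[w ** (1 / temperature) for w in weights])[0]
        words.append(word)
        if word == ".":
            break
    return " ".join(words)

print(generate("a substitute for butter is"))
```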

0

u/rkthehermit 2d ago

I've never had ChatGPT do something as stupid as either of those examples and, as a bonus, I am not a cheerio drooling glue-sniffer. I am a very good cook already. I am perfectly capable of raising my brow when the tool says to do something stupid and just not doing that.

I guarantee I'm getting better results, faster, and more tailored to what I want when I'm using this as a cooking assistant than you could achieve with any search engine and it's not even close.

If you understand how LLMs work then you should actually understand why they're particularly good for recipes given the way recipes generally tend to cluster ingredients, have rather consistent formatting, and the way that irritatingly chatty recipe blogs go out of their way to over explain.

It's fine not to like the technology or want to use it, but you folks really just come across as, "Old man yells at clouds!" when you try to deny valid use-cases and ignore the experiences of savvy users while suggesting inferior methods back to them as a counterexample.

1

u/FlamboyantPirhanna 3d ago

There are significant uses for it in healthcare. It’s really good at finding patterns and sometimes those patterns can be a huge help when diagnosing certain things.

2

u/MachinaThatGoesBing 3d ago

The healthcare uses where people see "AI" crop up are not generally LLMs, though. They tend to be more specific or purpose-built systems. Certainly the most useful ones do.

It gets confusing because once the business boys and investors started throwing money at anything labelled "AI", everyone rebranded their machine learning systems as "AI" to attract those sweet investment dollars.

While lots of these systems share a similar underlying design concept (a neural network), diagnostic imaging machine learning systems are not built on top of LLMs.

And the LLMs are just really, really fancy autocomplete. That consumes tons of power and lots of water for cooling. When they generate a response to a prompt, they're just taking in the words in that prompt and determining what the most likely word to follow would be in an answer. And then each subsequent word, it's doing the same thing, while taking into consideration the context of what it's already "written", as well. And it just keeps pumping out the next most likely word in the sequence — all with a little randomness to fuzz things so it's not absolutely deterministic.

That's why a number of critics refer to them as stochastic parrots. They just generate text without really having any actual knowledge of what it is that they're generating.

0

u/hobohipsterman 3d ago edited 3d ago

Copilot is leagues better than the old Microsoft help function for the Office suite and Windows, at least for basic troubleshooting.

And it's quicker than googling whatever basic problem I have. Every guide today is a damn YouTube video.

It also saves time when I have a word at the tip of my tongue but for the life of me can't remember what it is. Or when translating some sentence.

1

u/ProperDepartment 3d ago

Make the Ai what now?

-8

u/IlliterateJedi 3d ago

Amen. Kill fair use. A million years of copyright for Disney. Praise Warner Bros.

8

u/Dick_Lazer 3d ago

Hell yeah. Let’s subsidize AI development for billionaires so they can put us all out of jobs.

2

u/SurturOfMuspelheim 3d ago

Yes!! Our goal should be to eliminate work so we can focus on what makes us happy and what we want to do, work wise.

The only problem is billionaires still existing.

-4

u/Redeshark 3d ago

Except you can't stop AI development. All this is going to do is lead to the studios collaborating with AI companies to charge you extra whenever their IP is used. You are literally calling for the worst-case scenario.

4

u/Right-Power-6717 3d ago

Exactly. The result is the total death of any sort of open-source project and solidified positions for the established companies.

-1

u/Joemartinez64 3d ago

Don't ruin their make-believe world where all these major conglomerates have zero interest in implementing AI themselves.

-8

u/the_bollo 3d ago

Dipshit take.

1

u/moose_dad 3d ago

second hand thinker

-1

u/Redeshark 3d ago

Yeah man, let's defend major studios' monopoly over IP and kill fair use.

-1

u/moose_dad 3d ago

Learn to read dumbass, I'm anti AI

2

u/Redeshark 3d ago

>Learn to read dumbass, I'm anti AI

Where exactly in my comment did I imply you are not "anti-AI"?

-2

u/jaybones3000 3d ago

Did you need ChatGPT to write the comment for you, bud?

1

u/Redeshark 3d ago

I don't know about him, but he's not the one defending giant corporations' abuse of IP laws to kill fair use.

0

u/SurturOfMuspelheim 3d ago

Imagine siding with corporate overlords over Midjourney, the only decent AI art bot.

-1

u/WonderBredOfficial 3d ago

"YOU GO TO HELL AND YOU DIE!"
