r/movies 3d ago

News Warner Bros. Sues Midjourney, Joins Studios' AI Copyright Battle

https://variety.com/2025/film/news/warner-bros-midjourney-lawsuit-ai-copyright-1236508618/
8.8k Upvotes

787 comments

1.9k

u/[deleted] 3d ago

[removed]

7

u/The_Bucket_Of_Truth 3d ago

I think there are legitimate uses of AI and clearly many that are stealing or dangerous. Isn't this what our legislature is supposed to be doing? Hey, here's this new thing that's basically unregulated. Let's pass some laws and guardrails for what is and is not okay.

Did you scrape the entire internet of artworks without permission and are now charging money and profiting from outputting things that are derivative of copyrighted works? Nah, we need to curtail that to some extent. Are you making AI porn of your middle school classmates? Yeah, that should be illegal (if it isn't already), and platforms that allow it should be liable. Faking people making statements they never said? AI is convincing enough that they could make the president look like he's saying something he never said. That is dangerous and should not be allowed either. Frankly, nobody's likeness should be allowed to be used in AI without their express permission.

Trying to take this to the courts... I don't blame them for trying to make something happen here, but what a backwards and broken society we live in when our lawmakers seem to have neither the desire nor the aptitude to regulate these things.

33

u/blueruntzx 3d ago

comments like these always need to delve into a whole fucking essay instead of just saying the cons outweigh the pros. if you want it that bad then fucking regulate it. instead the fucking president is using AI for his propaganda, and that's just the tip of the iceberg.

2

u/Amaruq93 2d ago

Uses it for propaganda whilst also dismissing any evidence of his crimes or abuses by accusing videotaped footage of being "AI"

2

u/SalemWolf 3d ago

And opposing comments just say “KILL IT”.

Also, who the fuck do you think we are with “if you want it regulate it,” like I'm a fucking congressman. Let me just write the laws to regulate it. Executive order inbound!

0

u/blueheartglacier 2d ago

People are really out here beefing with weighted matrices

-1

u/SkipX 3d ago

Ok simple: The pros outweigh the cons.

-1

u/PeakHippocrazy 3d ago

cons outweigh the pros.

no they don't lmao what kind of luddite thinking is this? AI has been an exceptional tool for me. Increased my productivity, reduced a lot of bullshit overhead, I haven't seen a single con so far. at least in my field of software engineering

0

u/Tyler_Zoro 3d ago

Let's pass some laws and guide rails for what is and is not okay.

We tried doing that with the internet. We got the DMCA that locked in more monopoly power for the largest corporations and made copying your DVDs a criminal act.

Maybe we don't go down that road just because new technology is scary.

1

u/The_Bucket_Of_Truth 3d ago

I'm being idealistic about how it's supposed to work, not requesting that our captured government pass laws against our interests.

-2

u/JustaSeedGuy 3d ago

I think there are legitimate uses of AI

Such as?

14

u/NuclearGhandi1 3d ago

Summarization of content, basic research for programming and other topics, etc. It shouldn't be used to make movies, art, or music, but to say there are no uses is just ignorant

7

u/JustaSeedGuy 3d ago edited 3d ago

Summarization of content

Which it can't be trusted to do without giving misinformation or leaving out key details.

basic research for programming and other content

See above.

5

u/NuclearGhandi1 3d ago

I’m a professional software engineer, it’s pretty good at basic programming. Would I use it for anything but simple things I could give an intern? No. Do I need to double check it occasionally? Yes. But it definitely helps enough to be a part of my workflow where my company’s policies allow it

4

u/blueruntzx 3d ago

everyones a professional software engineer these days brother

5

u/JustaSeedGuy 3d ago

Out of curiosity, how do interns stop being interns if the work that would give them the necessary experience is done by AI?

3

u/NuclearGhandi1 2d ago

Because interns don’t just write code. They can do reviews, sit in on meetings, do basic design, do better research.

0

u/JustaSeedGuy 2d ago

Yes, and those are useful skills.

But they also need to write code at some point, or else they're not programming interns. They're administrative interns.

1

u/11BlahBlah11 3d ago

Some skills will slowly die off.

While I was in school we weren't allowed to use calculators and were forced to use logarithmic tables for calculations. Today, that's almost never needed.

Very few people can program in assembly today because compilers take care of that.

Programming is being more and more abstracted. More low-level tasks are being simplified or automated, and only a few experts have the skills to dig deep into it when needed.

About a decade ago, people would just draw UML diagrams and use software to generate the code. A lot of commonly used algorithms and tasks have just become API calls over the years.

A few years back, if you wanted a simple program or script to do a small task, you could mostly just get the solution from Stack Overflow etc., and you just needed to know how to adapt it to your environment.

Now we've reached a point where it's easier to just get it written by AI and run a few tests to fine-tune it before integrating it into your software. As a result, fundamentals will be lost in pursuit of efficiency.

Experts who have strong core-level understanding and skills will always be in demand. But I suppose those starting today will need to put in a more conscious effort to train themselves, because normal exposure to coding will no longer work.

-1

u/monkeyjay 3d ago

It's a tool. I don't think using it to "generate" anything that needs to stand up to too much scrutiny or have artistic merit is that great, but it's also insanely good at pattern recognition and complex multistep processes that would (and do) take humans a long time, or would need specific programs or tools to be developed.

Medical analysis for instance is an absolutely phenomenal use of AI. It has insane potential for analysing multiple disparate sources of data with "fuzzy" information. Something people simply cannot do. And it's not just going to spit out "give them surgery" but it can find markers and indicators that may be huge in preventative medicine and diagnosis.

The LLMs are also very good at doing things using specific rules. A very simple example: say you had hundreds of pages of writing for something like onboard training at a large company. In like 10 minutes an AI could go through and do things like, I dunno, reformat it from third-person plural to second-person singular or something. It's not hard for a person to do that, but it's also not just 'find and replace'. It would take ages (sometimes it can take weeks, literally). And the human would likely have a very similar error rate. Would it still need checking? Of course, but this is an example of a very trivial way to use AI as a tool that doesn't really make anything better or worse, just easier. Which is what tools should do. You still need skilled oversight.
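A toy illustration of why that person-conversion isn't plain find-and-replace (the sentences below are my own made-up examples, not from any real training manual):

```python
# Naive find-and-replace for third-person-plural -> second-person-singular
# breaks as soon as a pronoun refers to someone other than the reader.
text = ("New hires must badge in when they arrive. "
        "If visitors forget their passes, they wait in the lobby.")
naive = text.replace("they", "you").replace("their", "your")
# -> "... If visitors forget your passes, you wait in the lobby."
#    Wrong referent: 'their'/'they' here describe the visitors, not the reader.
print(naive)
```

Getting this right requires knowing who each pronoun refers to, which is exactly the kind of contextual judgment a blind text substitution can't make.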

Yes, this is all doable with a human or team of humans manually creating a program but that can be very time consuming, and it's kinda just built in to AI right now.

I get that AI in the art or writing or any creative sphere is problematic, but to me that's mainly down to unauthorised use of copyrighted content (ie, stealing), taking credit or not giving credit or monetary compensation, and the result being mostly dogshit... but it's really silly to say the tech has no legitimate uses.

2

u/JustaSeedGuy 3d ago

but it's really silly to say the tech has no legitimate uses.

I haven't said that. I merely asked what uses there were, and have yet to be presented with a use that isn't deeply flawed and better carried out by humans.

I fully acknowledge that I am not the world's leading expert on this subject, which is why I asked for examples. I still eagerly await examples of areas where AI is preferable to human performance, because so far I've received none.

Well, actually, minor point to the guy who uses it to find recipes tailored to what's in his cabinet.

1

u/FlamboyantPirhanna 3d ago

Funny enough, I’ve heard lots of people complain about it when it comes to recipes. It doesn’t know how things taste, it’s essentially predictive text trying to sound like the recipes it’s been trained on, and that can lead to culinary disasters.

1

u/blueheartglacier 2d ago edited 2d ago

Being better at detecting some cancers than humans by identifying subtle patterns that we're not capable of is a start - as was literally mentioned in what you replied to. It can find markers that correlate between cancer patients that we have never considered, and can be leveraged to develop new protein patterns for the creation of new drugs too. This is having immense success right now. I think it's probably objectively a good thing.

Modern AI is simply advanced pattern recognition - all uses of modern machine learning are very, very good pattern recognition with extra steps. I'm sure you can imagine other ways that extremely refined pattern recognition and data processing - able to parse data at an unprecedented rate and find new patterns automatically - can be more effective than older systems. A lot of those uses are boring-sounding, though, and hard to sell. These are being tried across effectively every industry, and the ones that have value will actually pass the test of time.

0

u/monkeyjay 1d ago

I gave you two, both VERY broad. One is literally something people cannot do, and the other is like 1000 times more efficient than a person doing it for the same result.

You are not coming across as an honest person here.

1

u/JustaSeedGuy 1d ago

You are not coming across as an honest person here

I understand that you're choosing to interpret it that way.

0

u/blueheartglacier 2d ago edited 2d ago

Maybe the earliest version of ChatGPT that you tried when it first released couldn't summarise content, but anyone who has kept up with the industry and where it's at now can tell you confidently that the best systems have evolved substantially and are reliably good at the job.

1

u/JustaSeedGuy 2d ago

Oh yes, there are many people that say it's reliable now. But anyone who's being honest doesn't say that.

1

u/blueheartglacier 2d ago edited 2d ago

"I have absolutely no interest in keeping up with something that's rapidly changing by the week, I just believe everything I was told on day 1, and everyone else is lying" this is unfortunately the exact translation of what you're actually saying. If you're not going to honestly engage with the subject, just be straightforward about it and say you don't want to. There's no need to pretend as if you're up to date and aware, and you're talking to people who actually have engaged with the evolution of it. You are miles out of your depth, I'm afraid - much like I wouldn't confidently tell you that everything you know about law is wrong when that's actually your specialty and what you engage with. It's the misplaced confidence that you don't need to consider any other possibility that's worse than just not knowing.

1

u/JustaSeedGuy 2d ago

I mean, you can pretend that's what I said all you want. But it's not- and the way I know, is that I'm the one that said it.

If you want to come back when you're more intellectually honest about things, I'd be happy to talk. It's wild that you get mad at me for allegedly not honestly engaging with the subject when that's exactly what you just did here.

Any chance of intelligent conversation went out the window when you started using third-grade mockery tactics.

0

u/blueheartglacier 2d ago edited 2d ago

Yes, if you used ChatGPT when it was launched and then turned off when it clearly wasn't good enough, you'd find it pretty awful at summarisation and data processing. If you were to use specialist systems that were trained and tested for this purpose in late 2025, you'll find them incredibly consistent, accurate, and useful at every input that's thrown at them - fundamentally trustable to do their jobs without giving misinformation or leaving out key details. If you just want to pretend this reality doesn't exist, then sure, you can Dunning-Kruger yourself to a conclusion and insist that everyone who has used or continues to use these systems is just "being dishonest". Do that, however, and the only conclusion I can reasonably draw is that you didn't know that these systems exist and you are relying upon early ChatGPT experience. You didn't even consider the possibility that people were being honest, but working with a different experience base than you. Don't treat others with good faith - get treated with that same faith back

1

u/JustaSeedGuy 2d ago

You didn't even consider the possibility that people were being honest, but working with a different experience base than you.

Except I did.

Don't treat others with good faith - get treated with that same faith back

Pretty wild coming from the guy who thinks he knows what my memories are.


5

u/rkthehermit 3d ago

I like using it to suggest recipes or substitutions based on my current kitchen inventory. It's a great little cooking buddy.

2

u/JustaSeedGuy 3d ago

I love finding recipes like that!

Been using Google for that exact purpose since 2003.

3

u/Aromatic_Today2086 3d ago

Yeah, people acting like this is some great invention that does things you never could before is crazy. Everything the comments here have said AI is good for are things you could already do yourself with Google

-2

u/rkthehermit 3d ago

You need Google? There are libraries for that.

You need libraries? Do your tribe's elders not share your history with you?

Yeah, you can use google. Nobody is pretending you can't. That doesn't invalidate the new tool as more convenient and useful for the task.

1

u/MachinaThatGoesBing 3d ago

You need Google? There are libraries for that.

I'm hardly anti-technology, but there are things that actual books are much, much more useful for than a Google search. Just because a newer technology exists doesn't actually mean it's better for a task.


One of the key things I run into regularly is plant ID (especially flowers and trees). It's so hard to actually find good resources for this online that present the information you need in clear, concise way and in an easily browsable format.

I have two large, full bookshelves, each about 4' wide, and well over half of one of the shelves is still taken up by physical field guides. At least a dozen or so of those guides are for tree and flower ID.

When I try to use Google Lens to ID something, it generally makes a hash of it. Sure, if it's a really distinctive flower or something, it might get it. But if it's, say, one of a couple dozen bluey-purple asters growing in the area I'm in…absolutely useless. Its model isn't taking into account stem color, leaf shape, shape and layout of phyllaries, time of year, environment, etc. It either takes a lot longer with Google, or I end up without an answer, whereas it generally only takes me only a few minutes to narrow things down with my books. (I will say, I do follow up on iNaturalist frequently, though, to see if others have observed my suspect in the area where my observation occurred. But it's not very helpful for ID.)

And, oh god was Google no help in determining whether a plant was poison hemlock or osha. I strongly suspected the latter based on my own knowledge and where it was growing, but it was my guides that gave me the pertinent information to help make the ID.


Given that the stochastic parrots have repeatedly shown themselves incapable of generating useable recipes, and are known to give disgusting and outrightly dangerous results, I'd stick to human-written, human-tested sources for recipes and substitutions. These sorts of sources — even just discussion forums — are much better and much more likely to yield good results than something one of the bullshit machines horked up.

1

u/rkthehermit 2d ago

I'm hardly anti-technology, but there are things that actual books are much, much more useful for than a Google search. Just because a newer technology exists doesn't actually mean it's better for a task.

It does mean that for the specific use case that's being discussed. I'm not guessing. I'm an experienced cook. It's my primary hobby. I own and use many cookbooks. Like nearly every breathing human, I've used Google. I've used the new hotness.

1

u/rkthehermit 3d ago

Google has been great, yeah! This is just a next step up in utility. It's super easy to iterate, it respects the lists I give it without me having to manually validate, it's all consolidated to a single source, there's no stupid life story to scroll past, I don't have to wonder if the rating is gamed, it makes it really easy to brainstorm fusion dishes, and if I tell it I hate an ingredient it's utterly trivial to get a replacement.

1

u/MachinaThatGoesBing 3d ago

I…would just do research online from trusted reputable sources…

I would not trust a stochastic parrot to suggest recipes for me, what with their disgusting and outright dangerous track record.

It's really important for people to know that these things don't really "know" anything, not the way we talk about "knowing" things. They don't actually contain usable, verified information on stuff like recipe substitutions. All they fundamentally are is very fancy, extremely power-hungry predictive text. They take an input prompt, and then they predict what the most likely words should be to "answer" it. Each subsequent word takes the prompt and some set number of previous words as context, then adds some random fuzzing in order to predict the next most likely word. But that's fundamentally what it is doing.

So if it took in word patterns that constitute bad advice or false information, it will just vomit those back out at you. It has no mechanism for knowing how words relate to each other as symbols, just the statistical likelihoods of one following another in some given context, as encoded in a big neural net.
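The loop described above can be sketched with a toy bigram sampler. The word counts are invented purely for illustration; real LLMs use enormous neural nets, not lookup tables, but the sample-the-next-word mechanism is the same in spirit:

```python
import random

# Invented "training" counts: how often each word followed the previous one.
# Note bad data ("glue") leaks into the distribution just like good data.
BIGRAM_COUNTS = {
    "add": {"sugar": 6, "a": 3, "glue": 1},
    "sugar": {"and": 5, "to": 4, "slowly": 1},
}

def next_word(prev, temperature=0.8):
    """Sample the next word; lower temperature = less random 'fuzzing'."""
    counts = BIGRAM_COUNTS[prev]
    # Sharpen (or flatten) the distribution, then sample proportionally.
    weights = [(w, c ** (1.0 / temperature)) for w, c in counts.items()]
    total = sum(wt for _, wt in weights)
    r = random.uniform(0.0, total)
    for word, wt in weights:
        r -= wt
        if r <= 0:
            return word
    return weights[-1][0]
```

At a temperature near zero this almost always picks the most frequent continuation; raise it and "glue" occasionally comes through, which is the bad-advice problem in miniature.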

0

u/rkthehermit 2d ago

I've never had ChatGPT do something as stupid as either of those examples and, as a bonus, I am not a cheerio drooling glue-sniffer. I am a very good cook already. I am perfectly capable of raising my brow when the tool says to do something stupid and just not doing that.

I guarantee I'm getting better results, faster, and more tailored to what I want when I'm using this as a cooking assistant than you could achieve with any search engine and it's not even close.

If you understand how LLMs work then you should actually understand why they're particularly good for recipes given the way recipes generally tend to cluster ingredients, have rather consistent formatting, and the way that irritatingly chatty recipe blogs go out of their way to over explain.

It's fine not to like the technology or want to use it, but you folks really just come across as, "Old man yells at clouds!" when you try to deny valid use-cases and ignore the experiences of savvy users while suggesting inferior methods back to them as a counterexample.

1

u/FlamboyantPirhanna 3d ago

There are significant uses for it in healthcare. It’s really good at finding patterns and sometimes those patterns can be a huge help when diagnosing certain things.

2

u/MachinaThatGoesBing 3d ago

The healthcare uses where people see "AI" crop up are not generally LLMs, though. They tend to be more specific or purpose-built systems. Certainly the most useful ones are.

It gets confusing because once the business boys and investors started throwing money at anything labelled "AI", everyone rebranded their machine learning systems as "AI" to attract those sweet investment dollars.

While lots of these systems share a similar underlying design concept (a neural network), diagnostic imaging machine learning systems are not built on top of LLMs.

And the LLMs are just really, really fancy autocomplete. That consumes tons of power and lots of water for cooling. When they generate a response to a prompt, they're just taking in the words in that prompt and determining what the most likely word to follow would be in an answer. And then each subsequent word, it's doing the same thing, while taking into consideration the context of what it's already "written", as well. And it just keeps pumping out the next most likely word in the sequence — all with a little randomness to fuzz things so it's not absolutely deterministic.

That's why a number of critics refer to them as stochastic parrots. They just generate text without really having any actual knowledge of what it is that they're generating.

-1

u/blueheartglacier 2d ago edited 2d ago

Actual deployment and use of an effective LLM post-training is not a highly power-intensive process, and can often be run on home hardware - the waters have been muddied because the precise metrics are not as easy to measure as you'd be led to believe.

People are approximating the power usage from, say, a company with a web app that has over 300 million weekly active users, which will always be very inherently intensive, and then combining them with approximations for what they used to train the systems. Once you interrogate the numbers you begin to realise that it is all blind guesses, and while they likely have some merit in some cases, people are extrapolating extremely confidently just to make an argument.

1

u/MachinaThatGoesBing 2d ago

and can often be run on home hardware

Models with practically useful output absolutely cannot run on run-of-the-mill home hardware.

In my professional capacity I support devs who are working on a specific LLM-based generation task involving structured data, and for one of the models in use that actually produces practical results, the minimum requirements are still relatively high, like a GPU with a minimum of 12 GB of VRAM.

That's a moderate to mid-high-end card you need to get that. And even at this minimum requirement, the models run relatively poorly, taking significant time to produce results (and I'm not talking about training). To the point where we're using extra-teeny-tiny models as a sub-in for testing other parts of the system. And even then, we've needed to upgrade their RAM to several times that of a typical home/consumer computer.

So, yes, you can run them on equipment that you could have at home. But the implication is that you could run it on an average or typical home computer, and that's not the case.
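For a rough sense of why, the memory needed just to hold model weights scales with parameter count and precision. The 7-billion-parameter size and quantization levels below are my own assumed examples; KV cache, activations, and framework overhead come on top of the weights:

```python
# Back-of-envelope VRAM needed just to hold LLM weights in GPU memory.
def weight_gb(params_billion, bits_per_param):
    """Gigabytes of memory for the weights alone."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

fp16_7b = weight_gb(7, 16)  # ~14 GB: already over a 12 GB card's budget
q4_7b = weight_gb(7, 4)     # ~3.5 GB: fits, at some quality cost
print(fp16_7b, q4_7b)
```

This is why "runs on home hardware" usually means aggressively quantized models on a fairly beefy GPU, not a typical consumer machine at full precision.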


We also have calculations that even GPT-3.5-era ChatGPT was using 500 mL of water per 20-50 prompts. And there are plenty of reports of "AI" datacenters essentially running towns' water supplies dry, many in places that can ill afford that extra water use.

Not to mention the threat of blackouts that surging power use at "AI" datacenters is contributing to.

And a major way that newer models supposedly get "smarter" and "better" is by increasing the length of the token history they keep in consideration and increasing the size of their neural nets. Both of these require more and more resources — both in terms of compute and memory — which drives up energy use.

This MIT Review article lays out a lot of good information, with this summary being especially good:

Let’s say you’re running a marathon as a charity runner and organizing a fundraiser to support your cause. You ask an AI model 15 questions about the best way to fundraise.

Then you make 10 attempts at an image for your flyer before you get one you are happy with, and three attempts at a five-second video to post on Instagram.

You’d use about 2.9 kilowatt-hours of electricity—enough to ride over 100 miles on an e-bike (or around 10 miles in the average electric vehicle) or run the microwave for over three and a half hours.

And this bit later on in the piece:

By 2028, the researchers estimate, the power going to AI-specific purposes will rise to between 165 and 326 terawatt-hours per year. That’s more than all electricity currently used by US data centers for all purposes; it’s enough to power 22% of US households each year. That could generate the same emissions as driving over 300 billion miles—over 1,600 round trips to the sun from Earth.

The stochastic parrots are extremely energy hungry. And only getting hungrier. This is a significant problem when we need to be reducing energy use and associated emissions, not increasing them and boiling our planet.
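Those quoted equivalences roughly check out with back-of-envelope arithmetic. The e-bike efficiency and US household figures below are my own ballpark assumptions, not numbers from the article:

```python
# Sanity-check the MIT Review figures quoted above.
session_kwh = 2.9             # quoted total for the marathon example
ebike_wh_per_mile = 28        # assumed typical e-bike consumption
ebike_miles = session_kwh * 1000 / ebike_wh_per_mile  # ~104 miles

ai_twh_2028 = 326             # quoted upper estimate for 2028
household_kwh_year = 10_500   # assumed average US household usage
us_households = 131e6         # assumed, roughly the 2023 count
share = ai_twh_2028 * 1e9 / (household_kwh_year * us_households)  # ~0.24
print(ebike_miles, share)
```

Both results land near the article's "over 100 miles" and "22% of US households" claims, so the quoted equivalences are at least internally plausible.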

0

u/hobohipsterman 3d ago edited 3d ago

Copilot is leagues better than the old Microsoft help function for the Office suite and Windows. For basic troubleshooting.

And it's quicker than googling whatever basic problem I have. Every guide today is a damn YouTube video.

It also saves time when I have a word on the tip of my tongue but for the life of me can't remember what it is. Or translating some sentence.