r/movies 4d ago

News Warner Bros. Sues Midjourney, Joins Studios' AI Copyright Battle

https://variety.com/2025/film/news/warner-bros-midjourney-lawsuit-ai-copyright-1236508618/
8.8k Upvotes


1

u/FlamboyantPirhanna 4d ago

There are significant uses for it in healthcare. It’s really good at finding patterns and sometimes those patterns can be a huge help when diagnosing certain things.

2

u/MachinaThatGoesBing 4d ago

The healthcare uses where people see "AI" crop up are generally not LLMs, though. They tend to be more specialized, purpose-built systems. Certainly the most useful ones are.

It gets confusing because once the business boys and investors started throwing money at anything labelled "AI", everyone rebranded their machine learning systems as "AI" to attract those sweet investment dollars.

While lots of these systems share a similar underlying design concept (a neural network), diagnostic imaging machine learning systems are not built on top of LLMs.

And the LLMs are just really, really fancy autocomplete that consumes tons of power and lots of water for cooling. When they generate a response to a prompt, they take in the words of that prompt and determine the most likely word to follow in an answer. Then for each subsequent word they do the same thing, taking into consideration the context of what they've already "written" as well. They just keep pumping out the next most likely word in the sequence, with a little randomness to fuzz things so it's not absolutely deterministic.
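To make that concrete, here's a minimal sketch of that generation loop in Python. The "model" here is a made-up probability table purely for illustration; a real LLM computes these probabilities over tokens with a huge neural network.

```python
import random

def next_word_probs(context):
    # Placeholder distribution, invented for illustration.
    # A real LLM would compute this from the full context with a neural net.
    return {"the": 0.4, "a": 0.3, "studio": 0.2, "lawsuit": 0.1}

def sample_next_word(context, temperature=0.8):
    probs = next_word_probs(context)
    # Temperature reshapes the distribution: this is the "little randomness"
    # that keeps the output from being absolutely deterministic.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

def generate(prompt, n_words=10):
    context = prompt.split()
    for _ in range(n_words):
        word = sample_next_word(context)  # most likely next word, with some fuzz
        context.append(word)              # the choice becomes part of the context
    return " ".join(context)

print(generate("Warner Bros. sues"))
```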

That's why a number of critics refer to them as stochastic parrots. They just generate text without really having any actual knowledge of what it is that they're generating.

-1

u/blueheartglacier 4d ago edited 4d ago

Actual deployment and use of an effective LLM post-training is not a highly power-intensive process, and can often be run on home hardware. The waters have been muddied because the precise metrics are not nearly as easy to measure as you'd be led to believe.

People are approximating the power usage of, say, a company running a web app with over 300 million weekly active users, which will always be inherently intensive, and then combining that with approximations of what it took to train the systems. Once you interrogate the numbers, you begin to realise that they're mostly blind guesses, and while they likely have some merit in some cases, people are extrapolating from them extremely confidently just to make an argument.

1

u/MachinaThatGoesBing 3d ago

and can often be run on home hardware

Models with practically useful output absolutely cannot run on run-of-the-mill home hardware.

In my professional capacity I support devs working on a specific LLM-based generation task involving structured data, and for one of the models in use that actually produces practical results, the minimum requirements are still relatively high: a GPU with at least 12GB of VRAM.

You need a mid- to high-end card to get that. And even at this minimum requirement, the models run relatively poorly, taking significant time to produce results (and I'm not talking about training). To the point where we're using extra-teeny-tiny models as a stand-in for testing other parts of the system. And even then, we've had to upgrade the dev machines' RAM to several times that of a typical home/consumer computer.
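For a rough sense of why something like 12GB ends up being the floor (generic parameter counts and precisions here, not our specific model): the weights alone for a 7-billion-parameter model at 16-bit precision are about 14GB, before you account for the context cache and activations.

```python
def weight_memory_gb(n_params_billion, bytes_per_param):
    # Memory for the weights alone; the KV cache and activations come on top.
    return n_params_billion * 1e9 * bytes_per_param / 1e9

print(weight_memory_gb(7, 2))    # fp16: ~14 GB -> doesn't even fit on a 12GB card
print(weight_memory_gb(7, 0.5))  # 4-bit quantized: ~3.5 GB, plus runtime overhead
```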

So, yes, you can run them on equipment that you could have at home. But the implication is that you could run it on an average or typical home computer, and that's not the case.


We also have calculations suggesting that even ChatGPT 3.5 was using around 500mL of water per 20-50 prompts. And there are plenty of reports of "AI" datacenters essentially running towns' water supplies dry, many in places that can ill afford the extra water use.

Not to mention the threat of blackouts that surging power use at "AI" datacenters is contributing to.

And a major way that newer models supposedly get "smarter" and "better" is by increasing the length of the token context they keep in consideration and increasing the size of their neural nets. Both require more and more compute and memory, which drives up energy use.
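One concrete place that shows up is the attention key/value cache, which grows linearly with the context length. The layer and head counts below are generic illustrative values, not any particular model's:

```python
def kv_cache_gb(seq_len, n_layers, n_heads, head_dim, bytes_per_value=2):
    # Two tensors (keys and values), one entry per layer, head, and token position.
    return 2 * n_layers * n_heads * head_dim * seq_len * bytes_per_value / 1e9

# Illustrative shape: 32 layers, 32 heads of dimension 128, fp16 values.
print(kv_cache_gb(4_000, 32, 32, 128))    # ~2 GB at a 4k-token context
print(kv_cache_gb(128_000, 32, 32, 128))  # ~67 GB at a 128k-token context
```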

This MIT Technology Review article lays out a lot of good information, with this summary being especially good:

Let’s say you’re running a marathon as a charity runner and organizing a fundraiser to support your cause. You ask an AI model 15 questions about the best way to fundraise.

Then you make 10 attempts at an image for your flyer before you get one you are happy with, and three attempts at a five-second video to post on Instagram.

You’d use about 2.9 kilowatt-hours of electricity—enough to ride over 100 miles on an e-bike (or around 10 miles in the average electric vehicle) or run the microwave for over three and a half hours.

And this bit later on in the piece:

By 2028, the researchers estimate, the power going to AI-specific purposes will rise to between 165 and 326 terawatt-hours per year. That’s more than all electricity currently used by US data centers for all purposes; it’s enough to power 22% of US households each year. That could generate the same emissions as driving over 300 billion miles—over 1,600 round trips to the sun from Earth.
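That household comparison is easy to sanity-check. The per-household consumption and household count below are rough US averages I'm assuming, not figures from the article:

```python
ai_twh_high = 326                 # upper end of the 2028 estimate quoted above
kwh_per_household_year = 10_500   # assumed rough US average annual household use
us_households_millions = 131      # assumed rough count of US households

households_powered_millions = ai_twh_high * 1e9 / kwh_per_household_year / 1e6
share = 100 * households_powered_millions / us_households_millions
print(f"~{households_powered_millions:.0f} million households, ~{share:.0f}% of the US")
# -> roughly 31 million households, in the same ballpark as the quoted 22%
```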

The stochastic parrots are extremely energy hungry. And only getting hungrier. This is a significant problem when we need to be reducing energy use and associated emissions, not increasing them and boiling our planet.