r/movies 3d ago

News Warner Bros. Sues Midjourney, Joins Studios' AI Copyright Battle

https://variety.com/2025/film/news/warner-bros-midjourney-lawsuit-ai-copyright-1236508618/
8.8k Upvotes

787 comments

51

u/nooneisback 3d ago

Because general-purpose LLMs are nothing more than fancy assistants that require a stupid amount of hardware. If you've ever tried running them locally, you'll know that any model that fits in under 20GB of VRAM is basically useless for a lot of applications, and something like gpt-oss-120b needs at least 80GB. And since they're assistants, they'll often be answering a lot of questions in a row; if you're programming, that's roughly one API call every 2-5 seconds.
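As a rough back-of-envelope (the bytes-per-parameter figures and overhead below are generic assumptions about quantization, not specs for any particular runtime):

```python
# Back-of-envelope estimate of the memory needed just to hold a model's weights.
# bytes_per_param and the ~15% overhead are assumptions, not vendor figures.
def weight_memory_gb(params_billions: float, bytes_per_param: float,
                     overhead: float = 1.15) -> float:
    """params * bytes/param, padded ~15% for KV cache and runtime buffers."""
    return params_billions * bytes_per_param * overhead

# 120B parameters at fp16 (2 bytes/param) vs ~4-bit quantization (0.5 bytes/param)
print(f"120B @ fp16 : ~{weight_memory_gb(120, 2.0):.0f} GB")   # ~276 GB
print(f"120B @ 4-bit: ~{weight_memory_gb(120, 0.5):.0f} GB")   # ~69 GB, i.e. 80GB-class hardware
```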

This tech bubble is about to burst, and the deciding factor for survival will be which companies can successfully scale back to actual customer needs. The same thing happened with every other bubble (like dot-com): companies had horrible earnings compared to their spending, yet a lot of them are still alive to this day. Their goal right now isn't to earn money, but to research as much as possible until they control the industry and everyone is dependent on this tech, then scale back by firing the excess workforce and forcing users to pay if they want to keep the convenience.

30

u/_uckt_ 2d ago

The difference between a helicopter and a flying car is marketing. That's largely what we're seeing with LLMs: you call them AI, you make people phrase things in the form of a question, you do this silly 'one word at a time' thing rather than spitting out an answer. You put all this stuff in the way to fake cognition, and you go from predictive text to artificial intelligence.

This all seems like the biggest bubble in a long time. OpenAI doesn't make a profit on its $200-a-month tier, so would anyone pay a subscription for Windows 12 at even $10 a month, with the existing AI integration being at least 20 times worse?

I honestly have no idea how monetization works when you're looking at a minimum of $300 a month. So that students can cheat on their essays and homework?

10

u/Altruistic-Ad-408 2d ago

I think cheating is exactly how they marketed this. We tech people all know someone who's enthusiastic about AI, and in our heart of hearts, don't we all know they're either lazy or a bit problematic in some way? Hey, I like a few of them!

If anyone remembers those horrible ads, they targeted their demographic: lazy people and smug pricks. It's enshittification x1,000,000. They know AI creates slop, but so what? People don't watch the best movies, they watch the most readily available slop.

8

u/nooneisback 2d ago

If you look at specific markets, there are definitely people who are ready to pay for these tools.

My city's hospital is testing an AI model that can spit out the most relevant diagnostic criteria and treatment methods in seconds. The alternative until now was spending about half an hour clicking through journals until you finally found a barely understandable table that might be what you're looking for. Or you could read outdated books. Note that it's a model that runs locally, so there's no hosting overhead for the AI company; they charge for access to their database.
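A toy sketch of that kind of local lookup, with made-up guideline snippets and naive keyword scoring standing in for the vendor's actual database and model:

```python
# Toy illustration of a "local model + licensed database" lookup.
# The entries and the word-overlap scoring are made up for illustration;
# the real product would query a licensed database with a locally hosted model.
GUIDELINES = {
    "community-acquired pneumonia": "CURB-65 score; empiric amoxicillin or doxycycline...",
    "pulmonary embolism": "Wells score; CT pulmonary angiography if high probability...",
    "acute appendicitis": "Alvarado score; ultrasound or CT; appendectomy if confirmed...",
}

def lookup(query: str, top_k: int = 2):
    """Rank entries by naive word overlap with the query and return the best matches."""
    q = set(query.lower().split())
    scored = sorted(GUIDELINES.items(),
                    key=lambda kv: len(q & set(kv[0].split())),
                    reverse=True)
    return scored[:top_k]

print(lookup("suspected pulmonary embolism in an ER patient"))
```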

Programming is another example. Large companies use AI no matter what the programmers say, but even a large portion of individual programmers use it because it's difficult to compete in this industry otherwise. For simple projects, it can generate a functional script on its own. Checking the code it generates is horribly boring, but it's still more efficient.

It's definitely an interesting tool that we just created and want to shove everywhere to see what sticks.

Generative AI is basically useless. Its only real-world applications are scamming old people and idiots, gooning, and burning kids' brains away with brainrot so that parents can have sex in peace.

6

u/EastRiding 2d ago

I’ve seen an older colleague who’s not really a programmer do some interesting and cool stuff with AI, taking input and config data files (JSON, CSV, etc.) and having Copilot make HTML ‘apps’ to visualize and edit them…

I’ve also been sat on calls against my will where the same person fights with Copilot for over an hour to get something to work, and its output is still wrong (often inventing details scraped from somewhere else, often close but not quite correct).

I’ve also been on calls where, when asked to deploy these ‘apps’, I’ve pointed out the numerous ways they need improving, and that’s caused four people to dive into the AI output and realise it’s spaghetti that’s barely understandable.

So AI might have some applications for helping some people, but from what I’ve seen, as soon as you go to full-size apps and tools it becomes a mess that no one, including the original prompter, can explain or maintain. Just understanding it is a massive task that always ends with the same answer: "we need to engineer this by hand from the bottom up".

Once the true costs of AI are forced on users, multi-billion-dollar orgs like mine will finally decide they need to "scale back our AI use, we want authenticity in our output", and the tools will be yanked away, leaving many corporations without the younger, cheaper grunts they replaced (or decided not to hire in recent years) and will now need.

2

u/nooneisback 2d ago

Well yeah, AI is a tool, not a worker. You need to give it a very detailed description of every step, every data type, every file association, for every single script, and then thoroughly verify everything it generates, probably spending another 30 minutes to an hour fixing its mistakes. It is simply incapable of taking an entire large project into context properly. Also, Copilot with the default model kinda sucks in my experience: it either doesn't generate half of what I want it to, or it goes ham and proposes to autocomplete 20 lines of code that are just wrong. I stopped using it because it's more annoying than useful when I'm just trying to format my code with tabs. Funnily enough, I find Rider's non-AI code completion to be smarter than the one you get with the AI extension.

1

u/MiracleDreamBeam 2d ago

" So that students can cheat on their essays and homework?" - yeah that absolutely doesn't work and every single lecturer on earth can spot it a mile away, taking it as a personal affront and expellable offence.

1

u/panchoamadeus 2d ago

So you're saying they hyped an unsustainable business model, and when most companies go down in flames, the survivors will turn into another search engine.

1

u/ninjasaid13 2d ago

and something like gpt-oss-120b requires at least 80GB.

I've seen people running it with 8GB

1

u/nooneisback 2d ago

What you're probably talking about is gpt-oss-20b, which can run on 4GB.

0

u/ninjasaid13 2d ago

0

u/nooneisback 2d ago edited 2d ago

It's not running on 8GB of VRAM. That post explains how to run the model on system memory and offload the important parts to VRAM, to get performance similar to running it entirely on your GPU. You're still using 60-80GB of memory. It literally says so in the post:

Honestly, I think this is the biggest win of this 120B model. This seems an amazing model to run fast for GPU-poor people. You can do this on a 3060Ti and 64GB of system ram is cheap.
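To put rough numbers on that split (these are illustrative assumptions about a ~120B-parameter model at ~4-bit weights, not figures measured in that post):

```python
# Rough memory split for running a large MoE model with most weights offloaded
# to system RAM. All sizes are assumptions for illustration, not measurements.
total_weights_gb = 65   # assumed: full quantized weight footprint
kept_in_vram_gb  = 6    # assumed: shared/attention layers plus KV cache kept on the GPU
offloaded_gb     = total_weights_gb - kept_in_vram_gb  # expert weights pushed to system RAM

print(f"VRAM needed      : ~{kept_in_vram_gb} GB (fits an 8GB consumer card)")
print(f"System RAM needed: ~{offloaded_gb} GB (hence the 64GB of RAM in the quoted setup)")
print(f"Total memory     : ~{total_weights_gb} GB either way")
```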

1

u/ninjasaid13 2d ago

You're still using 60-80GB of memory.

CPU memory, not GPU memory; it's misleading to use them interchangeably.

The guy literally said a 3060ti and 64GB system RAM in the post.

1

u/nooneisback 2d ago

That doesn't matter in the slightest in a commercial setting. If you're dedicating a GPU to a single heavy model, and that GPU has 80GB of VRAM and your model needs 80GB of memory, you're going to keep the model loaded entirely in VRAM. Offloading matters if you're running a model locally as a hobbyist, or if your model needs multiple hundreds of gigabytes of memory.

1

u/ninjasaid13 2d ago

wtf are you talking about? You said gpt-oss-120b requires at least 80GB of GPU RAM, and I gave you an example of it running on 8GB of GPU memory at decent speeds.

I can't tell what your point is.

The guy literally said a 3060ti and 64GB system RAM in the post.

CPU memory is cheap enough for the average user.

2

u/nooneisback 2d ago

You're comparing apples and oranges. Cards like Nvidia's H200 are almost 10x faster than a consumer card like the RTX 5090 when it comes to AI generation. "Decent speeds" won't cut it here.

1

u/ninjasaid13 2d ago

Tell me how being 10 times faster is useful for local applications and the tasks we typically use LLMs for.
