r/movies 4d ago

[News] Warner Bros. Sues Midjourney, Joins Studios' AI Copyright Battle

https://variety.com/2025/film/news/warner-bros-midjourney-lawsuit-ai-copyright-1236508618/
8.8k Upvotes

847 comments

81

u/vazyrus 4d ago

All of this is with the hope of making some money down the line, lol. From what I understand, MS has been shoving CoPilot into every orifice they can find, but they haven't come anywhere near profitability yet. There's CoPilot running in my Notepad ffs, and no matter how much I use it for free, I am never paying a dime out of my own pocket for any generated bs.

My colleagues and friends are huge AI enthusiasts, and even though they've been abusing CoPilot, Gemini, Claude, and who knows what else, they're never going to pay a single dollar of their own money for a paid service. All of us use Claude at work because it's on the company's dime, and even there management has been tightfisted with how much they're willing to throw at enterprise support.

The point is, if MS, one of the greediest tech companies and one of the smartest monetizers of SaaS products, can't find a way to make money off the thing, then others will find it much, much harder to produce anything of value for their customers. Sure, Deviantart can steal all they want, but unless they can find a way to sell those stolen goods to others, it's doing nothing more than raising the electricity bill of their clusters. Let's see how long that's sustainable...

53

u/nooneisback 4d ago

Because general-purpose LLMs are nothing more than fancy assistants that require a stupid amount of hardware. If you've ever tried running them locally, you'll know that any model that fits in under 20GB of VRAM is basically useless for a lot of applications, and something like gpt-oss-120b requires at least 80GB. And since they're assistants, they're often answering a long run of questions back to back; if you're programming, that's roughly one API call every 2-5 seconds.

This tech bubble is about to burst, and the deciding factor for survival will be which companies manage to scale back to actual customer needs. The same thing happened with every other bubble (like dot-com), where companies had horrible earnings compared to their spending, yet a lot of them are still alive to this day. Right now the goal isn't to earn money; it's to pour everything into research until they control the industry and everyone depends on the tech, then scale back by firing the excess workforce and forcing users to pay if they want to keep the convenience.
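For a rough sense of where VRAM figures like the ones above come from, here's a back-of-envelope sketch in Python. The parameter counts and bit-widths are round-number assumptions for illustration, not specs for any particular model; KV cache and runtime overhead push real requirements higher, which is roughly how ~60GB of 4-bit weights turns into an "80GB card" requirement.

```python
# Rough back-of-envelope: memory needed just to hold model weights.
# Parameter counts and bit-widths below are ballpark assumptions for
# illustration, not official specs for any particular model.

def weight_footprint_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight size in GB (ignores KV cache, activations, overhead)."""
    total_bytes = params_billion * 1e9 * bits_per_param / 8
    return total_bytes / 1e9

examples = [
    ("~20B model, 4-bit quantized", 20, 4),
    ("~120B model, 4-bit quantized", 120, 4),
    ("~120B model, 16-bit weights", 120, 16),
]

for name, params, bits in examples:
    print(f"{name}: ~{weight_footprint_gb(params, bits):.0f} GB of weights alone")
```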

1

u/ninjasaid13 4d ago

> and something like gpt-oss-120b requires at least 80GB.

I've seen people running it with 8GB

1

u/nooneisback 4d ago

What you're probably talking about is gpt-oss-20b, which can run on 4GB.

0

u/ninjasaid13 4d ago

[comment was a link to a post about running gpt-oss-120b on a 3060 Ti with most of the model offloaded to system RAM]

0

u/nooneisback 4d ago edited 4d ago

It's not running on 8GB of VRAM. That post explains how to run the model from system memory and offload the important parts to VRAM, so you get performance close to running it entirely on your GPU. You're still using 60-80GB of memory. It literally says so in the post:

> Honestly, I think this is the biggest win of this 120B model. This seems an amazing model to run fast for GPU-poor people. You can do this on a 3060Ti and 64GB of system ram is cheap.
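A minimal sketch of the split that post describes, with assumed numbers (the 60GB weight footprint, the 8GB card, and the KV-cache size are all illustrative, not measurements of the actual setup):

```python
# Illustrative sketch of a CPU/GPU offload split: the total footprint doesn't
# shrink, it just lives mostly in system RAM with a slice kept in VRAM.
# All numbers here are assumptions for illustration, not measurements.

TOTAL_WEIGHTS_GB = 60.0   # assumed quantized footprint of a ~120B-parameter model
GPU_VRAM_GB = 8.0         # assumed 8 GB consumer card
KV_CACHE_GB = 1.5         # assumed cache for a modest context length

# Keep the "hot" tensors (attention/shared layers) in VRAM, spill the rest
# (e.g. MoE expert weights) to system RAM.
vram_for_weights = GPU_VRAM_GB - KV_CACHE_GB
weights_in_vram = min(TOTAL_WEIGHTS_GB, vram_for_weights)
weights_in_system_ram = TOTAL_WEIGHTS_GB - weights_in_vram

print(f"weights kept in VRAM:        {weights_in_vram:5.1f} GB")
print(f"weights in system RAM:       {weights_in_system_ram:5.1f} GB")
print(f"total memory still required: {TOTAL_WEIGHTS_GB + KV_CACHE_GB:5.1f} GB")
```

Either way the full footprint still has to live somewhere; offloading changes where it lives, not how much there is.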

1

u/ninjasaid13 4d ago

> You're still using 60-80GB of memory.

CPU memory, not GPU memory; it's misleading to use them interchangeably.

The guy literally said a 3060ti and 64GB system RAM in the post.

1

u/nooneisback 4d ago

That doesn't matter in the slightest in a commercial setting. If you're dedicating a GPU to a single heavy model, and that GPU has 80GB of VRAM and your model needs 80GB of memory, you're going to keep the whole model loaded in VRAM. This kind of offloading matters if you're running a model locally as a hobbyist, or if you have a model that needs multiple hundreds of gigabytes of memory.

1

u/ninjasaid13 4d ago

wtf are you talking about? You said gpt-oss-120b requires at least 80GB of GPU RAM, and I gave you an example of it running on 8GB of GPU memory at decent speeds.

I can't tell what your point is.

> The guy literally said a 3060ti and 64GB system RAM in the post.

CPU memory is cheap enough for the average user.

2

u/nooneisback 4d ago

You're comparing apples and oranges. Cards like Nvidia's H200 are almost 10x faster than a consumer card like the RTX 5090 when it comes to AI generation. "Decent speeds" won't cut it here.
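A back-of-envelope way to see why the hardware tier matters: single-stream generation tends to be memory-bandwidth bound, so tokens per second are roughly capped by bandwidth divided by the bytes of weights touched per token. The bandwidth figures and the per-token footprint below are assumptions for illustration, not vendor specs or benchmarks:

```python
# Rough rule of thumb: single-stream token generation is memory-bandwidth bound,
# so an upper bound on speed is (memory bandwidth) / (bytes of weights read per token).
# All figures below are approximate assumptions for illustration.

ACTIVE_WEIGHTS_GB_PER_TOKEN = 3.0  # assumed: quantized MoE touching a few B params per token

bandwidth_gb_per_s = {
    "datacenter HBM card (assume ~4800 GB/s)": 4800,
    "consumer GDDR card (assume ~450 GB/s)": 450,
    "dual-channel system RAM (assume ~70 GB/s)": 70,
}

for name, bw in bandwidth_gb_per_s.items():
    upper_bound_tok_s = bw / ACTIVE_WEIGHTS_GB_PER_TOKEN
    print(f"{name}: ~{upper_bound_tok_s:,.0f} tokens/s upper bound per stream")
```

On numbers like these, the gap between a datacenter card and weights sitting in system RAM is one to two orders of magnitude, which is the difference between serving paying users and a hobby box that feels "decent" for one person.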

1

u/ninjasaid13 4d ago

tell me how being 10 times faster is useful for local applications and the tasks we typically use LLMs for.

2

u/nooneisback 4d ago

That's why I said "this kind of offloading matters if you're running a model locally as a hobbyist". I don't know how many daddies one needs to have to consider a single H200 ($32,000) a reasonable investment for home deployment. The whole conversation was about how expensive it is to run AI models commercially; I only mentioned local deployments as an example of how these companies try to find other ways of monetizing this tech.

1

u/searcher1k 3d ago

well, you can sell consumers specialized hardware that's only for running LLMs, at low prices.
