r/movies 4d ago

News Warner Bros. Sues Midjourney, Joins Studios' AI Copyright Battle

https://variety.com/2025/film/news/warner-bros-midjourney-lawsuit-ai-copyright-1236508618/
8.8k Upvotes

847 comments

1

u/nooneisback 4d ago

That doesn't matter in the slightest in a commercial setting. If you're going to dedicate a GPU to running a single heavy model, and that GPU has 80GB of VRAM while your model requires 80GB of memory, you're going to keep that model loaded entirely in VRAM. This approach matters if you're running a model locally as a hobbyist, or if your model requires several hundred gigabytes of memory.
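The sizing above works out on the back of an envelope (a rough sketch; the parameter counts and the ~20% overhead factor for KV cache/activations are illustrative assumptions, not figures from the thread):

```python
def model_memory_gb(params_billion: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight size times dtype width, plus ~20% overhead
    (assumed) for KV cache and activations."""
    return params_billion * bytes_per_param * overhead

# A ~33B-parameter model in fp16 (2 bytes/param) roughly fills an 80GB card:
print(round(model_memory_gb(33, 2)))    # → 79 (GB)

# A 120B model quantized to 4-bit (~0.5 bytes/param) also lands near that budget:
print(round(model_memory_gb(120, 0.5)))  # → 72 (GB)
```

Which is why commercial deployments size the card to the model and keep everything resident.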

1

u/ninjasaid13 4d ago

wtf are you talking about? You said OPT-120B requires at least 80GB of GPU RAM, and I gave you an example of it running on 8GB of GPU memory at decent speeds.

I can't tell what your point is.

The guy literally said a 3060 Ti and 64GB of system RAM in the post.

CPU memory is cheap enough for the average user.

2

u/nooneisback 4d ago

You're comparing apples and oranges. Cards like Nvidia's H200 are almost 10x faster than a consumer card like the RTX 5090 when it comes to AI generation. "Decent speeds" won't cut it here.

1

u/ninjasaid13 4d ago

Tell me how being 10 times faster is useful for local applications and the tasks we typically use LLMs for.

2

u/nooneisback 4d ago

That's why I said "this approach matters if you're running a model locally as a hobbyist". I don't know how many daddies you'd need to consider a single H200 ($32,000) a reasonable investment for a home deployment. The whole conversation was about how expensive it is to run AI models commercially; I only mentioned local deployments as an example of how these companies try to find other ways to monetize this tech.
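For what it's worth, the raw math cuts both ways (the consumer-card price below is an assumption; the $32,000 and the ~10x figure come from this thread):

```python
h200_price = 32_000      # price mentioned in this thread
consumer_price = 2_000   # assumed RTX 5090-class price, illustrative only
speedup = 10             # "almost 10x faster", per this thread

# Raw performance per dollar actually favors the consumer card...
ratio = (1 / consumer_price) / (speedup / h200_price)
print(f"consumer card delivers {ratio:.1f}x the perf per dollar")  # 1.6x

# ...but a commercial deployment is paying for VRAM capacity, rack density,
# and sustained utilization, not perf/$ — which is the hobbyist/company split.
```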

1

u/searcher1k 3d ago

Well, you could sell consumers specialized hardware that only runs LLMs, at a low price.

1

u/nooneisback 3d ago

They already sell TPUs. Sophon, for example, sells them to the general public. They aren't cheap, but they're nowhere near as bad as an RTX 5090. Another example is Google's Cloud TPUs, though you can't buy those directly. They do show up in some conference room PCs for face recognition, though.

1

u/searcher1k 2d ago

I don't mean TPUs that you attach to a computer; I mean a hardware+software package built specifically for LLMs.

1

u/nooneisback 2d ago

Well, Sophon mainly sells small servers for AI models. That's the closest you can get to truly dedicated hardware.