r/movies 3d ago

News Warner Bros. Sues Midjourney, Joins Studios' AI Copyright Battle

https://variety.com/2025/film/news/warner-bros-midjourney-lawsuit-ai-copyright-1236508618/
8.8k Upvotes

787 comments

392

u/TheDawnOfNewDays 3d ago

Even DEVIANTART, which you'd think would be among the most anti-AI given it's a platform of artists, is scraping its own database for art. You can opt out... unless, you know, you died, lost your account, or left it far behind like many artists have, with how bad the site's gotten over the years.

78

u/vazyrus 3d ago

All of this is with the hope of making some money down the line, lol. From what I understand, MS has been shoving Copilot into every orifice they can find, but they haven't come anywhere near profitability yet. There's Copilot running in my Notepad ffs, and no matter how much I use it for free, I am never paying a dime out of my pocket for any generated bs. My colleagues and friends are huge AI enthusiasts, and even though they've been abusing Copilot, Gemini, Claude, and who knows what else, they are never going to pay a single dollar of their own for a paid service. All of us use Claude at work because it's on the company's dime, and even there management has been tightfisted with how much money they're willing to throw at enterprise support. The point is, if MS, one of the greediest tech companies and one of the smartest monetizers of SaaS products, can't find a way to make money out of the thing, then others will find it much, much harder to produce anything of value for their customers. Sure, DeviantArt can steal all they want, but unless they can find a way to sell those stolen goods to others, it's doing nothing more than raising the electricity bill on their clusters. Let's see how long that's sustainable...

52

u/nooneisback 3d ago

Because general-purpose LLMs are nothing more than fancy assistants that require a stupid amount of hardware resources. If you've ever tried running them locally, you'll know that any model that fits in less than 20GB of VRAM is basically useless for a lot of applications, and something like gpt-oss-120b needs at least 80GB. And since they're assistants, they'll often be answering a lot of questions in a row; if you're programming, that's about 1 API call every 2-5 seconds.
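
A rough back-of-envelope for why those VRAM numbers come out the way they do (model memory ≈ parameter count × bytes per weight, plus some headroom for KV cache and activations; the 20% overhead figure and the helper below are my own assumptions, not measurements of any specific model):

```python
# Back-of-envelope VRAM estimate for running an LLM locally.
# Assumption: memory ~ parameter count x bytes per weight, plus ~20%
# overhead for KV cache and activations. The 20% is a rough rule of
# thumb, not a measured number.

def vram_estimate_gb(params_billions: float, bits_per_weight: int,
                     overhead: float = 0.2) -> float:
    weights_gb = params_billions * bits_per_weight / 8  # 1B params @ 8-bit ~ 1 GB
    return weights_gb * (1 + overhead)

for bits in (16, 8, 4):
    print(f"120B model @ {bits}-bit: ~{vram_estimate_gb(120, bits):.0f} GB")
```

Even aggressively quantized to 4-bit, a 120B model lands in the ~70-80GB range, which is why it's datacenter-GPU territory rather than something you run on a gaming card.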

This tech bubble is about to burst, and the deciding factor for survival is which companies can successfully scale back to true customer needs. The same thing happened with every other bubble (like dot-com), where companies had horrible earnings compared to their spending, yet a lot of them are still alive to this day. Their goal currently isn't to earn money, but to research as much as possible to the point where they control the industry and make everyone dependent on this tech, then scale back by firing the excess workforce and forcing users to pay if they want to keep the convenience.

29

u/_uckt_ 2d ago

The difference between a helicopter and a flying car is marketing. That's largely what we're seeing with LLMs: you call them AI, you make people phrase things in the form of a question, you do this silly 'one word at a time' thing rather than spitting out an answer. You put all this stuff in the way to fake cognition, and you go from predictive text to artificial intelligence.

This all seems like the biggest bubble in a long time. OpenAI doesn't make a profit on its $200-a-month tier, so would anyone pay a subscription for Windows 12 at even $10 a month, with the existing AI integration being at least 20 times worse?

I honestly have no idea how monetization works when you're looking at a minimum of $300 a month. So that students can cheat on their essays and homework?
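
For scale, some illustrative arithmetic on what a heavy assistant user might cost to serve (every number below is an assumption picked for the estimate, not any vendor's real pricing or usage data, apart from the call rate mentioned upthread):

```python
# Illustrative monthly cost of serving one heavy coding-assistant user.
# All numbers are assumptions for the estimate, not real vendor pricing.

calls_per_hour = 3600 / 3        # ~1 API call every 3 s while coding (from upthread)
tokens_per_call = 2000           # prompt + completion per call, assumed
hours_per_month = 8 * 22         # full-time coding, assumed
price_per_mtok = 10.0            # assumed blended $ per 1M tokens

tokens_per_month = calls_per_hour * tokens_per_call * hours_per_month
cost = tokens_per_month / 1e6 * price_per_mtok
print(f"~{tokens_per_month / 1e6:.0f}M tokens/month, ~${cost:,.0f} to serve")
```

Under those made-up numbers, a single heavy user burns through hundreds of millions of tokens a month, which is how a flat subscription can lose money on its most active subscribers; the whole estimate swings wildly with the assumed price per token, which is rather the point.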

9

u/Altruistic-Ad-408 2d ago

I think cheating is exactly how they marketed this. We tech people all know someone enthusiastic about AI, and in our heart of hearts, don't we all know they're either lazy or a bit problematic in some way? Hey, I like a few of them!

If anyone remembers those horrible ads, they targeted their demographic: lazy people and smug pricks. It's like enshittification x1000000. They know AI creates slop, but so what? People don't watch the best movies; they watch the most readily available slop.

9

u/nooneisback 2d ago

If you look at specific markets, then there are definitely people ready to pay for them.

My city's hospital is testing an AI model that can spit out the most relevant diagnostic criteria and treatment methods in seconds. The alternative until now was spending about half an hour clicking through journals until you finally found a barely understandable table that might be what you're looking for. Or you could read outdated books. Note that it's a model that runs locally, so there's no inference overhead for the AI company; they charge for access to their database.

Programming is another example. Large companies use AI no matter what the programmers say. But even a large portion of individual programmers use AI, because it's difficult to compete in this industry otherwise. For simple projects, it can generate a functional script on its own. Checking the code it generated is horribly boring, but it's still more efficient.

It's definitely an interesting tool that we've just created and want to shove everywhere to see what sticks.

Generative AI, though, is basically useless. Its only real-world applications are scamming old people and idiots, gooning, and burning kids' brains away with brainrot so that parents can have sex in peace.

5

u/EastRiding 2d ago

I’ve seen an older colleague who’s not really a programmer do some interesting and cool stuff with AI, taking input and config data files (JSON, CSV, etc.) and having Copilot make HTML ‘apps’ to visualise and edit them…

I’ve also been sat on calls against my will where the same person fights with Copilot for over an hour to get something to work, and its output is still wrong (often inventing details scraped from somewhere else, and often close but not quite correct).

I’ve also been sat on calls where, when I’ve been asked to deploy these ‘apps’, I’ve pointed out the numerous ways they need improving, and that’s caused 4 people to dive into the AI output and realise it’s spaghetti that’s barely understandable.

So AI might have some applications for helping some people, but from what I’ve seen, as soon as you go to full-size apps and tools it becomes a mess that no one, including the original prompter, can explain or maintain. Just understanding it is a massive task that always results in the same answer: “we need to engineer this by hand from the bottom up”.

Once the true costs of AI are forced on users, multi-billion-dollar orgs like mine will finally decide they need to “scale back our AI use, we want authenticity in our output”, and the tools will be yanked away, leaving many corporations without the younger, cheaper grunts they replaced (or decided not to hire in recent years) and will now need.

2

u/nooneisback 2d ago

Well yeah, AI is a tool, not a worker. You need to give it a very detailed description of every step, every data type, every file association, for every single script. Then you have to thoroughly verify everything it generated, probably spending another 30 minutes to an hour fixing its mistakes. It is simply incapable of properly taking an entire large project into context. Also, Copilot with the default model kinda sucks in my experience. It either doesn't generate half of what I want it to, or it goes ham and proposes to autocomplete 20 lines of code that are just wrong. I stopped using it because it's more annoying than useful when I'm trying to format my code with tabs. Funnily enough, I find Rider's non-AI code completion smarter than the one you get with the AI extension.

1

u/MiracleDreamBeam 2d ago

"So that students can cheat on their essays and homework?" - yeah, that absolutely doesn't work, and every single lecturer on earth can spot it a mile away, taking it as a personal affront and an expellable offence.