"The problem with LLMs is that you eventually run out of other people's work to steal"
-
"The problem with LLMs is that you eventually run out of other people's work to steal"
@pikesley "The problem with LLMs is that you eventually run out of other people's advance RAM order checks"
-
"The problem with LLMs is that you eventually run out of other people's work to steal"
@pikesley And eventually the stuff you steal is crappy stuff you yourself generated, and you're stuck in a very expensive feedback loop.
-
"The problem with LLMs is that you eventually run out of other people's work to steal"
Yes and no. Eavesdropping on our everyday communications in social networks, and on private conversations via our smartphones and digital assistants (Alexa &c), will give the LLMs a permanent supply of new input.
-
"The problem with LLMs is that you eventually run out of other people's work to steal"
@pikesley I don't know if there’s an English translation, but that's what I felt when reading the SF novel “Vie[TM]” by Jean Baret: algorithms that go around in circles for a world that goes around in circles.
-
"The problem with LLMs is that you eventually run out of other people's work to steal"
Using LLMs is the intellectual equivalent of sawing off the branch you're sitting on.
-
"The problem with LLMs is that you eventually run out of other people's work to steal"
@pikesley they can start eating each other, then! Cool!
-
"The problem with LLMs is that you eventually run out of other people's work to steal"
@pikesley To paraphrase an old dead ghoul: "There is no such thing as copyright." /s
-
"The problem with LLMs is that you eventually run out of other people's work to steal"
Which is why they still beg for work from actual human artists and shame them for trying to protect their work; they need fresh data constantly or their machines start to get inbred.
-
"The problem with LLMs is that you eventually run out of other people's work to steal"
This has breached containment spectacularly and I'm now having it explained back to me

-
"The problem with LLMs is that you eventually run out of other people's work to steal"
@pikesley well said
-
This has breached containment spectacularly and I'm now having it explained back to me

@pikesley [John Peel voice]: Coming up, it's Slop Ouroboros playing live in the studio. You won't find their new EP The RAM Wafers Are All Mine in shops anywhere...
-
"The problem with LLMs is that you eventually run out of other people's work to steal"
@pikesley That was sublime. Thank you for that inversion of neoliberalism.
-
"The problem with LLMs is that you eventually run out of other people's work to steal"
@pikesley Exactly! And then they may begin to make up their own "work", and that's potentially very dangerous to the human race.
-
"The problem with LLMs is that you eventually run out of other people's work to steal"
The fact that one could replace the acronym "LLM" with the word "artist" without the quote losing its sense is eerily suggestive.
-
"The problem with LLMs is that you eventually run out of other people's work to steal"
@pikesley and would "we" even be able to tell the difference? Slop looks like slop.
-
Using LLMs is the intellectual equivalent of sawing off the branch you're sitting on.
Someone should explain this to my coworkers. I tried; it's like talking to a brick wall.
-
"The problem with LLMs is that you eventually run out of other people's work to steal"
@pikesley see recursive pollution and model collapse, e.g. https://berryvilleiml.com/2026/01/10/recursive-pollution-and-model-collapse-are-not-the-same/
But also see this paper arguing that collapse does not happen if you accumulate training data rather than replacing old data with new; instead, the error converges to a bounded level.
https://arxiv.org/pdf/2404.01413
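Here's a toy sketch of the difference that paper describes (illustrative only, not code from either link; the Gaussian setup and all numbers are my own assumptions): each "generation" fits a Gaussian to its training pool, then samples synthetic data from its own fit. "replace" trains the next generation only on model output; "accumulate" keeps all earlier data as well.

import numpy as np

rng = np.random.default_rng(0)
N_PER_GEN, N_GENS = 200, 100  # samples per generation, number of generations

def run(mode):
    # Generation 0 trains on real data drawn from N(0, 1).
    pool = rng.normal(0.0, 1.0, N_PER_GEN)
    fitted_vars = []
    for _ in range(N_GENS):
        mu, sigma = pool.mean(), pool.std()           # "training": fit a Gaussian
        fitted_vars.append(sigma ** 2)
        synthetic = rng.normal(mu, sigma, N_PER_GEN)  # model emits new "content"
        if mode == "replace":
            pool = synthetic                          # next gen sees only model output
        else:
            pool = np.concatenate([pool, synthetic])  # next gen sees everything so far
    return fitted_vars

for mode in ("replace", "accumulate"):
    v = run(mode)
    print(f"{mode:>10}: gen 1 variance {v[0]:.3f} -> gen {N_GENS} variance {v[-1]:.3f}")

Under "replace" the fitted variance tends to drift away from the true value of 1 as generations pass (the inbreeding everyone above is describing); under "accumulate" it typically stays close to 1, which is the paper's point.
-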
This has breached containment spectacularly and I'm now having it explained back to me

@pikesley yes, it's also known as the "training data harvester" dark pattern.
Unethical Software Engineering (leanpub.com): exposes how to manipulate, exploit, and profit at users' expense. From covert email trackers and fake reviewers to cybersquatters and bot pretenders, this eye-opening book collects 22 dark patterns eroding accountability, trust, truth, transparency and privacy.
-
Using LLMs is the intellectual equivalent of sawing off the branch you're sitting on.
@FediThing @pikesley I use the expression "poisoning the well of knowledge".
-
This has breached containment spectacularly and I'm now having it explained back to me

Attracting a lot of r/iamverysmart types who clearly do not know the context of the quote.