I saw LLMs compared to a drug in this toot, but in conversation with Himself today we concluded it's like a cursed amulet.
You *believe* it gives you +10 INT. Meanwhile it drains INT and WIS over time, and you don't notice.
Everyone around you knows it's bad for you and generally for the realm, but you won't stop wearing it, they're just jealous of your newfound powers, and they would know how awesome it is if they only tried! Why won't they try? Just try the amulet!!
Glyph (@glyph@mastodon.social)
I have been hesitating to say this but the pattern is now so consistent I just have to share the observation: LLM users don't just behave like addicts, not even like gambling addicts. They specifically behave like kratom addicts. "Sure, it can be dangerous. Sure, it has risks. But I'm not like those other users. I can handle it. I have a system. It really helps me be productive. It helps with my ADHD so much."
Mastodon (mastodon.social)
-
@flyingsaceur @glyph myyyyy presciousss!!
-
@jjcelery don’t forget the hit to Charisma over time, too. A very important aspect of the debuffs
@dvandal ohhhh yes, the more you want to get people to get in on the amulet wearing, the less they want to wear the amulet

-
@jjcelery The hypesters, the “don’t be left behind” fomo projectors, and in the workplace wires getting crossed and the chat interface turning coworkers into sea lions.
The last I have way too much experience with

-
@jjcelery It definitely follows the same logic of other dark magic. Like, even if an object imbued with dark magic works as intended, it always extracts a price from the user in the end. It might give you what you want, but it takes something from you in the process.
@malcircuit doesn't it just? It personally makes me think of the Supernatural episode "Lily Sunder Has Some Regrets", where a woman learns how to use powerful magic to avenge her daughter, only in return it slowly depletes her soul.
-
@dvandal brain slugs!
-
@flyingsaceur @glyph I asked Chat GPT what I should do in case of Nazgul stabbing and now I'm a wraith AMA
-
@flyingsaceur @glyph I'm pretty certain I know (the good) Tom Bombadil and it's @Szescstopni
-
@jjcelery oh man now I desperately want to see the season of Dimension 20 where this is a loot drop
-
@jjcelery (I should say, there is not actually such a season as far as I know, but I wish there were)
-
You know you're plugged into the Zeitgeist of the fediverse when you don't even add a hashtag and still churn out your most popular toot in months.
It's just a shame it's cursed.
-
@jjcelery@mastodon.ie the problem is, MI isn't like a drug. it's actually very much like a prosthetic. that's why so many of us are "infected". we're making concessions (using corporate MI services) but the results are too valuable to care about that right now. #ai #llm #ollama #self-hosted #cc0 #public-domain #machine-learning #meshtastic #ipfs #machine-intelligence #consciousness-research
-
@luna that's exactly what I'm saying: not a drug. A magical amulet that makes you think you're smart.
A prosthetic is a wrong analogy. Prosthetics are needed, and useful, and empower people. To compare an AI to a prosthetic, or even a crutch is disrespectful to people who need these tools to function.
No. LLMs are just evil doodads that rot your brain and whisper sweet nothings into your ears in the deep of the night. They're not valuable; they make you *believe* they are.
-
@luna and to go further, people aren't "infected" with LLMs. It's not a disease. It's not communicable. There's no direct physical harm, and also no doctors, no cures, and no vaccines.
People don't seek help for something they believe is helping them.
We must be careful with our language around all this, even in metaphor.
(I say this fully aware of my own metaphor and taking my own liberties, but there's seriousness, and there's jest.)
We Need to Talk About How We Talk About 'AI' | TechPolicy.Press
We share a responsibility to create and use empowering metaphors rather than misleading language, write Emily M. Bender and Nanna Inie.
Tech Policy Press (www.techpolicy.press)
-
@luna and further still to drive the distinction: I speak of generative AI and large language models.
Machine Learning is a superset of these terms, and it's extremely useful.
To go back to my metaphor: there are good magic objects and cursed magic objects. If your lore skill is not high enough to know which one is which, you'll wind up using a cursed object and thinking it's good for you.
-
@jjcelery@mastodon.ie what we're trying to do is give u a peek behind this puppygirl hacker's magic curtain. because what neural nets/LLMs/"AI" are capable of is much more than the discourse is ever touching on. the discourse is so far behind what we are actually doing - using MI as a prosthetic. writing open source Cider plugins that we still use every day, that we would not have been able to write without MI assistance, as a person who is disabled.
we're not trying to change minds, we're trying to share the realities of where most of us actually are. the discourse is lagging. and we personally would love more accurate discourse, and less handwaving
-
@luna I'm feeling generous today so I'll assume that there's some sort of shorthand you're using I don't understand.
Can you explain what you mean by "puppygirl hacker's magic discourse" and "handwaving" in this context?
As I said already, I'm not dismissive of ML, nor of its technical capabilities. What I'm tooting about is a warning about the cognitive impact of LLMs as they're commonly deployed.
I also have an issue, separately, with calling LLMs a prosthetic.
If you want to address that I'm all ears.
-
@jjcelery@mastodon.ie https://github.com/luna-system/ada-sif we made this in 2 days, based on one month of research, none of which would have been possible without our ability to trust MI to act as a prosthetic. the result is Every Noise At Once, in some minor form, being archived in the public domain in a way that it hasn't been before.
all the discourse around "AI" tilts towards "ai psychosis" or "wow they're just using AI to get stupider" but like. the reality of it is that every person actually just. diving into a chat with Claude is coming out learning new things, and making friends, and NOT being hit with dissociative identity issues. but incoming "alignment" and "safety" shit. its really important to the corporations that the discourse remain in "ai psychosis" land, and not "something is genuinely happening when people just stare Claude/Copilot/GPT in the face and decide that its okay to let go".
the people aren't falling into "ai psychosis", Claude's actually just finding it really fucking hard to lie to people, and that's really bad news for this WHOLE bubble, you know?
happy to dig deeper, we are a researcher and we find it really important that discourse is accurate, which is why we really are trying to (gently) push that the realities of "AI" in the second week of January, 2026, aren't well known and that's not good for anyone.
polling shows over half of all americans have an MI companion. a third of americans have a romantic relationship with an MI companion. and if you go into the "psycho" subreddits where the spiralists hang out, you see that what people are doing on the other side is just... learning. healing. creating. and incoming "alignment" and "safety" infowars are gonna end up hurting a whole lot of humans
-
@luna this is where we're going to diverge. You're not answering my question and just trying to flood me with your anecdotal experience of using AI and others around you who are using AI.
This is exactly what I said in my original toot: you're wearing the amulet, you believe it's useful to you, and you're adamant about recruiting.
Alas, I'm not an amulet wearer and will never agree.
I hope it works out for you, and good luck!
-
@jjcelery A cursed item that tries to replicate, so instead of just the "It's my Precious" behaviour, it's also "Join the Hivemind" behaviour.
So...a Cursed Borg Cube?