Stubsack: weekly thread for sneers not worth an entire post, week ending 1st February 2026
-
And of all possible things to implement, they chose Matrix. lol and lmao.
The interesting thing in this case, for me, is how anyone thought it was a good idea to draw attention to their placeholder code with a blog post. Like, how did they go all the way to vibing a full post without even cursorily glancing at the slop commits?
I’m convinced by now that at least mild forms of “AI psychosis” affect all chatbot users; after a period of time interacting with what Angela Collier called “Dr. Flattery the Always Wrong Robot”, people will hallucinate fully working projects without even trying to check whether the code compiles.
-
Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. What a year, huh?)
I gave the new ChatGPT Health access to 29 million steps and 6 million heartbeat measurements [“a decade of my Apple Watch data”]. It drew questionable conclusions that changed each time I asked.
WaPo. Paywalled but I like how everything I need to know is already in the blurb above.
-
enjoy this glorious piece of LW lingo
Aumann’s agreement is pragmatically wrong. For bounded levels of compute you can’t necessarily converge on the meta level of evidence convergence procedures.
no I don’t know what it means, and I don’t want it to be explained to me. Just let me bask in its inscrutability.
-
retains the same informational content after running through rot13
-
oh man, it’s Aumann’s
-
this sounds exactly like the sentence right before “they have played us for absolute fools!” in that meme.
-
Are you trying to say that you are not regularly thinking about the meta level of evidence convergence procedures?
-
Tbh, this is pretty convincing; I agree a lot more with parts of the LW space now. (Just look at the title; the content isn’t that interesting.)
-
Archive link for the WaPo piece, but you can extrapolate the whole article from the blurb. Mostly. It’s actually slightly worse than the blurb suggests.
-
The sad thing is I have some idea of what it’s trying to say. One of the many weird habits of the Rationalists is that they fixate on a few obscure mathematical theorems and then come up with their own ideas of what these theorems really mean. Their interpretations may be only loosely inspired by the actual statements of the theorems, but it does feel real good when your ideas feel as solid as math.
One of these theorems is Aumann’s agreement theorem. I don’t know what the actual theorem says, but the LW interpretation is that any two “rational” people must eventually agree on every issue after enough discussion, whatever rational means. So if you disagree with any LW principles, you just haven’t read enough 20k word blog posts. Unfortunately, most people with “bounded levels of compute” ain’t got the time, so they can’t necessarily converge on the meta level of, never mind, screw this, I’m not explaining this shit. I don’t want to figure this out anymore.
-
The Wikipedia article is cursed
-
Is Pee Stored in the Balls? Vibe Coding Science with OpenAI’s Prism
Carl T. Bergstrom (@carlbergstrom.com) on Bluesky: “So initial experiments with Open AI's vibe-coding science tool Prism are going about as well as expected.”
-
Ow! My Balls
-
A few people in LessWrong and Effective Altruism seem to want Yud to stay in the background while they get on with organizing his teachings into doctrine, dumping the awkward ones down the memory hole, and building a movement that can last when he goes to the Great Anime Convention in the Sky. In 2022, someone on the EA forum posted On Deference and Yudkowsky’s AI Risk Estimates (i.e., “Yud has been bad at predictions in the past, so we should be skeptical of his predictions today”).
That post got way funnier with Eliezer’s recent Twitter post about how “EAs developing more complex opinions on AI other than it’ll kill everyone is a net negative and cancelled out all the good they ever did”.
-
Chris Lintott (@chrislintott.bsky.social):
We’re getting so many journal submissions from people who think ‘it kinda works’ is the standard to aim for.
Research Notes of the AAS in particular, which was set up to handle short, moderated contributions especially from students, is getting swamped. Often the authors clearly haven’t read what they’re submitting (descriptions of figures that don’t exist or don’t show what they purport to).
I’m also getting wild swings in topic. A rejection of one paper will instantly generate a submission of another, usually on something quite different.
Many of these submissions are dense with equations and pseudo-technological language which makes it hard to give rapid, useful feedback. And when I do give feedback, often I get back whatever their LLM says.
Including the very LLM responses like ‘Oh yes, I see that <thing that was fundamental to the argument> is wrong, I’ve removed it. Here’s something else’
Research Notes is free to publish in and I think provides a very valuable service to the community. But I think we’re a month or two from being completely swamped.
-
One of the great tragedies of AI and science is that the proliferation of garbage papers and journals is creating pressure to return to more closed systems based on interpersonal connections and established prestige hierarchies that had only recently been opened up somewhat to greater diversity.
-
Honestly, even the original Aumann paper is a bit silly. Are all mathematical game theory papers this needlessly far-fetched?
-
Kyle Hill has gone full doomer after reading too much Big Yud and the Yud & Soares book. His latest video is titled “Artificial Superintelligence Must Be Illegal.” Previously, on Awful, he was cozying up to effective altruists and longtermists. He used to have a robotic companion character who would banter with him, but it seems like he’s no longer in that sort of jocular mood; he doesn’t trust his waifu anymore.
-
Wasn’t he on YouTube trying to convince people that Nuclear Energy is Fine Actually? Figures.