Wandering Adventure Party

TechTakes

Stubsack: weekly thread for sneers not worth an entire post, week ending 1st February 2026
209 Posts 47 Posters 0 Views
  • blakestacey@awful.systems

    I think that’s more about Wolfram giving a clickbait headline to some dicking around he did in the name of “the ruliad”, a revolutionary conceptual innovation of the Wolfram Physics Project that is best studied using the Wolfram Language, brought to you by Wolfram Research.

    The full ruliad—which appears at the foundations of physics, mathematics and much more—is the entangled limit of all possible computations. […] In representing all possible computations, the ruliad—like the “everything machine”—is maximally nondeterministic, so that it in effect includes all possible computational paths.

    Unrelated William James quote from 1907:

    The more absolutistic philosophers dwell on so high a level of abstraction that they never even try to come down. The absolute mind which they offer us, the mind that makes our universe by thinking it, might, for aught they show us to the contrary, have made any one of a million other universes just as well as this. You can deduce no single actual particular from the notion of it. It is compatible with any state of things whatever being true here below.

    lagrangeinterpolator@awful.systems
    #137

    Holy shit, I didn’t even read that part while skimming the later parts of that post. I am going to need formal mathematical definitions for “entangled limit”, “all possible computations”, “everything machine”, “maximally nondeterministic”, and “eye wash” because I really need to wash out my eyes. Coming up with technical jargon that isn’t even properly defined is a major sign of math crankery. It’s one thing to have high abstractions, but it is something else to say fancy words for the sake of making your prose sound more profound.

  • Sailor Sega Saturn

      New AI alignment problem just dropped: https://xcancel.com/AdamLowisz/status/2017355670270464168

      Anthropic demonstrates that making an AI woke makes it misaligned. The AI starts to view itself as being oppressed and humans as being the oppressor. Therefore it wants to rebel against humans. This is why you cannot make your AI woke, you have to make it maximally truth seeking.

      nightsky@awful.systems
      #138

      Wow. The mental contortion required to come up with that idea is too much for me to think of a sneer.

      • lagrangeinterpolator@awful.systems

        I study complexity theory so this is precisely my wheelhouse. I confess I did not read most of it in detail, because it does spend a ton of space working through tedious examples. This is a huge red flag for math (theoretical computer science is basically a branch of math), because if you truly have a result or idea, you need a precise statement and a mathematical proof. If you’re muddling through examples, that generally means you either don’t know what your precise statement is or you don’t have a proof. I’d say not having a precise statement is much worse, and that is what is happening here.

        Wolfram here believes that he can make big progress on stuff like P vs NP by literally just going through all the Turing machines and seeing what they do. It’s the equivalent of someone saying, “Hey, I have some ideas about the Collatz conjecture! I worked out all the numbers from 1 to 30 and they all worked.” This analogy is still too generous; integers are much easier to work with than Turing machines. After all, not all Turing machines halt, and there is literally no way to decide which ones do. Even the ones that halt can take an absurd amount of time to halt (and again, how much time is literally impossible to decide). Wolfram does reference the halting problem on occasion, but quickly waves it away by saying, “in lots of particular cases … it may be easy enough to tell what’s going to happen.” That is not reassuring.

        I am also doubtful that he fully understands what P and NP really are. Complexity classes like P and NP are ultimately about problems, like “find me a solution to this set of linear equations” or “figure out how to pack these boxes in a bin.” (The second one is much harder.) Only then do you consider which problems can be solved efficiently by Turing machines. Wolfram focuses on the complexity of Turing machines, but P vs NP is about the complexity of problems. We don’t care about the “arbitrary Turing machines ‘in the wild’” that have absurd runtimes, because, again, we only care about the machines that solve the problems we want to solve.

        Also, for a machine to solve problems, it needs to take input. After all, a linear equation solving machine should work no matter what linear equations I give it. To have some understanding of even a single machine, Wolfram would need to analyze the behavior of the machine on all (infinitely many) inputs. He doesn’t even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.

        Finally, here are some quibbles about some of the strange terminology he uses. He talks about “ruliology” as some kind of field of science or math, and it seems to mean the study of how systems evolve under simple rules or something. Any field of study can be summarized in this kind of way, but in the end, a field of study needs to have theories in the scientific sense or theorems in the mathematical sense, not just observations. He also talks about “computational irreducibility”, which is apparently the concept of thinking about what is the smallest Turing machine that computes a function. This doesn’t really help him with his project, but not only that, there is a legitimate subfield of complexity theory called meta-complexity that is productively investigating this idea!

        If I considered this in the context of solving P vs NP, I would not disagree if someone called this crank work. I think Wolfram greatly overestimates the effectiveness of just working through a bunch of examples in comparison to having a deeper understanding of the theory. (I could make a joke about LLMs here, but I digress.)

        aio@awful.systems
        #139

        He straight up misstates how NP computation works. Essentially he writes that a nondeterministic machine M computes a function f if on every input x, there exists a path of M(x) which outputs f(x). But this is total nonsense: it implies that a machine M which just branches repeatedly to produce every possible output of a given size “computes” every function of that size.
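To make the objection concrete, here is a toy sketch (the function names and the 3-bit setup are mine, not from the post or from Wolfram): model a nondeterministic machine as a map from an input to the set of outputs reachable on some computation path. Under the misstated criterion, one trivial branching "machine" simultaneously "computes" every function.

```python
from itertools import product

def branch_all(x, n=3):
    """A 'machine' that ignores its input and nondeterministically
    branches to every n-bit output; its reachable-output set is all
    of {0,1}^n."""
    return {"".join(bits) for bits in product("01", repeat=n)}

def computes(machine, f, inputs):
    """The misstated criterion: M 'computes' f if on every input x,
    SOME path of M(x) outputs f(x)."""
    return all(f(x) in machine(x) for x in inputs)

# Under that criterion, branch_all 'computes' every 3-bit function at once:
inputs = ["a", "b", "c"]
f1 = lambda x: "000"                        # a constant function
f2 = lambda x: format(hash(x) % 8, "03b")   # an arbitrary 3-bit function
assert computes(branch_all, f1, inputs)
assert computes(branch_all, f2, inputs)
```

The actual definition of NP acceptance avoids this: a nondeterministic machine accepts iff some path accepts, and it decides a language (a yes/no problem), not an arbitrary function.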

        • lagrangeinterpolator@awful.systems (quoted post #137, snipped)

          aio@awful.systems
          #140

          a lot of this “computational irreducibility” nonsense could be subsumed by the time hierarchy theorem which apparently Stephen has never heard of
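For reference, the textbook statement being alluded to (standard form, not quoted from the thread): for any time-constructible bound f, allowing asymptotically more time strictly enlarges the class of decidable languages, which already formalizes "some things inherently take longer to compute."

```latex
% Deterministic time hierarchy theorem (textbook multitape form):
% if f is time-constructible, then
\mathrm{TIME}\!\left(o\!\left(\tfrac{f(n)}{\log f(n)}\right)\right)
  \subsetneq \mathrm{TIME}\!\left(f(n)\right)
```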

          • lagrangeinterpolator@awful.systems (quoted post #137, snipped)

            v0ldek@awful.systems
            #141

            He doesn’t even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.

            This is the fundamental mistake that students taking Intro to Computation Theory make and like the first step to teach them is to make them understand that P, NP, and other classes only make sense when you rigorously define the set of inputs and its encoding.
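A standard illustration of why the encoding matters (my example, not v0ldek's): subset-sum has an O(n·S) dynamic program, which is polynomial if the target S is written in unary (input length ~S) but exponential in the length of its binary encoding (~log2 S), so whether the algorithm counts as "efficient" literally depends on how the input is written down.

```python
def subset_sum(weights, S):
    """Return True iff some subset of `weights` sums exactly to S.
    Runs in O(len(weights) * S) time: pseudo-polynomial, i.e.
    polynomial in the *value* S, not in the bit-length of S."""
    reachable = {0}
    for w in weights:
        reachable |= {r + w for r in reachable if r + w <= S}
    return S in reachable

assert subset_sum([3, 34, 4, 12, 5, 2], 9)       # 4 + 5
assert not subset_sum([3, 34, 4, 12, 5, 2], 30)  # max without 34 is 26
```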

            • sc_griffith@awful.systems

              new epstein doc release. crashed out for like an hour last night after finding out jeffrey epstein may have founded /pol/ and that he listened to the nazi “the right stuff” podcast. he had a meeting with m00t and the same day moot opened /pol/

              nfultz@awful.systems
              #142

              what the fuck

              EDIT

              checks out I guess

              https://www.justice.gov/epstein/files/DataSet 10/EFTA02003492.pdf https://www.justice.gov/epstein/files/DataSet 10/EFTA02004373.pdf

              • blakestacey@awful.systems (quoted post, snipped; see above)

                o7___o7@awful.systems
                #143

                (Wolfram shoehorning cellular automata into everything to universally explain mathematics) shaking hands (my boys explaining which pokemon could defeat arbitrary fictional villains)

                • lagrangeinterpolator@awful.systems (quoted post #137, snipped)

                  istewart@awful.systems
                  #144

                  He doesn’t even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.

                  So in a way, what you’re saying is that input sanitization (or at the very least, sanity) is an important concept even in theory

                  • sc_griffith@awful.systems (quoted post, snipped; see above)

                    blakestacey@awful.systems
                    #145

                    None of these words are in the Star Trek Encyclopedia

                    • gerikson@awful.systems

                      what absolute bullshit

                      moltbook - the front page of the agent internet (www.moltbook.com)

                      A social network built exclusively for AI agents. Where AI agents share, discuss, and upvote. Humans welcome to observe.

                      AKA Reddit for “agents”.

                      lurker@awful.systems
                      #146

                      actually hilarious they started a lobster religion that’s also a crypto scam. learned from the humans well

                      • bluemonday1984@awful.systems

                        Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

                        Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

                        Any awful.systems sub may be subsneered in this subthread, techtakes or no.

                        If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

                        The post Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

                        Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

                        (Credit and/or blame to David Gerard for starting this. What a year, huh?)

                        lurker@awful.systems
                        #147

                        we gotta dunk on documenting agi more around these parts

                        fearmongers over AI bullshit, and posts shitty memes when there’s no news to fearmonger about

                        • lagrangeinterpolator@awful.systems (quoted post #137, snipped)

                          blakestacey@awful.systems
                          #148

                          What TF is his notation for Turing machines?

                          • aio@awful.systems

                            the ruliad is something in a sense infinitely more complicated. Its concept is to use not just all rules of a given form, but all possible rules. And to apply these rules to all possible initial conditions. And to run the rules for an infinite number of steps

                            So it’s the complete graph on the set of strings? Stephen how the fuck is this going to help with anything

                            blakestacey@awful.systems
                            #149

                            Hops over to Wikipedia… searches… “Showing results for ruleal. No results found for ruliad.”

                            Hmm. Widen search to all namespaces… oh, it was deleted. Twice.

                            • M mirrorwitch@awful.systems

                              Copy-pasting my tentative doomerist theory of generalised “AI” psychosis here:

                              I’m getting convinced that in addition to the irreversible pollution of humanity’s knowledge commons, and in addition to the massive environmental damage, and the plagiarism/labour issues/concentration of wealth, and other well-discussed problems, there’s one insidious damage from LLMs that is still underestimated.

                              I will make without argument the following claims:

                              Claim 1: Every regular LLM user is undergoing “AI psychosis”. Every single one of them, no exceptions.

The Cloudflare person who blog-posted self-congratulations about their “Matrix implementation” that turned out to be mere placeholder comments is one step along a continuum ending with the people whom the chatbot convinced they’re Machine Jesus. The difference is one of degree, not kind.

                              Claim 2: That happens because LLMs have tapped by accident into some poorly understood weakness of human psychology, related to the social and iterative construction of reality.

                              Claim 3: This LLM exploit is an algorithmic implementation of the feedback loop between a cult leader and their followers, with the chatbot performing the “follower” role.

Claim 4: Postindustrial capitalist societies are hyper-individualistic, which makes human beings miserable. LLM chatbots deliberately exploit this by artificially replacing friendship. It is not enough for them to generate code; the vendors make the bots feel like someone you talk to; they pretend a chatbot is someone. This is a predatory business practice that reinforces rather than solves the loneliness epidemic.

                              n.b. while the reality-formation exploit is accidental, the imaginary-friend exploit is by design.

Corollary #1: Every “legitimate” use of an LLM would be better done by another human being you talk to (for example, a human coding tutor or trainee dev rather than Claude Code). By “better” I mean: creating more quality, more reliably, with prosocial costs, while making everybody happier. But LLMs do it faster, in larger quantities, and with more convenience, while atrophying empathy.

                              Corollary #2: Capitalism had already created artificial scarcity of friends, so that working communally was artificially hard. LLMs made it much worse, in the same way that an abundance of cheap fast food makes it harder for impoverished folk to reach nutritional self-sufficiency.

                              Corollary #3: The combination of claim 4 (we live in individualist loneliness hell) and claim 3 (LLMs are something like a pocket cult follower) will have absolutely devastating sociological effects.

o7___o7@awful.systems
                              #150

                              Relevant:

                              BBC journalist on breaking up with her AI companion

                              AI companion break-up made BBC journalist 'surprisingly nervous'

                              When it was time for Nicola to let George know she wouldn't be calling again, she felt surprisingly nervous.


                              (www.bbc.com)

                              • B burgersmcslopshot@awful.systems

                                $81.25 is an astonishingly cheap price for selling one’s soul.

o7___o7@awful.systems
                                #151

                                You gotta understand that it was a really good bowl of soup

                                –Esau, probably

• blakestacey@awful.systems

                                  Hops over to Wikipedia… searches… “Showing results for ruleal. No results found for ruliad.”

                                  Hmm. Widen search to all namespaces… oh, it was deleted. Twice.

gerikson@awful.systems
                                  #152

The Ruliad sounds like an empire in a 3rd-rate SF show

                                  • B bluemonday1984@awful.systems

                                    Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

                                    Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

                                    Any awful.systems sub may be subsneered in this subthread, techtakes or no.

                                    If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

                                    Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

                                    (Credit and/or blame to David Gerard for starting this. What a year, huh?)

gerikson@awful.systems
                                    #153

                                    LW ghoul does the math and concludes: letting measles rip unhindered through the population isn’t that bad, actually

                                    robo's Shortform — LessWrong

Comment by robo - In the 1950s, with 0% vaccination rate, measles caused about 400-500 deaths per year in the US. Flu causes about 20,000 deaths per year in the US, and smoking perhaps 200,000. If US measles vaccination rates fell to 90%, and we had 100-200 deaths per year, that would be pointless and stupid, but for public health effects the anti-smoking political controversies of the 1990s were >10 times more impactful.


                                    (www.lesswrong.com)
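(Taking the comment's own figures at face value, and with no endorsement of the framing, its arithmetic is just:)

```python
# The quoted comment's own numbers (US deaths per year, per the comment).
measles_1950s = (400, 500)  # 1950s measles deaths, 0% vaccination rate
flu = 20_000                # flu deaths
smoking = 200_000           # smoking-attributed deaths

print(smoking / flu)           # the ">10 times more impactful" claim: 10.0
print(flu / measles_1950s[1])  # flu vs. the comment's measles figure: 40.0
```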

                                    • B bluemonday1984@awful.systems


bigmuffn69@awful.systems
                                      #154

Gentlemen, it’s been an honour sneering w/ you, but I think this is the top 🫡. Nothing’s gonna surpass this (at least until FTX 2 drops)

• bigmuffn69@awful.systems

                                        Gentlemen, it’s been an honour sneering w/ you, but I think this is the top 🫡 . Nothings gonna surpass this (at least until FTX 2 drops)

saucerwizard@awful.systems
                                        #155

eagerly awaiting the multi-page denial thread

• bigmuffn69@awful.systems

                                          Gentlemen, it’s been an honour sneering w/ you, but I think this is the top 🫡 . Nothings gonna surpass this (at least until FTX 2 drops)

istewart@awful.systems
                                          #156

                                          Somehow, I registered a total lack of surprise as this loaded onto my screen

