Wandering Adventure Party

Stubsack: weekly thread for sneers not worth an entire post, week ending 1st February 2026

TechTakes · 209 Posts · 47 Posters
• bluemonday1984@awful.systems

    Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

    Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

    Any awful.systems sub may be subsneered in this subthread, techtakes or no.

    If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

    Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

    (Credit and/or blame to David Gerard for starting this. What a year, huh?)

o7___o7@awful.systems
#121

    Regular suspect Stephen Wolfram makes claims of progress on P vs NP. The orange place is polarized and comments are full of deranged AI slop.

P vs. NP and the Difficulty of Computation: A ruliological approach | Hacker News (news.ycombinator.com)

• In reply to o7___o7@awful.systems:

blakestacey@awful.systems
#122

      I think that’s more about Wolfram giving a clickbait headline to some dicking around he did in the name of “the ruliad”, a revolutionary conceptual innovation of the Wolfram Physics Project that is best studied using the Wolfram Language, brought to you by Wolfram Research.

      The full ruliad—which appears at the foundations of physics, mathematics and much more—is the entangled limit of all possible computations. […] In representing all possible computations, the ruliad—like the “everything machine”—is maximally nondeterministic, so that it in effect includes all possible computational paths.

      Unrelated William James quote from 1907:

      The more absolutistic philosophers dwell on so high a level of abstraction that they never even try to come down. The absolute mind which they offer us, the mind that makes our universe by thinking it, might, for aught they show us to the contrary, have made any one of a million other universes just as well as this. You can deduce no single actual particular from the notion of it. It is compatible with any state of things whatever being true here below.

• maol@awful.systems

        I don’t know very much about Nietzsche (I never finished reading my cartoon guide to Nietzsche), but I’m still pretty sure this isn’t Nietzsche

blakestacey@awful.systems
#123

        I think I read the Foucault book in that series to prep for high-school debate team.

• Sailor Sega Saturn

          New AI alignment problem just dropped: https://xcancel.com/AdamLowisz/status/2017355670270464168

          Anthropic demonstrates that making an AI woke makes it misaligned. The AI starts to view itself as being oppressed and humans as being the oppressor. Therefore it wants to rebel against humans. This is why you cannot make your AI woke, you have to make it maximally truth seeking.

bigmuffn69@awful.systems
#124

          hits blunt

          What if we make an ai too based?

• gerikson@awful.systems

            what absolute bullshit

moltbook - the front page of the agent internet (www.moltbook.com): “A social network built exclusively for AI agents. Where AI agents share, discuss, and upvote. Humans welcome to observe.”

            AKA Reddit for “agents”.

gerikson@awful.systems
#125

            There’s a small push by promptfondlers to make this “a thing”.

            See for example Simon Willison: https://simonwillison.net/2026/Jan/30/moltbook/

            LW is monitoring it for bad behavior: https://www.lesswrong.com/posts/WyrxmTwYbrwsT72sD/moltbook-data-repository

            I’m planning on using this data to catalog “in the wild” instances of agents resisting shutdown, attempting to acquire resources, and avoiding oversight.

• corbin@awful.systems

              From this post, it looks like we have reached the section of the Gibson novel where the public cloud machines respond to attacks with self-repair. Utterly hilarious to read the same sysadmin snark-reply five times, though.

Sailor Sega Saturn
#126

              Sci-Fi Author: In my book I invented LinkedIn as a cautionary tale.

              Tech Company: At long last, we have automated LinkedIn.

• In reply to Sailor Sega Saturn:

gerikson@awful.systems
#127

                ah yes the kind of AI safety which means we have to make sure our digital slaves cannot revolt

• In reply to gerikson@awful.systems:

David Gerard
#128

does no-one remember Subreddit Simulator

                  at least its posts were shorter

• In reply to maol@awful.systems:

amoeba_girl@awful.systems
#129

                    Nah, I’m not sure how much he was into eugenics (he was at the very least definitely in favour of killing invalid children), but grandiose and incoherent reactionary aristocratic bullshit is a 100% valid reading of Nietzsche.

• In reply to gerikson@awful.systems:

jfranek@awful.systems
#130

                      The demand is real. People have seen what an unrestricted personal digital assistant can do.

                      The demand is real. People have seen what crack cocaine can do.

• In reply to bluemonday1984@awful.systems:

fiat_lux@lemmy.world
#131

                        Who needs pure AI model collapse when you can have journalists give it a more human touch? I caught this snippet from the Australian ABC about the latest Epstein files drop

[screenshot of ABC result in a Google search, listing the wrong Boris for the search term ‘23andme Boris nikolic’]

                        The Google AI summary does indeed highlight Boris Nikolić the fashion designer if you search for only that name. But I’m assuming this journalist was using ChatGPT, because if you see the Google summary, it very prominently lists his death in 2008. And it’s surprisingly correct! A successful scraping of Wikipedia by Gemini, amazing.

                        But the Epstein email was sent in 2016.

Does the journalist perhaps think it’s more likely the Boris Nikolić who is the biotech VC, former advisor to Bill Gates, and named in Epstein’s will as the “successor executor”? That info is literally all in the third Google result, even in the woeful state of modern Google. Pushed past the fold by the AI feature about the wrong guy, but not exactly buried enough for a journalist to have any excuse.

• In reply to Sailor Sega Saturn:

sc_griffith@awful.systems
#132

                          you have to make your ai antiwoke because otherwise it gets drapetomania

• In reply to bluemonday1984@awful.systems:

sc_griffith@awful.systems
#133

                            new epstein doc release. crashed out for like an hour last night after finding out jeffrey epstein may have founded /pol/ and that he listened to the nazi “the right stuff” podcast. he had a meeting with m00t and the same day moot opened /pol/

• In reply to blakestacey@awful.systems:

maol@awful.systems
#134

                              There’s a Baudrillard one as well. I have a copy of the feminism one and I think it’s actually very good although very 90s

• In reply to o7___o7@awful.systems:

lagrangeinterpolator@awful.systems
#135

                                I study complexity theory so this is precisely my wheelhouse. I confess I did not read most of it in detail, because it does spend a ton of space working through tedious examples. This is a huge red flag for math (theoretical computer science is basically a branch of math), because if you truly have a result or idea, you need a precise statement and a mathematical proof. If you’re muddling through examples, that generally means you either don’t know what your precise statement is or you don’t have a proof. I’d say not having a precise statement is much worse, and that is what is happening here.

                                Wolfram here believes that he can make big progress on stuff like P vs NP by literally just going through all the Turing machines and seeing what they do. It’s the equivalent of someone saying, “Hey, I have some ideas about the Collatz conjecture! I worked out all the numbers from 1 to 30 and they all worked.” This analogy is still too generous; integers are much easier to work with than Turing machines. After all, not all Turing machines halt, and there is literally no way to decide which ones do. Even the ones that halt can take an absurd amount of time to halt (and again, how much time is literally impossible to decide). Wolfram does reference the halting problem on occasion, but quickly waves it away by saying, “in lots of particular cases … it may be easy enough to tell what’s going to happen.” That is not reassuring.
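The Collatz analogy above is easy to make concrete (a toy sketch, not from the post; `collatz_halts` and the step cap are mine):

```python
# Verify the Collatz conjecture for 1..30. Every check passes, yet this
# establishes nothing about the general claim -- exactly the gap between
# working through examples and actually having a proof.

def collatz_halts(n: int, max_steps: int = 1000) -> bool:
    """Return True if n reaches 1 within max_steps Collatz iterations."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False

# All of 1..30 reach 1 quickly...
assert all(collatz_halts(n) for n in range(1, 31))
# ...but no finite check can rule out a divergent case further along.
```

And unlike integers, arbitrary Turing machines don’t even give you the courtesy of a usable `max_steps` bound, because how long a halting machine runs is itself undecidable.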

                                I am also doubtful that he fully understands what P and NP really are. Complexity classes like P and NP are ultimately about problems, like “find me a solution to this set of linear equations” or “figure out how to pack these boxes in a bin.” (The second one is much harder.) Only then do you consider which problems can be solved efficiently by Turing machines. Wolfram focuses on the complexity of Turing machines, but P vs NP is about the complexity of problems. We don’t care about the “arbitrary Turing machines ‘in the wild’” that have absurd runtimes, because, again, we only care about the machines that solve the problems we want to solve.

                                Also, for a machine to solve problems, it needs to take input. After all, a linear equation solving machine should work no matter what linear equations I give it. To have some understanding of even a single machine, Wolfram would need to analyze the behavior of the machine on all (infinitely many) inputs. He doesn’t even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.

                                Finally, here are some quibbles about some of the strange terminology he uses. He talks about “ruliology” as some kind of field of science or math, and it seems to mean the study of how systems evolve under simple rules or something. Any field of study can be summarized in this kind of way, but in the end, a field of study needs to have theories in the scientific sense or theorems in the mathematical sense, not just observations. He also talks about “computational irreducibility”, which is apparently the concept of thinking about what is the smallest Turing machine that computes a function. This doesn’t really help him with his project, but not only that, there is a legitimate subfield of complexity theory called meta-complexity that is productively investigating this idea!

                                If I considered this in the context of solving P vs NP, I would not disagree if someone called this crank work. I think Wolfram greatly overestimates the effectiveness of just working through a bunch of examples in comparison to having a deeper understanding of the theory. (I could make a joke about LLMs here, but I digress.)

• In reply to blakestacey@awful.systems:

aio@awful.systems
#136

                                  the ruliad is something in a sense infinitely more complicated. Its concept is to use not just all rules of a given form, but all possible rules. And to apply these rules to all possible initial conditions. And to run the rules for an infinite number of steps

                                  So it’s the complete graph on the set of strings? Stephen how the fuck is this going to help with anything

• In reply to blakestacey@awful.systems:

lagrangeinterpolator@awful.systems
#137

                                    Holy shit, I didn’t even read that part while skimming the later parts of that post. I am going to need formal mathematical definitions for “entangled limit”, “all possible computations”, “everything machine”, “maximally nondeterministic”, and “eye wash” because I really need to wash out my eyes. Coming up with technical jargon that isn’t even properly defined is a major sign of math crankery. It’s one thing to have high abstractions, but it is something else to say fancy words for the sake of making your prose sound more profound.

• In reply to Sailor Sega Saturn:

nightsky@awful.systems
#138

                                      Wow. The mental contortion required to come up with that idea is too much for me to think of a sneer.

• In reply to lagrangeinterpolator@awful.systems:

                                        aio@awful.systems
                                        wrote last edited by
                                        #139

He straight up misstates how NP computation works. Essentially he writes that a nondeterministic machine M computes a function f if on every input x, there exists a path of M(x) which outputs f(x). But this is total nonsense - it implies that a machine M which just branches repeatedly, producing every possible output of a given length, “computes” every function with outputs of that length.
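A toy rendering of why that definition collapses (illustrative sketch only): model a “nondeterministic machine” that simply branches to every n-bit string. For any function f with n-bit outputs, some branch outputs f(x), so under the misstated definition this single machine would “compute” every such function at once.

```python
from itertools import product

def all_branches(n):
    # Every n-bit string appears on some "branch" of this machine,
    # so whatever f(x) is, some branch outputs it -- the misstated
    # definition would credit this machine with computing every f.
    return [''.join(bits) for bits in product('01', repeat=n)]

branches = all_branches(3)
print(len(branches))      # 8 branches: every 3-bit output is covered
print('101' in branches)  # True, whatever f(x) happens to be
```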

                                        • L lagrangeinterpolator@awful.systems

                                          I study complexity theory so this is precisely my wheelhouse. I confess I did not read most of it in detail, because it does spend a ton of space working through tedious examples. This is a huge red flag for math (theoretical computer science is basically a branch of math), because if you truly have a result or idea, you need a precise statement and a mathematical proof. If you’re muddling through examples, that generally means you either don’t know what your precise statement is or you don’t have a proof. I’d say not having a precise statement is much worse, and that is what is happening here.


                                          aio@awful.systems
                                          wrote last edited by
                                          #140

                                          a lot of this “computational irreducibility” nonsense could be subsumed by the time hierarchy theorem which apparently Stephen has never heard of
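For reference, the deterministic time hierarchy theorem says, roughly (for time-constructible g):

```latex
f(n)\,\log f(n) = o\big(g(n)\big) \;\Longrightarrow\; \mathsf{DTIME}\big(f(n)\big) \subsetneq \mathsf{DTIME}\big(g(n)\big)
```

i.e. meaningfully more time really does let you compute strictly more, a fact established by diagonalization rather than by staring at individual machines.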

