Wandering Adventure Party

Stubsack: weekly thread for sneers not worth an entire post, week ending 1st February 2026

TechTakes
209 Posts, 47 Posters
  • v0ldek@awful.systems wrote:

    I had to do a triple take on that “won’t deem anyone inferior” like what the fuck are you talking about. The core of conservatism is the belief in rigid hierarchies! Hierarchies have superiors and inferiors by definition!

    fiat_lux@lemmy.world (#85):

    That depends on if you consider the “inferior” to be human, if they’re even still alive after the eugenics part.

    • bluemonday1984@awful.systems wrote:

      Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

      Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

      Any awful.systems sub may be subsneered in this subthread, techtakes or no.

      If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

      The post Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

      Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

      (Credit and/or blame to David Gerard for starting this. What a year, huh?)

      bluemonday1984@awful.systems (#86):

      New blogpost from Drew DeVault, titled “The cults of TDD and GenAI”. As the title suggests, it’s drawing comparisons between how people go all-in on TDD (test-driven development) and how people go all-in on slop machines.

      It’s another post in the genre of “why did tech fall for AI so hard” that I’ve seen cropping up, in the same vein as mhoye’s Mastodon thread and Iris Meredith’s “The problem is culture”.

    • In reply to bluemonday1984@awful.systems (OP, quoted above):

        mirrorwitch@awful.systems (#87):

        OT: the respiratory illness I’ve had for five days just tested positive for Covid for the first time.

        My symptoms are fairly mild, probably because I got a vaccine booster three months ago. But I’m trying to learn more about these recent “swallowing razors” variants and dang! the online situation is bad. Finding reliable medical information in the post-slop, post-Trump Internet is a nightmare.

    • In reply to bluemonday1984@awful.systems (OP, quoted above):

          rook@awful.systems (#88):

          Just seen a clip of Aronofsky’s genAI revolutionary war thing and it is incredibly bad. Just… every detail is shit. Ways in which I hadn’t previously imagined the uncanny valley would intrude. Even if it weren’t for the simulated flesh golems, one of whom seems to be wearing Anthony Hopkins’ skin as a clumsy disguise, the framing and pacing just feel like the model was trained on endless adverts and corporate talking-head videos, and either it was impossible to edit, or none of the crew have any idea what even mediocre films look like.

          I also hadn’t appreciated before that genai lip sync/dubbing was just embarrassing. I think I’ve only seen a couple of very short genai video clips before, and the most recent at least 6 months ago, but this just seems straight up broken. Have the people funding this stuff ever looked at what is being generated?

          AmericanTruckSongs10 (@ethangach.bsky.social) on Bluesky:

          “You don’t have to be an avowed AI hater to be impressed with how lifeless and unwatchable this is.”

    • In reply to bluemonday1984@awful.systems (OP, quoted above):

            soyweiser@awful.systems (#89):

            Finally read Greg Egan’s Permutation City, then looked at the LW discussion of the book, and I feel like they missed a lot of things (and why are they talking about the “dust theory” like it is real?). Oof, the Wikipedia page on the themes and setting has a similar problem, though. It is like they all ignored the second half of the book and the themes contained within it; not one mention of the city being called Elysium, for example.

            And of course Zack_M_Davis thinks the plotline of the guy who killed the drug dealer (not a sex worker, LW people; the text makes this pretty clear) he was in a lowkey romantic relationship with should have been cut (also not mentioned on the Wikipedia page, iirc).

            Also, nobody seems to talk about the broken sewage pipe.

    • In reply to rook@awful.systems (#88, quoted above):

              blakestacey@awful.systems (#90):

              Mateusz Fafinski (‪@calthalas.bsky.social‬):

              Happy to see that there is no need to worry about the historical accuracy of new 1776 AI slop because it happens in the mystical land of Λamereedd.

    • In reply to bluemonday1984@awful.systems (OP, quoted above):

                rook@awful.systems (#91):

                Amazon Found ‘High Volume’ Of Child Sex Abuse Material in AI Training Data

                The tech giant reported hundreds of thousands of cases of suspected child sexual abuse material, but won’t say where it came from

                I’ll bet.

                (Bloomberg, www.bloomberg.com)

    • In reply to mirrorwitch@awful.systems (#87, quoted above):

                  soyweiser@awful.systems (#92):

                  Get well soon. Yeah, I follow some people on bsky who have differing opinions on Covid, and even there it is bad. (It doesn’t help that the one real expert I follow who says “Covid is over” (to massively oversimplify his points) treats any concern that it isn’t over as if people are antivaxxers and anti-science. I don’t want to get involved, but several times I was close to replying “you are strawmanning their position and reading things into what they are saying which they are not”, but that would just get me blocked or called a Covid truther or something, so why bother.)

    • sc_griffith@awful.systems wrote:

                    i actually find the content pretty amusing, since it amounts to “have you guys tried using words correctly every once in a while?”

                    soyweiser@awful.systems (#93):

                    Yep. I have not paid attention to whether they use these words correctly, but considering how they treat the other rules of Rationalism more as guidelines anyway, I’m gonna guess no.

                    But compared to an actual ‘here is yall speaking like evil robots, stop that’ article it is lacking. Which is good, as that is our territory. ;).

    • In reply to bluemonday1984@awful.systems (OP, quoted above):

                      mirrorwitch@awful.systems (#94):

                      Copy-pasting my tentative doomerist theory of generalised “AI” psychosis here:

                      I’m getting convinced that in addition to the irreversible pollution of humanity’s knowledge commons, and in addition to the massive environmental damage, and the plagiarism/labour issues/concentration of wealth, and other well-discussed problems, there’s one insidious damage from LLMs that is still underestimated.

                      I will make without argument the following claims:

                      Claim 1: Every regular LLM user is undergoing “AI psychosis”. Every single one of them, no exceptions.

                      The Cloudflare person who blog-posted self-congratulations about their “Matrix implementation” that was mere placeholder comments is one step into a continuum with the people whom the chatbot convinced they’re Machine Jesus. The difference is of degree not kind.

                      Claim 2: That happens because LLMs have tapped by accident into some poorly understood weakness of human psychology, related to the social and iterative construction of reality.

                      Claim 3: This LLM exploit is an algorithmic implementation of the feedback loop between a cult leader and their followers, with the chatbot performing the “follower” role.

                      Claim 4: Postindustrial capitalist societies are hyper-individualistic, which makes human beings miserable. LLM chatbots exploit this deliberately by artificially replacing having friends. It is not enough to generate code; they make the bots feel like they talk to you—they pretend a chatbot is someone. This is a predatory business practice that reinforces rather than solves the loneliness epidemic.

                      n.b. while the reality-formation exploit is accidental, the imaginary-friend exploit is by design.

                      Corollary #1: Every “legitimate” use of an LLM would be better done by having another human being you talk to (for example, a human coding tutor or trainee dev rather than Claude Code). By “better” I mean: creating more quality, more reliably, at prosocial costs, while making everybody happier. But LLMs do it faster, at larger quantities, with more convenience, while atrophying empathy.

                      Corollary #2: Capitalism had already created artificial scarcity of friends, so that working communally was artificially hard. LLMs made it much worse, in the same way that an abundance of cheap fast food makes it harder for impoverished folk to reach nutritional self-sufficiency.

                      Corollary #3: The combination of claim 4 (we live in individualist loneliness hell) and claim 3 (LLMs are something like a pocket cult follower) will have absolutely devastating sociological effects.

    • In reply to mirrorwitch@awful.systems (#87, quoted above):

                        nightsky@awful.systems (#95):

                        Have a quick recovery! It sucks that society has collectively given up on trying to mitigate its spread.

    • In reply to mirrorwitch@awful.systems (#94, quoted above):

                          nightsky@awful.systems (#96):

                          Claim 1: Every regular LLM user is undergoing “AI psychosis”. Every single one of them, no exceptions.

                          I wouldn’t go as far as using the “AI psychosis” term here, I think there is more than a quantitative difference. One is influence, maybe even manipulation, but the other is a serious mental health condition.

                          I think that regular interaction with a chatbot will influence a person, just like regular interaction with an actual person does. I don’t believe that’s a weakness of human psychology, but that it’s what allows us to build understanding between people. But LLMs are not people, so whatever this does to the brain long term, I’m sure it’s not good. Time for me to be a total dork and cite an anime quote on human interaction: “I create them as they create me” – except that with LLMs, it actually goes only in one direction… the other direction is controlled by the makers of the chatbots. And they have a bunch of dials to adjust the output style at any time, which is an unsettling prospect.

                          while atrophying empathy

                          This possibility is to me actually the scariest part of your post.

    • In reply to rook@awful.systems (#91, quoted above):

                            soyweiser@awful.systems (#97):

                            A long time ago I heard a story of some guy arrested here in .nl whose porn collection was also seized (which was big at the time: several gigs of images, to give an indication of how long ago this was). It included some child porn, not because he was hunting for that, but just because he had downloaded everything he could find, and some of it was CSAM.

                            What this makes me wonder is whether it is something like that here, and if so, just how much porn did they ingest? I know people have mentioned these things can recreate Marvel movies without problems, but would they also be able to recreate whole porn movies? Has anybody tested that?

    • In reply to nightsky@awful.systems (#96, quoted above):

                              mirrorwitch@awful.systems (#98):

                              I don’t mean the term “psychosis” as a pejorative; I mean it in the clinical sense of forming a model of the world that deviates from consensus reality and, like, getting really into it.

                              For example, the person who posted the Matrix non-code really believed they had implemented the protocol, even though it was patently obvious to everyone else that the code wasn’t there. That vibe-coded browser didn’t even compile, but its author was also living in a reality where they had made a browser. The German botany professor thought it was a perfectly normal thing to admit in public that his entire academic output for the past 2 years was autogenerated, including his handling of student data. And it’s by now a documented phenomenon that programmers think they’re being more productive with LLM assistants, but when you try to measure the productivity, it evaporates.

                              These psychoses are, admittedly, much milder and less damaging than the Omega Jesus desert UFO suicide case. But they’re delusions nonetheless, and moreover they’re caused by the same mechanism, viz. the chatbot happily doubling down on everything you say—which means at any moment the “mild” psychoses, too, may end up in a feedback loop that escalates them to dangerous places.

                              That is, I’m claiming LLMs have a serious issue with hallucinations, and I’m not talking about the LLM hallucinating.


                              Notice that this claim is quite independent of the fact that LLMs have no real understanding or human-like cognition, or that they necessarily produce errors and can’t be trusted, or that these errors happen to be, by design, the hardest possible type of error to detect—signal-shaped noise. These problems are bad, sure. But the thing where people hooked on LLMs inflate delusions about what the LLM is even actually doing for them—that seems to me an entirely separate mechanism; something that happens when a person has a syntactically very human-like conversation partner that is a perfect slave, always available, always willing to do whatever you want, with zero pushback, who engages in a crack-cocaine version of brown-nosing. That’s why I compare it to cult dynamics—the kind of group psychosis in a cult isn’t a product of the leader’s delusions alone; there’s a way the followers vicariously power-trip along with their guru and constantly inflate his ego to chase the next hit together.

                              It is conceivable to me that someone could make a neutral-toned chatbot programmed to never 100% agree with the user and it wouldn’t generate these psychotic effects. Only no company will do that because these things are really expensive to run and they’re already bleeding money, they need every trick in the book to get users to stay hooked. But I think nobody in the world had predicted just how badly one can trip when you have “dr. flattery the alwayswrong bot” constantly telling you what a genius you are.

                              • B bluemonday1984@awful.systems

                                Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

                                Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

                                Any awful.systems sub may be subsneered in this subthread, techtakes or no.

                                If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

                                The post Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

                                Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

                                (Credit and/or blame to David Gerard for starting this. What a year, huh?)

                                self@awful.systems
                                wrote last edited by
                                #99

                                I just made a quick change to our lemmy-ui config to hopefully quickly restart it when it once again leaks all of its memory and gets stuck in a tight set of GC cycles due to a “high” amount of traffic (normal for a public website being scraped, even with iocaine) even though it’s a node application that barely does anything at all

                                 I can’t monitor this one as closely as I’d like today, so if something breaks and it doesn’t resolve itself within a couple of minutes, or the instance looks like it’s in a crash loop, ping [object Object] on mastodon and I’ll try to get to it as soon as I can
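                                 For anyone curious what a restart-on-leak config change can look like: here’s a minimal sketch as a systemd override. The unit name, memory ceiling, and the assumption that lemmy-ui runs as a systemd service are all mine, not details from the post.

```ini
# /etc/systemd/system/lemmy-ui.service.d/override.conf
# Hypothetical unit name -- adjust to however lemmy-ui is actually run.
[Service]
# Have the kernel kill the process once it leaks past this ceiling
# instead of letting it thrash in GC cycles.
MemoryMax=1G
# Then bring it straight back up, with a short backoff.
Restart=always
RestartSec=5
```

                                 Apply with `systemctl daemon-reload` followed by `systemctl restart lemmy-ui`.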

                                • B bluemonday1984@awful.systems


                                  blakestacey@awful.systems
                                  wrote last edited by
                                  #100

                                  Jeff Sharlet (@jeffsharlet.bsky.social):

                                  The college at which I’m employed, which has signed a contract with the AI firm that stole books from 131 colleagues & me, paid a student to write an op-ed for the student paper promoting AI, guided the writing of it, and did not disclose this to the paper. […] the student says while the college coached him to write the oped, he was paid by the AI project, which is connected with the college. The student paper’s position is that the college paid him. And there’s no question that college attempted to place a pro-AI op-ed.

                                  College approached and paid student to write op-ed in The Dartmouth

                                  The Dartmouth ran the article on Nov. 17 without knowledge that the College had been involved. 


                                  College approached and paid student to write op-ed in The Dartmouth - The Dartmouth (www.thedartmouth.com)

                                  • S soyweiser@awful.systems

                                    Finally read Greg Egan - Permutation City, and looked at the LW discussion on the book, and I feel like they missed a lot of things (and why are they talking about the ‘dust theory’ like it is real). Oof, the wikipedia page on the themes and settings has a similar problem however. It is like they all ignored the second half of the book and the themes contained within it, not one mention of the city being called Elysium for example.

                                    And of course Zack_M_Davis thinks the plotline of the guy who killed the drug dealer (not a sex worker/prostitute lw people, please the text makes this pretty clear) he was in a lowkey romantic relationship with should have been cut (also not mentioned on the wikipedia page iirc).

                                    Also, nobody seems to talk about the broken sewage pipe.

                                    saucerwizard@awful.systems
                                    wrote last edited by
                                    #101

                                    these people are just mind numbingly lame.

                                     • rook@awful.systems

                                       Just seen a clip of aronofsky’s genai revolutionary war thing and it is incredibly bad. Just… every detail is shit. Ways in which I hadn’t previously imagined that the uncanny valley would intrude. Even if it weren’t for the simulated flesh golems, one of whom seems to be wearing anthony hopkins’ skin as a clumsy disguise, the framing and pacing just feels like the model was trained on endless adverts and corporate talking-head videos, and either it was impossible to edit, or none of the crew have any idea what even mediocre films look like.

                                      I also hadn’t appreciated before that genai lip sync/dubbing was just embarrassing. I think I’ve only seen a couple of very short genai video clips before, and the most recent at least 6 months ago, but this just seems straight up broken. Have the people funding this stuff ever looked at what is being generated?

                                      AmericanTruckSongs10 (@ethangach.bsky.social)

                                       You don't have to be an avowed AI hater to be impressed with how lifeless and unwatchable this is.


                                      Bluesky Social (bsky.app)

                                      self@awful.systems
                                      wrote last edited by
                                      #102

                                      aronofsky should’ve stuck with the true future of entertainment while he was ahead

                                      it somehow isn’t shocking that the aronofsky movies I liked seem to have been substantially plagiarized from Satoshi Kon’s work

                                      • S soyweiser@awful.systems

                                         A long time ago I heard a story of some guy arrested here in .nl, and they also got his porn collection (which was big at the time, several gigs of images! (to give an indication of how long ago it was)), and it also included some child porn, not because he was hunting for that, but just because he had downloaded everything he could find and some of them were csam images.

                                        What this makes me wonder about is if it is something like this, and if it is, just how much porn did they ingest? I know people have mentioned these things can recreate marvel movies without problems, but would they also be able to recreate whole porn movies? Has anybody tested that?

                                        rook@awful.systems
                                        wrote last edited by
                                        #103

                                        The whole thing seems extra sketchy to me, because of it coinciding with the firing of an awful lot of people. It sounds a little bit like Amazon’s hand might have been forced here, because they fired someone who knew where the skeletons were and realised this was their last chance to have any kind of control over the narrative.

                                        • B bluemonday1984@awful.systems

                                           New blogpost from Drew DeVault, titled “The cults of TDD and GenAI”. As the title suggests, it’s drawing comparisons between how people go all-in on TDD (test-driven development) and how people go all-in on slop machines.

                                           It’s another post in the genre of “why did tech fall for AI so hard” that I’ve seen cropping up, in the same vein as mhoye’s Mastodon thread and Iris Meredith’s “The problem is culture”.

                                          jfranek@awful.systems
                                          wrote last edited by
                                          #104

                                          Dang, I want to find this article more relatable than I do. Most software I have dev experience with doesn’t have the problem of relying on automated tests too much, but the exact opposite.

                                          And while I very much write tests for the dopamine high and false sense of security green checkmarks provide, I still prefer that to the real sense of un-security of not having tests.
