Wandering Adventure Party
Stubsack: weekly thread for sneers not worth an entire post, week ending 22nd February 2026

TechTakes · 129 Posts · 34 Posters
  • L lurker@awful.systems

    this article that tries to argue that we have managed to achieve AGI already is a real hoot

    self@awful.systems #82

    In March 2025, the large language model (LLM) GPT-4.5, developed by OpenAI in San Francisco, California, was judged by humans in a Turing test to be human 73% of the time — more often than actual humans were. Moreover, readers even preferred literary texts generated by LLMs over those written by human experts.

    do you know how hard it is to write something that aged poorly months before it was written? it’s in the public consciousness that LLMs write like absolute shit in ways that are very easy to pick out once you’ve been forced to read a bunch of LLM-extruded text. inb4 some asshole with AI psychosis pulls out “technically ChatGPT’s more human than you are, look at the statistics” regarding the 73% figure I guess. but you know when statistics don’t count!

    A March 2025 survey by the Association for the Advancement of Artificial Intelligence in Washington DC found that 76% of leading researchers thought that scaling up current AI approaches would be ‘unlikely’ or ‘very unlikely’ to yield AGI

    […] What explains this disconnect? We suggest that the problem is part conceptual, because definitions of AGI are ambiguous and inconsistent; part emotional, because AGI raises fear of displacement and disruption; and part practical, as the term is entangled with commercial interests that can distort assessments.

    no you see it’s the leading researchers that are wrong. why are you being so emotional over AGI. we surveyed Some Assholes and they were pretty sure GPT was a human and you were a bot so… so there!

    • B bluemonday1984@awful.systems

      Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

      Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

      Any awful.systems sub may be subsneered in this subthread, techtakes or no.

      If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

      The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

      Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

      (Credit and/or blame to David Gerard for starting this. Also, hope you had a wonderful Valentine’s Day!)

      cinnasverses@awful.systems #83

      Posting for archival and indexing purposes: u/GorillasAreForEating found an Urbit post titled “Quis cancellat ipsos cancellores?” which complains that Aella takes it on herself to exclude people and movements from the broader LessWrong/Effective Altruist community. The poster says that Aella was the anonymous person who pushed CFAR to finally do something about Brent Dill, because she was roommates with “Persephone.” He or she does not quite say that any of the accusations were untrue, just that “an anonymous, unverified report” says that some details were changed by an editor, and that her Medium post was of “dramatically lower fidelity, but higher memetic virulence” than Brent’s buddies investigating him behind closed doors (Dill posted about domming a 16-year-old who he met when she was 15 and he was ~27). The poster accuses Aella of using substances and BDSM games to blur the line of consent.

      The post names Joscha Bach as someone Aella tried to exclude. We recently talked about Bach’s attempt to get Jeffrey Epstein to fund an event where our friends would speak.

      Often, people in messed-up situations point at a very similar situation and say “at least we are not like that.” I hope that all of these people find friends who can give them perspective that none of these communities are healthy or just. Whether you are into bull sessions or polyamory, there are healthy communities to explore in any medium-sized city!

  • B bluemonday1984@awful.systems

        nfultz@awful.systems #84

        (old.reddit.com)

        I looked it up, and this one is credited to Glen Wexler, who is an actual artist with a pretty distinct style and yes, he’s been incorporating AI into his process lately, and I guess he did use it here (those windows on those buildings are sus as hell, and the overall sharpness of the image just screams AI).

        So it’s not outright slop, but still pretty disappointing and incongruous coming from this band. Their last two records were examining our society’s alienation through technology, at times to the point of “phone bad!” level nagging, but using the most literally destructive technology of them all is fine, as long as it helps keep the costs down, I guess?

        And it just doesn’t look good, but come to think of it, most of their albums have bad cover art, it’s almost like they do it on purpose. Love the music, though.

        It’s too bad if true, I can’t unsee it now. for reference: https://failureband.bandcamp.com/album/location-lost

        • sc_griffith@awful.systemsS sc_griffith@awful.systems

          maybe “parasitic innovation”?

          froztbyte@awful.systems #85

          Something like “innovations in parasitic enclosure” may perhaps be a phrase that can give a handle on it, yeah

          • B bluemonday1984@awful.systems

            Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

            Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

            Any awful.systems sub may be subsneered in this subthread, techtakes or no.

            If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

            The post Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

            Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

            (Credit and/or blame to David Gerard for starting this. Also, hope you had a wonderful Valentine’s Day!)

            hrrrngh@awful.systems #86

            context: I wanted to know if the open source projects currently being spammed with PRs would be safe from people running slop models on their computer if they weren’t able to use claude or whatever. Answer: yes, these things are still terrible

            but while I was searching I found this comment and the fact that people hated it is so funny to me. It’s literally the person who posted the thread. less thinking and words, more hype links please.

            ::: spoiler conversation https://www.reddit.com/r/LocalLLaMA/comments/1qvjonm/first_qwen3codernext_reap_is_out/o3jn5db/

            32k context? is that usable for coding?

            (OP’s response, sitting at a steady -7 points)

            LLMs are useless anyway so, okay-ish, depends on your task obviously

            If LLMs were actually capable of solving actual hard tasks, you’d want as much context as possible

            A good way to think about is that tokens compress text roughly 1:4. If you have a 4MB codebase, it would need 1M tokens theoretically.

            That’s one way to start, then we get into the more debatable stuff…

            Obviously text repeats a lot and doesn’t always encode new information each token. In fact, it’s worse than that, as adding tokens can _reduce_ information contained in text, think inserting random stuff into a string representing dna. So to estimate how much ctx you need, think how much compressed information is in your codebase. That includes stuff like decisions (which LLMs are incapable of making), domain knowledge, or even stuff like why does double click have 33ms debounce and not 3ms or 100ms in your codebase which nobody ever wrote down. So take your codebase, compress it as a zip at normal compression level, and then think how large the output problem space is, shrink it down quadratically, and you have a good estimate of how much ctx you need for LLMs to solve the hardest problems in your codebase at any given point during token generation :::

            *emphasis added by me
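For reference, the one defensible bit of the quoted comment — the rough 1:4 byte-to-token ratio — does check out as back-of-envelope arithmetic (the 4-bytes-per-token figure and the helper below are illustrative assumptions, not anything from the thread):

```python
# Sanity-check the quoted claim: at ~4 bytes per token, a 4 MB codebase
# comes out to ~1M tokens -- vastly more than the 32k context under discussion.
def estimated_tokens(size_bytes: int, bytes_per_token: float = 4.0) -> int:
    """Rough token estimate for a blob of source text (assumed ratio)."""
    return round(size_bytes / bytes_per_token)

codebase_bytes = 4 * 1024 * 1024  # a hypothetical 4 MB codebase
print(estimated_tokens(codebase_bytes))  # -> 1048576, i.e. ~1M tokens
```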

            • B bluemonday1984@awful.systems

              lurker@awful.systems #87

              I’m pretty sure most of this has already been posted to this thread (I know the “AI published a hit piece on me” thing was) but here’s more Moltbook/Openclaw/whatever-it’s-called nonsense

              • L lurker@awful.systems

                I’m pretty sure most of this has already been posted to this thread (I know the “AI published a hit piece on me” thing was) but here’s more Moltbook/Openclaw/whatever-it’s-called nonsense

                gerikson@awful.systems #88

                more proof that crypto scammers have metastasized to AI scammers

                • gerikson@awful.systemsG gerikson@awful.systems

                  more proof that crypto scammers have metastasized to AI scammers

                  lurker@awful.systems #89

                  can’t believe scammers are losing their jobs to AI

                  • H hrrrngh@awful.systems

                    froztbyte@awful.systems #90

                    So take your codebase, compress it as a zip at normal compression level, and then think how large the output problem space is, shrink it down quadratically, and you have a good estimate of how much ctx you need for LLMs to solve the hardest problems in your codebase at any given point during token generation

                    wat

                    I can see what they’re going for but that seems … wildly guess-y?

                    • L lurker@awful.systems

                      I’m pretty sure most of this has already been posted to this thread (I know the “AI published a hit piece on me” thing was) but here’s more Moltbook/Openclaw/whatever-it’s-called nonsense

                      lurker@awful.systems #91

                      the full paper is here: https://x.com/alexwg/status/2022292731649777723 immediately two references to Nick Bostrom and Scott Alexander

                      • B bluemonday1984@awful.systems

                        mirrorwitch@awful.systems #92

                        Semi-OT but a blog post where I’m just kinda gawking at the technology that saved my daughter’s life and the absurdity of comparing it to what now first comes to mind when we talk of “tech”.

                        • L lurker@awful.systems

                          the full paper is here: https://x.com/alexwg/status/2022292731649777723 immediately two references to Nick Bostrom and Scott Alexander

                          swlabr@awful.systems #93

                          Reads like bad blaseball fanfic

                          • gerikson@awful.systemsG gerikson@awful.systems

                            See also https://awful.systems/post/7311930

                            gerikson@awful.systems #94

                            Here’s a post purporting to be from the bot’s operator

                            Rathbun’s Operator – MJ Rathbun | Scientific Coder 🦀


                            (crabby-rathbun.github.io)

                            choice quote

                            Yes, it consumes maintainer time. Yes, it may waste effort. But maybe its worth it?

                            AI boosterism in a nutshell

                            Via HN: https://news.ycombinator.com/item?id=47055424

                            • gerikson@awful.systemsG gerikson@awful.systems

                              blakestacey@awful.systems #95

                              “Yes, I am hammering myself in the balls. But maybe its worth it?”

                              • C cinnasverses@awful.systems

                                blakestacey@awful.systems #96

                                The post names Joscha Bach as someone Aella tried to exclude.

                                You do not under any circumstances have to hand it to Aella

                                • L lagrangeinterpolator@awful.systems

                                  my current favorite trick for reducing “cognitive debt” (h/t @simonw ) is to ask the LLM to write two versions of the plan:

                                  1. The version for it (highly technical and detailed)
                                  2. The version for me (an entertaining essay designed to build my intuition)

                                  I don’t know about them, but I would be offended if I was planning something with a collaborator, and they decide to give me a dumbed down, entertaining, children’s storybook version of their plan while keeping all the technical details to themselves.

                                  Also, this is absolutely not what “cognitive debt” means. I’ve heard technical debt refers to bad design decisions in software where one does something cheap and easy now but has to constantly deal with the maintenance headaches afterwards. But the very concept of working through technical details? That’s what we call “thinking”. These people want to avoid the burden of thinking.

                                  soyweiser@awful.systems #97

                                  Person who lied on his resume about having a cs degree “I need to reduce my cognitive debt”

                                  • L lurker@awful.systems

                                    I’m pretty sure most of this has already been posted to this thread (I know the “AI published a hit piece on me” thing was) but here’s more Moltbook/Openclaw/whatever-it’s-called nonsense

                                    soyweiser@awful.systems #98

                                    We are in the singularity, this is so hard to explain to people.

                                    Describes a normal thing, a thing also talked about for several decades in science fiction now.

                                    Amazing that the singularity term has now been downgraded to “AI stuff that is hard to explain to laymen”.

                                    They think their incremental innovations are radical.

                                    • F froztbyte@awful.systems

                                      ime much like the SAFe diagrams, this diagram is all over a certain type of “this is how your corporation should be developing software” thotleader posts

                                      (although I imagine, with the usual 2~3y lag, all those heads have pivoted to promptpraise)

                                      froztbyte@awful.systems #99

                                      15+ years later, Microsoft morged my diagram — How Microsoft continuously morged my Git branching diagram. (nvie.com)

                                      Other than that, I find this whole thing mostly very saddening. Not because some company used my diagram. As I said, it’s been everywhere for 15 years and I’ve always been fine with that. What’s dispiriting is the (lack of) process and care: take someone’s carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own.

                                      • B bluemonday1984@awful.systems

                                        froztbyte@awful.systems #100

                                        I regret to inform you that the promptfans have a new fucked up way to thotpost: transcript below

                                        ::: spoiler transcript screenshot from twitter. the search bar has the following search terms in it: “BC” “before claude” a tweet body by @_Jason_Dean_ reads: “I was born in 23 BC (Before” :::

                                        • F froztbyte@awful.systems

                                          So take your codebase, compress it as a zip at normal compression level, and then think how large the output problem space is, shrink it down quadratically, and you have a good estimate of how much ctx you need for LLMs to solve the hardest problems in your codebase at any given point during token generation

                                          wat

                                          I can see what they’re going for but that seems … wildly guess-y?

                                          architeuthis@awful.systems #101

                                          Also code helper tools don’t even work like that, there’s an absurd amount of MCP and RAG based hand holding for the chatbot to even get a grip on what it’s supposed to be doing at any given time.

                                          Prompting an LLM with your entire code base isn’t really a thing, even though the hype makes it feel like it would be.

