Wandering Adventure Party


The LLM discourse on the Fediverse has really irked me the last few days.

61 Posts, 42 Posters
• Phil

    @lproven@social.vivaldi.net @xs4me2@mastodon.social @reading_recluse@c.im
    Hasn't been my experience. What have you tested it with?

    Even tiny models in the 4-12B range have been able to handle the things I need (though granted, not as well as the 24-30B range).

    My use-case is saving my hands from typing up repetitive patterns, analysing my journals from several angles (e.g. what's my average mood based on the wording I use in my journals, and how does that relate to medical things like migraines), and as a parrot that repeats my plans/calendar back to me in different words, so I can overcome my own biases more easily.

    I have found the available models entirely sufficient for these tasks.

    Not for coding, though. Even Qwen3-Coder-Next, an 80B behemoth, just plain sucks at code.

    Now, to be clear: I'm not saying they're always accurate when I use LLMs. I'm saying that because I use them with data I type up by hand and am intimately familiar with, they save me time and mental effort, because spotting problems is easy.

    I wouldn't use them for any subject I'm not already well grounded in, and in that specific way, I agree with you.

    But I also wouldn't say they're extremely or dangerously bad at digesting and exploring information as such. No more so than code written by juniors without supervision.

    Ultimately it's on the user to ensure the tool's output meets requirements.

    Anecdotally, people aren't great at processing large amounts of information either. I work in infosec and curate a rather complex inventory/risk/audit/reporting toolkit. I pull data from over a dozen critical systems and sub-systems, networks, etc., including vast amounts of hand-written documentation, guides and explanations about how all of this works together.

    I'm still the only person capable of actually using the entire toolset in concert, not even going into further development/integrations. Others rely on Cursor/Claude Code to use them. And that's fine by me: I'd rather have tools that get used than tools that are entirely dependent on me.

    I guess my point is that in this scenario the problem isn't LLMs themselves. The problem is people who don't take time to read and understand the requirements, input and output.

    (Of course, this is putting aside the ethical/political/economic/ecological problems, to keep this conversation focused on the technical merits and demerits.)
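The "average mood based on the wording" idea above can be sketched without any model at all. This is a hypothetical stand-in, not Phil's actual setup: he feeds entries to a local LLM, whereas this version uses plain keyword counting, and the word lists are invented for illustration.

```python
# Hypothetical sketch of scoring "average mood based on the wording I use
# in my journals". The POSITIVE/NEGATIVE word lists are illustrative
# assumptions, not part of any real pipeline.

POSITIVE = {"calm", "happy", "rested", "productive"}
NEGATIVE = {"tired", "migraine", "anxious", "stressed"}

def mood_score(entry: str) -> float:
    """Score one journal entry in [-1, 1]: +1 all-positive, -1 all-negative."""
    words = [w.strip(".,!?").lower() for w in entry.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def average_mood(entries: list[str]) -> float:
    """Average per-entry scores, e.g. to compare against migraine days."""
    if not entries:
        return 0.0
    return sum(mood_score(e) for e in entries) / len(entries)
```

A real pipeline would swap `mood_score` for a call to the local model and keep the averaging step unchanged.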

• xs4me2 (#44)

    @phil @lproven @reading_recluse

    Exactly, and as always, truth and reality are nuanced. I will be using it, and I will use my critical thinking (always).
• social elephant in the room

    @papageier @reading_recluse Machine-woven cloth was answering an essential need in a profitable capitalistic way. Can we say the same about LLMs?

    I think it is not inevitable, but time will tell.

• Johnny ‘Decimal’ Noble (#45)

    @tseitr @papageier @reading_recluse My problem with this framing is: who gets to decide?

    Define 'essential'. Is a new generation of MacBooks 'essential'? Not really. The ones we have are amazing. But nobody's boycotting the progress being made in chip design.

    But the anti-LLM crowd seem to have decided: not having LLMs is 'enough'. Having them is superfluous. They're not 'needed'.

    I get the pushback. I'll never use one to write prose, because prose comes from my human heart.

    But to deny their utility in the world of code generation is to be dogmatic. The vast, vast majority of code generation isn't art: it's the rote stitching together of existing pieces to make a new thing.

    Claude is _much_ better at that than I am. If properly controlled by me, the result is better and more secure.

    So, I use Claude. Just like I use an IDE and a higher-level language, and just like I deploy to an edge network run by someone else vs. standing up my own. Because doing that is better than not doing that.
• Reading Recluse (original post)

    The LLM discourse on the Fediverse has really irked me the last few days.

    Refusing to read writing made with the use of LLMs, and refusing to give time to writers who use, promote or justify the use of LLMs, is not purity culture; it's a boycott. It's a political act of withdrawing my time, resources and support from something that I find deeply morally wrong. It's protest. I have a choice and I refuse.

    LLMs are exploitative, destructive, biased, mediocre parroting machines. Using them has a negative impact on the climate, the arts, the quality of the internet, the job market, the economy, the accessibility of electronics, even on skill development, creativity and mental health. LLMs are made and trained on the unpaid labour of millions, if not billions, of people who didn't consent. Their generic output litters the path to finding anything by true human creators.

    Wherever I can, for as long as I can, I reject LLMs and anything that is related to them. I'm boycotting.

• Flash Mob Of One (#46)

    @reading_recluse This take bugs me so much. Calling boycotting of LLMs 'purity culture' is the dumbest-ass take since Dems smeared Bernie as a sexist.
• lproven

    @phil @xs4me2 @reading_recluse My current favourite paper on this:

    "When ChatGPT summarises, it actually does nothing of the kind" (R&A IT Strategy & Architecture, ea.rna.nl): "One of the use cases I thought was reasonable to expect from ChatGPT and Friends (LLMs) was summarising. It turns out I was wrong. ChatGPT isn't summarising at all, it only looks like it. What it does is something else, and that something else only becomes summarising in very specific circumstances."

• xs4me2 (#47)

    @lproven @phil @reading_recluse

    There is no substitute for reading the final material on your subject of study yourself, line by line, and internalising it. I remember the days of our paper scientific library, where I would stay a whole afternoon reviewing Phys. Rev. B, Applied Physics, Applied Optics and more on the topic of my research, and in the end had a stack of paper copies to take home and read. Basically, that has not changed with online use; it has just become much faster and more efficient.
• xs4me2

    @lproven @dynamite_ready @reading_recluse

    In my opinion, you are incorrect here; a user is always responsible for digesting the assumed truth as they observe it. Especially with tools. There is no substitute for critical thinking. And there never will be.

    Truth and social surrounds are infinitely more complex than analysing a game of chess.

• Ben Tasker (#48)

    @xs4me2 @lproven @dynamite_ready @reading_recluse

    What you're essentially suggesting here is that LLMs are only good for consuming information if the user either already has the knowledge to judge the output (in which case, why are they asking?) or spends time verifying the claims that the LLM makes (in which case, why bother asking the LLM?).

    I've seen them make some pretty important mistakes, including suggesting that a Director who wasn't on the call being summarised had authorised something.
• xs4me2 (#49), replying to Ben Tasker (#48)

    @ben @lproven @dynamite_ready @reading_recluse

    I am suggesting that a competent user can use tools in the right way, and only through in-depth knowledge of them. You can call that craftsmanship, experience or simply domain knowledge.

    That does not imply that tools, or LLMs, are useless, nor that they are without danger. A sharp chisel can cut off your finger. A poorly configured LLM can provide you with a load of nonsense...
• Phil (#50), replying to lproven

    @lproven@social.vivaldi.net @xs4me2@mastodon.social @reading_recluse@c.im
    1. That paper is from nearly two years ago, and a lot has changed. Not to mention the 'test' the author (I can't find their name, sorry) ran is pretty weak. It's much better to use an API, where you can control the full input pipeline and ensure the vendor isn't adding hidden instructions without your knowledge.
    2. I already addressed this point in my previous comment: it's on the user to verify that tools produce correct output. Relying on an LLM to do the reading in one's stead is a recipe for disaster.
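The point about controlling the full input pipeline can be sketched as assembling the request payload yourself, so every message the model receives is visible before anything is sent. The widely used chat-completions message schema is assumed here, and the model name is a placeholder, not a real deployment:

```python
import json

def build_request(system_prompt: str, document: str) -> dict:
    """Build a chat-completion payload where every message is explicit,
    so no hidden vendor instructions sit between you and the model."""
    return {
        "model": "local-model",  # placeholder id, an assumption for this sketch
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"Summarise the following:\n\n{document}"},
        ],
        "temperature": 0.0,  # keep output as repeatable as possible for checking
    }

payload = build_request(
    "You are a terse summariser. Do not add facts not present in the input.",
    "Meeting notes...",
)
print(json.dumps(payload, indent=2))  # inspect the exact input before sending
```

The summariser prompt and document text are illustrative; the payload can be inspected, logged, or diffed before it ever reaches a vendor.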

    You haven't said anything about YOUR use-case, experience, or the tests you tried.

    I'm genuinely curious: what do you imagine using an LLM is like?

    The reason I ask is that a lot of the criticism and panicking I see online (sometimes crossing into outright disrespect and bigotry) comes from an assumption that using an LLM means turning off one's brain and taking the output at face value... something we shouldn't be doing with any software anyway.

    Put another way: I don't believe the problems people attribute to LLMs are specific to LLMs. How many instances were there where management/execs took Excel output as fact when the formulas were set up wrong?

    These statistical models are no different.

• Little Art Histories (#51), replying to Reading Recluse

    @reading_recluse Completely d'accord. Also, LLM-produced "art" is so dull. I don't want to read it. For some reason my brain starts to shut down when reading an LLM-produced text. I forget the picture as soon as I close it. Same with music. AI-generated voices are so grating. The artificiality of it all makes me mad. It doesn't challenge me, it doesn't tell me anything, there is nothing intentional behind it. It's just - nothing. And it destroys the environment.
• Ox1de (#52), replying to Reading Recluse

    @reading_recluse u do u
• FOSStastic (#53), replying to Reading Recluse

    @reading_recluse I have to disagree on one thing: I've used LLMs for complex social issues I faced in real life, and (in hindsight) they correctly determined that it wasn't my fault and that nothing was wrong with me. So for me, they improved my mental health in difficult times and kept me from getting depressed.

    So there are definitely beneficial use cases for them. But they're also very overrated, love to hallucinate, and are unable to comprehend nuance in writing.
• Fergabell 😷 🌱 (#54), replying to Reading Recluse

    @reading_recluse What disgusts me is the total disconnect from the natural world and the devastating effects of human activity, in most of its forms, on nature. We are hurtling toward ecocide and a massive planetary collapse of current life forms. And what do they do? Grasp and exploit and posture and perform and strut in their massive ignorance of how a closed, interdependent, symbiotic living system actually works. The human-supremacy religion means the death of all of us, and a magical world full of beauty and wonder gone before its time.
• Reading Recluse (#55), replying to Fergabell

    @fergabell Completely true; I fully agree.

    I really dislike that most LLM-defenders in my comments right now say something like: "Well actually, in this specific case LLM usage was helpful for me personally, so..."

    Even entertaining the thought that it's somehow useful for someone somewhere, that doesn't erase the extreme damage it's doing to the world and to us collectively, or the massive scale of exploitation it's engaging in to keep it all afloat.
• lproven (#56), replying to Ben Tasker

    @ben @xs4me2 @dynamite_ready @reading_recluse No, that is not what I am suggesting at all.

    You are trying to interpret my position on this through the lens of what *you* think they are good for.
• lproven (#57), replying to xs4me2

    @xs4me2 @ben @dynamite_ready @reading_recluse And I am disagreeing with that. I'm saying they are not appropriate for this stuff, whoever uses them and regardless of how they use them.
• Bredroll (#58), replying to Reading Recluse

    @reading_recluse I feel pretty much the same, save to say it's not the concept of LLMs that I'm against; rather, it is the theft of material for training, the impunity of that theft, and the determination to disclaim any possibility of giving fair payment or recognition to those whose work is responsible for the stolen data.

    On top of that, I really, really dislike the cultish hype and forced use going on.
• Frederic (#59), replying to Reading Recluse

    @reading_recluse For me, it doesn't make sense to think about LLMs in purely dogmatic categories like "in favor" or "against". The fact is, LLMs are out there now and won't just disappear, and they CAN be powerful and useful tools if used in a reasonable way. The problem is that a lot of people are currently overusing them and don't reflect enough on when and how to use them, which leads to a lot of AI-generated crap. Maybe humanity just needs more time to find a good balance of AI usage.
• skua (#60), replying to Johnny ‘Decimal’ Noble

    @johnnydecimal @tseitr @papageier @reading_recluse
    "nobody's boycotting the progress being made in chip design"

    [waving hand]
    Over here.
    We're boycotting chips that offer us nothing more that we want or need.
    Run the web browser, word processor, printer drivers, scan drivers and network connections; do security updates; and don't make the humans waste time with the damned computers. It's a lot to ask, but new chips are not going to do this any better.
• xs4me2 (#61), replying to lproven

    @lproven @ben @dynamite_ready @reading_recluse

    Let us respectfully disagree, then.

    You are right in the sense that a lot can go wrong, as I elaborated on!

    Time will tell!
• stux :stux_santa: shared this topic.
