Wandering Adventure Party

A thought that popped into my head when I woke up at 4 am and couldn’t get back to sleep…

Joe Brockmeier (@jzb) wrote:

    @larsmb @em_and_future_cats Well, as designed, they are -- I'm not sure whether that's a built-in limitation of LLMs or not. To be fair, I am not an expert on the tech.

    As something of an aside...

    It would be really interesting if you could pair the natural language instruction input with predictable output.

    That is, for example -- if I could query, say, all the data in Wikipedia but get only accurate output. Or if you had something like Ansible with natural-language playbook creation.

    "Hey, Ansible -- I want a playbook that will install all of the packages I have currently installed and retain my dotfiles" (or something) and be guaranteed accurate output... that would be amazing.

    Except that I also worry about losing skills to do those things. I worry about the loss of incidental knowledge when researching if a computer can return *only* what you ask for and sacrifice accidental discovery.

    (I also still think search engines were something of a mistake and miss Internet directories. Yeah, I'm fun at parties....)
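
A rough, purely illustrative sketch of the kind of playbook such a prompt might be expected to produce; the package list and dotfile paths below are placeholders, not anything from the post:

    # restore.yml (hypothetical): reinstall my packages and keep my dotfiles
    - name: Reinstall packages and restore dotfiles
      hosts: localhost
      vars:
        my_packages:        # in practice, generated from the current system (rpm -qa, dpkg -l, ...)
          - git
          - vim
          - tmux
      tasks:
        - name: Install previously installed packages
          become: true
          ansible.builtin.package:
            name: "{{ my_packages }}"
            state: present

        - name: Copy dotfiles into the home directory
          ansible.builtin.copy:
            src: "{{ item }}"
            dest: "{{ ansible_env.HOME }}/"
            mode: preserve
          loop:
            - dotfiles/.bashrc
            - dotfiles/.vimrc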

Em & future cats 🇺🇦🐈🏳️‍🌈 replied (#17):

@jzb @larsmb
This too! Granted, if you’ve got to the PhD level through education before LLMs you are probably okay with using it to “finish up,” but I really worry about younger generations (even myself) when it comes to all of this.

Lars Marowsky-Brée 😷 replied (#18) to Joe Brockmeier's aside above:

@jzb It is an inherent limitation of how LLMs currently exist and are implemented.
They do strive to minimize it through scale, but it's also a reason why they do get "creative" in their answers.
Like with any stochastic algorithm, they perform best if you can (cheaply) validate the result, e.g., does a program still pass the tests?

This is much harder for complex questions about the real world.

@em_and_future_cats

Joe Brockmeier (@jzb) wrote (the original post):

    A thought that popped into my head when I woke up at 4 am and couldn’t get back to sleep…

    Imagine that AI/LLM tools were being marketed to workers as a way to do the same work more quickly and work fewer hours without telling their employers.

    “Use ChatGPT to write your TPS reports, go home at lunchtime. Spend more time with your kids!” “Use Claude to write your code, turn 60-hour weeks into four-day weekends!” “Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”

    Imagine if AI/LLM tools were not shareholder catnip, but a grassroots movement of tooling that workers were sharing with each other to work less. Same quality of output, but instead of being pushed top-down, being adopted to empower people to work less and “cheat” employers.

    Imagine if unions were arguing for the right of workers to use LLMs as labor-saving devices, instead of trying to protect members from their damage.

    CEOs would be screaming bloody murder. There’d be an overnight industry in AI-detection tools and immediate bans on AI in the workplace. Instead of Microsoft CoPilot 365, Satya would be out promoting Microsoft SlopGuard - add-ons that detect LLM tools running on Windows and prevent AI scrapers from harvesting your company’s valuable content for training.

    The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output. Maybe they’d still call them “hallucinations,” but it’d be in the terrified tone of 80s anti-drug PSAs.

    What I’m trying to say in my sleep-deprived state is that you shouldn’t ignore the intent and ill effects of these tools. If they were good for you, shareholders would hate them.

    You should understand that they’re anti-worker and anti-human. TPTB would be fighting them tooth and nail if their benefits were reversed. It doesn’t matter how good they get, or how interesting they are: the ultimate purpose of the industry behind them is to create less demand for labor and aggregate more wealth in fewer hands.

    Unless you happen to be in a very very small club of ultra-wealthy tech bros, they’re not for you, they’re against you. #AI #LLMs #claude #chatgpt

Matt replied (#19):

@jzb
Yeah, no. It's the same theme Marx recognized some 150 years ago:

    John Stuart Mill says in his “Principles of Political Economy”:
    “It is questionable if all the mechanical inventions yet made have lightened the day’s toil of any human being.”
    That is, however, by no means the aim of the capitalistic application of machinery. Like every other increase in the productiveness of labour, machinery is intended to cheapen commodities, and, by shortening that portion of the working-day, in which the labourer works for himself, to lengthen the other portion that he gives, without an equivalent, to the capitalist. In short, it is a means for producing surplus-value.

    [Capital, Vol. I, Part IV, Ch. 15]

Lars Marowsky-Brée 😷 replied (#20) to Joe Brockmeier's aside above:

@jzb On the plus side, Ansible (because it's so freaking widespread and well documented, and it is mostly fairly easy to tell if the answer would do the thing one asked for) is a fairly successful area to apply GenAI to.
Combine with ansible-lint, shellcheck etc. in the pre-commit hook, and the results are actually rather impressive.
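
For what it's worth, a minimal sketch of that kind of pre-commit setup; the hook revisions below are illustrative pins, not recommendations:

    # .pre-commit-config.yaml: run ansible-lint and shellcheck before every commit
    repos:
      - repo: https://github.com/ansible/ansible-lint
        rev: v24.2.0        # pin to whichever release you actually use
        hooks:
          - id: ansible-lint
      - repo: https://github.com/shellcheck-py/shellcheck-py
        rev: v0.9.0.6       # illustrative pin
        hooks:
          - id: shellcheck

Running "pre-commit install" then wires both linters into the Git hook, which is exactly the cheap validation step described above.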

Ryek Darkener replied (#21) to Joe Brockmeier's original post:

@jzb

    “Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”

This is already happening. So what’s the point? 😉

Back to business: You are right. Every person who is "just doing the job" is in danger of losing exactly this job, as AI will do it better and more efficiently. So the solution is to have a society of individuals who are smart enough to cope with it in an intelligent way. If not, the tech bros might win for a while, before it all collapses.

tbodt replied (#22) to Joe Brockmeier's original post:

@jzb https://en.wikipedia.org/wiki/Cluely

Dmytro Oleksiuk replied (#23) to Joe Brockmeier's original post:

@jzb “Imagine that AI/LLM tools were being marketed to workers as a way to do the same work more quickly and work fewer hours without telling their employers” — heh, most of the non-nerd people I know who work remotely are already using LLMs for this exact purpose, without any marketing.

Joe Brockmeier (@jzb) wrote:

    @bexelbie From my POV the answer to "why not both?" is that you can't really separate them right now.

    Adoption of the commercial tools for whatever purpose does more to pave the way to the negative outcomes than any positive ones.

    I think the "overemployed" thing is more of a statistical anomaly than a real thing.

    Perhaps I'm just old and inflexible, though. Ideologically, I mean. I know I'm not very flexible physically these days...

Lord Bowlich replied (#24):

@jzb @bexelbie

My answer to "why not both" is that workers adopting AI to undercut employers doesn't resolve the underlying problem, which is bullshit jobs.

I'd much rather work 20 productive hours in the week and create high-quality work during that time instead of filing TPS reports for my corporate overlords.

riese replied (#25) to Joe Brockmeier's original post:

@jzb yea. If any(!) LLM could do that, at the very least usage would be strictly limited and very expensive.
LLMs only ever perform usably in fields you're not an expert in. (That's why your average top brass thinks it's useful.)

Right now they're trying to sell it as a tool to cut out the employee. They claim all the work's done and they don't have to pay a person...

SeanBurlington 🌈 🕊️ replied (#26) to Joe Brockmeier's original post:

@jzb This would be happening if LLMs actually worked as advertised.

Brian "bex" Exelbierd replied (#27) to Joe Brockmeier's reply above:

@jzb that’s fair. I think it’s impossible for any tool to not have both a worker freedom use and a worker subjugation use. It often depends on who gets there first and is correlated with privilege.

Brian "bex" Exelbierd replied (#28) to Lord Bowlich (#24):

@lordbowlich @jzb everyone attacks TPS reports but, at least in my small sample size, the overwhelming majority of this isn’t “bullshit jobs.” It’s a symptom of regulations, information gathering, under resourcing, and, critically, domains you aren’t a master of. Many professions are filled with people whose conceit leads them to believe they understand the work of everyone else better than those people do themselves.

Lord Bowlich replied (#29) to Brian "bex" Exelbierd (#28):

@bexelbie @jzb

TPS reports are just an example.

No, bureaucracy is the bullshit. See James C. Scott's "Seeing Like a State." It doesn't matter if it's required to meet regulations or because of under-resourcing. Push the decision making to lower tiers and trust the experts in those lower tiers to make the decisions. Get rid of hierarchical systems of control and you get rid of the bullshit jobs.

ManVsXerox: Resistful Dingus replied (#30) to Joe Brockmeier's original post:

@jzb why do you think 3D printing lost its investing luster? You can own a 3D printer. You can only rent an LLM.

Read along with Matt replied (#31) to Joe Brockmeier's original post:

    The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output. Maybe they’d still call them “hallucinations,” but it’d be in the terrified tone of 80s anti-drug PSAs.

I feel like we actually did briefly see this early on when basically the first actual real-world use-case was students automating bullshit papers.

(Certainly, they reached for plagiarism, the correct word, but leveled at the students and not at all at the service provider.)

@jzb

Tubemeister replied (#32) to Joe Brockmeier's original post:

@jzb Ayup. If all the “AI will replace humans” pushers were also coming up with plans for UBI at a decent level or some kind of post-money Star Trek future…

Well, I’d still call them nuts because the tech is nowhere near good enough, but at least it would be a plan.

But you never hear anything about the human side of things, and a few billion people are not just going away.

So far, then, the idea seems to be the usual “fuck you, I’m OK” of the looting class.

Gabriel Pettier replied (#33) to Joe Brockmeier's original post:

@jzb I think the problem is more that workers have a much greater work ethic than generally acknowledged, and if a tool allows them to work faster, they'll do more work, not reclaim more time.

But more than one explanation can be true at the same time.

Joe Brockmeier replied (#34) to Read along with Matt (#31):

@matt That's true, though I'm not sure I'd call using LLMs to do homework pro-worker, either. It's kind of a different tangent.

Martin Escardo replied (#35) to Joe Brockmeier's original post:

@jzb I've just asked ChatGPT to summarize your post.

It said: "If they were good for you, shareholders would hate them."

🙂

Joe Brockmeier replied (#36) to Martin Escardo (#35):

@MartinEscardo well played. I should’ve expected that, but in my defense… I was really tired. 😂
