Wandering Adventure Party


Some of the replies to my post have now escalated into “simply seize trillions in corporate assets, raze global data centers, salt the earth, and convert everything into cattle pasture.” Which… impressive!

Uncategorized · 7 Posts · 3 Posters · 57 Views
  Chris Trottier · #1

    Some of the replies to my post have now escalated into “simply seize trillions in corporate assets, raze global data centers, salt the earth, and convert everything into cattle pasture.”

    Which… impressive! I admire the confidence. That’s not a policy proposal, that’s a Final Fantasy side quest.

    Let’s be serious for a nanosecond. AI isn’t a scrappy Kickstarter project you can unplug. It’s a multi-trillion-dollar industrial stack spanning Nvidia, Microsoft, Google, Meta, Amazon, Oracle, TSMC, global chip foundries, cloud contracts, defense budgets, university research, and GDP projections. It’s also—awkwardly—the only reason the US economy hasn’t face-planted under current tariff conditions.

    So no, we’re not going to nationalize Microsoft at dawn, bulldoze every data center, and replace them with scenic cow meadows. Not because cows don’t deserve nice things, but because every government currently funding AI sees it as national infrastructure. You don’t get rid of that with a strongly worded Mastodon post.

    The absurdity is that people are treating AI like an artisanal boutique product we can “return to sender.” Meanwhile, Nvidia briefly became more valuable than the entire German stock market. Governments aren’t debating whether AI exists; they’re budgeting around it. At this point, the argument isn’t “should AI exist.” It already does. The question is what we do now that it exists.

    Which brings me to what I actually propose:

    • Local models – if we can run spreadsheets at home, we can run language models at home. Infrastructure doesn’t have to belong to surveillance giants.
    • Open source ecosystems – transparency, auditability, forkability, and far fewer shadowy corporate hands rummaging through your data.
    • Co-ops – if we can have community gardens and credit unions, we can have community-owned AI. Artists and users should be stakeholders, not training feedstock.
    • Efficiency & sustainability – the hardware curve is already bending toward smaller, cheaper, less power-hungry models. Progress matters.
    • Regulation & policy – copyright, consent, labor protections, data rights, antitrust, compute limits. The boring adult stuff that actually changes outcomes.
    • Human creativity still wins – taste, context, intention, emotional truth, cultural literacy, and the ability to produce something deeply weird on purpose.

    None of this requires overthrowing the global economy or reenacting Les Mis in a server farm. It just requires accepting reality: the toothpaste is out, it’s not going back in, so maybe stop yelling at the tube and start deciding where the toothbrush goes.

    The goal isn’t surrender. The goal is stewardship.

      mcc · #2

      @atomicpoet "Local models" are still AI models, and so cause harm. "Open source models" are still AI models, and so cause harm. A cooperative is a funny looking corporation and so cooperative-run AI models are still AI models, and so cause harm, and also in the long run become corporate. Efficiency is not possible because the technology doesn't *work*, so the pressure to add more parameters to make it work will be ever present.

      Coexistence with LLMs is surrender. I refuse to surrender.

        Chris Trottier · #3

        @mcc That’s a moral stance, not a strategy. Wishing AI away doesn’t protect workers, change ownership, reduce surveillance, or create regulation. And even if one country bans it, another won’t, and will turn that into leverage.

        Perfect outcomes sound righteous, but “all or nothing” usually delivers nothing while the status quo keeps humming. Harm reduction, governance, and alternatives may be imperfect, but they’re how real-world change happens.

          mcc · #4

          @atomicpoet I believe that it is possible to do things.

          Contrariwise, I believe that the "AI" behemoths use compromise as a form of marketing. I believe that if you create your "harm reduction" AI, they will use its existence, before regulators and the general public, to justify their greater-harm AI approaches. A warm, cuddly, "human-scale" plagiarism machine is optimally formed for their PR. I do *not* believe harm reduction is, in this case, possible.

          But I believe your harm reduction *can* do harm.


            Chris Trottier · #5

            @mcc Harm reduction doesn’t exist to make tech giants feel safe. It exists to stop them from owning the entire future by default. If we don’t build smaller, accountable, community-run systems, the only AI left will be the extractive kind you fear.


              May Likes Toronto · #6

              @atomicpoet I don't personally believe that regulating models will help. It creates a distraction. What needs protection are human rights and human creations.

              What I do like about your proposal is the ownership model for everything we use. I think if we swap "AI" for "platform", this becomes a no-brainer.


                Chris Trottier · #7

                May Likes Toronto: Totally. Which is why protecting human rights and human creations is model regulation. If a system enables deepfakes, copyright theft, psychological harm, or erases authorship, it shouldn’t be deployed.

                Powered by NodeBB