Wandering Adventure Party


Would like to propose "Fractal Moral Hazard" as a descriptor of generative AI:

Uncategorized · 9 Posts · 4 Posters
Hanno Zulla (#1)

    Would like to propose "Fractal Moral Hazard" as a descriptor of generative AI:

    No matter which part and at what detail level you choose to look at the technology, you will discover a morally questionable or repulsive aspect of it.


Open Risk (#2)

@hzulla It doesn't quite capture the nature of the beast. If you zoom in to the core, as you would have to do with a real fractal, it's just a generative statistical model applied to tokenized language, bitmaps, etc. It is no more intrinsically wrong or immoral than logistic regression.

It is, alas, society itself that is the problem. Layer upon layer of malfunctioning patterns: captured politicians, greedy investors, a digitally illiterate public, etc. Such a society can produce monsters from anything.


Open Risk (#3)

@hzulla The silver lining is that a better society could pick up the same raw ingredients (chips, networks, programming abstractions, mathematical algorithms, etc.) and do something amazing with them.

It's not utopian: Linux, Wikipedia, OpenStreetMap and so many other great things have happened (including this fediverse). It's just that digital tech confers superpowers, and the positive forces in society were slower to recognize it than the dark side.


Johannes Link (#4)

@openrisk @hzulla "Technology itself is neutral" is a strawman argument, because relevant technology is always used by someone, in a concrete context and with a concrete goal in mind. You can argue that a statistical generator itself is neutral, but that is purely apologetic when all the relevant uses are bad.
The first goal must always be: stop the current evil usage! Only then is it worthwhile to look at potentially "good" use cases.


Open Risk (#5)

@jlink @hzulla The thesis we are arguing is that "AI" is an immoral fractal all the way down. My argument is that at the bottom of it, it's simply statistical processing of data, something that has been happening in some form or another for more than a century and in many sectors. Some of these uses are good, some are bad, and they are never uncontroversial (they are just models). To treat "AI" as something else simply misidentifies the real problems (which I indicated).


i suggest guillotines (mal) (#6)

              @jlink @openrisk @hzulla

it's also just plain false. a gun isn't "neutral"; it's made to kill people, and that's all it does.

              "but there are valid uses for an LLM" yeah and there are valid uses for gunpowder, but we're not talking about the general concept of gunpowder, we're talking about a fucking gun.

              the LLMs deployed in the current hype craze are made to consume a nation's worth of fossil fuels to give plausible-sounding responses to general input so a CEO can feel like his dick is big enough. and that's all they do.


Open Risk (#7)

@malicethegray @jlink @hzulla Well, I could also argue it's an apoplectic reaction, but Mastodon encourages me to be calmer.

In that metaphor, are LLMs guns or are they knives?

If we accept the use of statistical models in society at all, they are just the latest generation of knives, and it all boils down to the context of their production and use. Which is *highly* amenable to change, and not intrinsic.


Open Risk (#8)

                  @malicethegray @jlink @hzulla

Reversing the dystopian use of digital technology will require quite a bit more than stylized demonstrations of anger.

The challenge goes very deep and touches every aspect of digital tech: are mobile devices guns or knives? Should we ban them because they are currently guns in the exclusive service of surveillance adtech? Well, in the scheme of things it would take comparatively little to liberate people from surveillance if there were a widely usable FOSS mobile platform.


i suggest guillotines (mal) (#9)

                    @openrisk you're a real fucking prick you know that

Pteryx the Puzzle Secretary shared this topic.

Powered by NodeBB