Wandering Adventure Party

Conveniently, 2 of my posts today dovetail nicely.

    Strypey
    #1
    Conveniently, 2 of my posts today dovetail nicely. This post ...

    Strypey (@strypey@mastodon.nzoss.nz)

    If you don't believe that ongoing interaction with proprietary digital systems has any power to shift people's thinking in unhealthy ways, you're not paying attention; https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis Which would be very unusual for Cory, so I'm puzzled as to why he keeps pushing this talking point. Maybe it's his case for why people need to read Chokepoint Capitalism or Enshittification if they've already read Zuboff's book? They do, but there are better ways to make that argument. (2/2) #AI #MOLE


    Mastodon - NZOSS (mastodon.nzoss.nz)

    ... linked to a Psychology Today article about AI psychosis, which says;

    "This phenomenon, which is not a clinical diagnosis, has been increasingly reported in the media and on online forums like Reddit, describing cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals.*

    The Emerging Problem of "AI Psychosis"

    AI may be fueling psychotic delusions in a phenomenon known as "AI psychosis" or "ChatGPT psychosis." New research explains the risks.


    Psychology Today (www.psychologytoday.com)

    (1/2)

      Douglas Edwards πŸ‡ΊπŸ‡¦πŸ‡¨πŸ‡¦πŸ‡²πŸ‡½πŸ‡΅πŸ‡¦πŸ‡¬πŸ‡±πŸ‡©πŸ‡°πŸ‡ͺπŸ‡Ί
      #2
      @strypey Iatrogenic symptom amplification, particularly for depressive (including suicidal[!]) symptoms, is a well-known problem in HUMAN psychotherapy. It was a serious issue with Rogerian "nondirective" therapy, in which the therapist frequently repeats back paraphrased versions of the patient's own words. This was intended to facilitate the patient's own problem-solving, but in practice depressed patients tended to interpret it as agreement with, and validation of, their self-disparaging thoughts, leading to a vicious cycle of more and more pessimistic thinking. The same thing can easily happen with AI therapy.
