Wandering Adventure Party


A follow-on to my "Nazi Sucker-punch Problem" post, to address the most common argument I get, which boils down to:

  • 🅰🅻🅸🅲🅴 (🌈🦄)

    A follow-on to my "Nazi Sucker-punch Problem" post, to address the most common argument I get, which boils down to:

    """
    Moderated registration won't stop Nazis, because they'll just pretend to be human to fool moderators, but it will stop normal people, who won't spend the effort to answer the application question or want to wait for approval.
    """

    Okay, I'm going to try to use points that I hope are pretty acceptable to anyone arguing in good faith, and I'm going to expand the definition of Nazis to "attackers" and lump in bigots, trolls, scammers, spammers, etc. who use similar tactics.

    Attackers: we can group attackers into two main types: dedicated and opportunistic. Dedicated attackers have a target picked and a personal motive—they hunt. Opportunistic attackers have an inclination and will attack if a target presents itself—they're scavengers. In my years of experience as an admin on multiple Fedi servers, most attackers are opportunistic.

    Victims: when someone is attacked, they (and people like them) will be less likely to return to the place they were attacked.

    In general: without a motive to expend more effort, humans will typically make decisions that offer the best perceived effort-to-reward ratio in the short term (the same is true of risk-to-reward).

    Why does any of this matter?

    Because it all comes down to a fairly simple equation for the attackers: effort > reward. If this is true, then the opportunistic attackers will go elsewhere. If it isn't true, then their victims will go elsewhere.

    How can we tip that scale out of the attackers' favor?

    By making sure moderation efforts scale faster against attackers' behaviors than against normal users' behaviors.

    - A normal user only has to register once, while an attacker has to re-register every time they get suspended.

    - A normal user proves their normality with each action they take, while every action an attacker takes risks exposing them to moderation.

    - A new user (or attacker) likely spends a minute or two signing up, while a moderator can review most applications in a matter of seconds. Yes, attackers can automate signups to reduce that effort (and some do, and we have tools to address some of that, but again, most attackers aren't dedicated).

    - Reviewing an application is lower effort than trying to fix the damage from an attack. As someone who gets targeted regularly by attackers from open-registration servers, I'd personally rather skim and reject a page-long AI-generated application than spend another therapy session exploring the trauma of being sent execution videos.

    I believe this points to moderated registration being the lowest effort remedy for the problem of the Nazi Sucker-punch. So before we "engineer a new solution" that doesn't yet exist, we should exhaust the tools that are already available on the platform today. Yes, we could implement rate limits, or shadow bans, or trust networks, or quarantine servers, but we don't have those today, and even if we did, there's no evidence that those would be a better solution for Fedi than moderated signups.

    Will it stop *all* the attackers? No. But it will stop most opportunistic attackers.

    Will it deter *some* potential new users? Yes. But communities are defined by who stays, not by how many come through the door.

    🅰🅻🅸🅲🅴 (🌈🦄) (@alice@lgbtqia.space)

    Why reactive moderation isn't going to cut it, aka, "The Sucker-punch Problem".

    Imagine you invite your friend—let's call him Mark—to a club with you. It's open-door, which is cool, because you like when a lot of folx show up. Sure, it might get a little rowdy, but they have a bouncer, and you've never seen things get out of hand.

    So, you're busy dancing when a new guy walks in wearing an "I Hate Mark" shirt and promptly sucker-punches Mark. You didn't see it happen, but Mark is upset and tells the bouncer, who kicks the guy out. A few minutes later, the same guy walks back in and sucker-punches Mark again. Same result. Some people in the club say they'll tell the bouncer if they see him come in again. Mark wants to leave, but you tell him it's not that bad—after all, you've never been punched, and you didn't see Mark get punched, so maybe he's just being sensitive.

    A different guy walks in wearing an "I Plan On Punching Mark" shirt. No one tells the bouncer, because they've never seen *this* guy punch Mark. He sucker-punches Mark. At this point, Mark is pissed and yelling about being punched. The club members talk about putting up a "No Punching Mark" sign, but the owner is worried it'll hurt his club's growth. Another Mark in the club proposes they turn away anyone wearing an anti-Mark shirt or espousing anti-Mark rhetoric at the door, but this gets shot down for the same reason as the sign idea—then someone sucker-punches him.

    By the end of the night, your friend Mark is beat to fuck and says he'll never come to this club again. In fact, he's going to tell anyone named Mark to steer clear of this place. The next time you go to the club, half the folx there are wearing "I Kill Marks" shirts, but there aren't any Marks there, so it doesn't come up.

    I've been sucker-punched every day for the last three days by some of the most vile hate speech and imagery. The accounts are using open-registration servers and signing up with variations on the username "heilhitler1488". I fully expect it'll continue as long as we have open-registration servers. And no, username pattern blocking alone won't fix this; it'll help a little, but mostly it'll just make them wear a different shirt while they sucker-punch us. #OpenRegistrationHurts
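The "different shirt" point about username pattern blocking can be made concrete with a toy sketch (the patterns below are hypothetical illustrations, not any real server's blocklist):

```python
import re

# A naive username blocklist of the kind server admins can configure.
# These patterns are hypothetical examples.
BLOCKED_PATTERNS = [
    re.compile(r"heil", re.IGNORECASE),
    re.compile(r"1488"),
]

def is_blocked(username: str) -> bool:
    """Return True if the username matches any blocklist pattern."""
    return any(p.search(username) for p in BLOCKED_PATTERNS)

# The blocklist catches the obvious variants...
print(is_blocked("heilhitler1488"))   # True
print(is_blocked("HEILhitler-1488"))  # True

# ...but a trivial character swap is a "different shirt" that walks right in:
print(is_blocked("h3ilh1tler"))       # False
```

The defender has to anticipate every leetspeak and spelling variant, while the attacker only has to find one that slips through, which is why pattern blocking alone stays reactive.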


The Orange Theme
#4

@alice I used to get my hair cut at a place that was just far enough away, and with enough traffic jams on the way each time, that I stopped going. It's not "far", by any means, but it was just on the cusp of being annoying. Once it became juuuust too much, I went somewhere closer.

    I think people underestimate how low the bar can be to prevent bad actors. Even the guy scripting his nonsense will hit an application form and immediately leave to find an open instance, most of the time.

Marianne
#5

@alice Can recommend this piece on #PunchNazis by the lovely Tauriq; I've had it bookmarked for years. https://www.theguardian.com/science/brain-flapping/2017/jan/31/the-punch-a-nazi-meme-what-are-the-ethics-of-punching-nazis


kauer
#6

        @alice Tightly argued. Nice.

        To the concern that it might deter some new users, I would add "yes, but if the alternative is lots more evil arseholes, it's a minor downside - especially as it is really only a downside for lazy new users".

        We already have moderated follows at the user level; having moderated signups at the server level seems like a no-brainer.


Androcat
#7

          @alice

          This is also well-known from hacker circles.

          The vast majority of malicious hackers only look for low-hanging fruit.

          Dedicated hackers looking to penetrate a well-composed org? Very rare. And completely different from the bulk. This is what red team sessions are for.


Androcat
#8

            @kauer @alice

            Yeah. Having a place infested with malicious dicks is also going to deter people.

            I am sure there is a sweet spot in optimizing between cumbersome defenses and being a trash pit.

            • 🅰🅻🅸🅲🅴  (🌈🦄)A 🅰🅻🅸🅲🅴 (🌈🦄)

              A follow-on to my "Nazi Sucker-punch Problem" post, to address the most common argument I get, which boils down to:

              """
              Moderated registration won't stop Nazis, because they'll just pretend to be human to fool moderators, but it will stop normal people, who won't spend the effort to answer the application question or want to wait for approval.
              """

              Okay, I'm going to try to use points that I hope are pretty acceptable to anyone arguing in good faith, and I'm going to expand the definition of Nazis to "attackers" and lump in bigots, trolls, scammers, spammers, etc. who use similar tactics.

              Attackers: we can group attackers into two main types: dedicated and opportunistic. Dedicated attackers have a target picked and a personal motive—they hunt. Opportunistic attackers have an inclination and will attack if a target presents itself—they're scavengers. In my years of experience as an admin on multiple Fedi servers, most attackers are opportunistic.

              Victims: when someone is attacked, they (and people like them) will be less likely to return to the place they were attacked.

              In general: without a motive to expend more effort, humans will typically make decisions that offer the best perceived effort-to-reward ratio in the short-term (the same is true of risk-to-reward).

              Why does any of this matter?

              Because it all comes down to a fairly simple equation for the attackers: effort > reward. If this is true, then the opportunistic attackers will go elsewhere. If it isn't true, then their victims will go elsewhere.

              How can we tip that scale out of the attackers' favor?

              By making sure moderation efforts scale faster against attackers' behaviors than against normal users' behaviors.

              - A normal user only has to register once, while an attacker has to re-register every time they get suspended.

              - A normal user proves their normality with each action they take, while every action an attacker takes risks exposing them to moderation.

              - A new user / attacker likely spends a minute or two signing up, while a moderator can review most applications in a matter of seconds. Yes, attackers can automate signups to reduce that effort (and some do, and we have tools to address some of that, but again, most attackers aren't dedicated).

              - Reviewing an application is lower effort than trying to fix the damage from an attack. As someone who gets targeted regularly by attackers from open-registration servers, I'd personally rather skim and reject a page-long AI-generated application, than spend another therapy session exploring the trauma of being sent execution videos.

              I believe this points to moderated registration being the lowest effort remedy for the problem of the Nazi Sucker-punch. So before we "engineer a new solution" that doesn't yet exist, we should exhaust the tools that are already available on the platform today. Yes, we could implement rate limits, or shadow bans, or trust networks, or quarantine servers, but we don't have those today, and even if we did, there's no evidence that those would be a better solution for Fedi than moderated signups.

              Will it stop *all* the attackers? No. But it will stop most opportunistic attackers.

              Will it deter *some* potential new users? Yes. But communities are defined by who stays, not by how many come through the door.

              🅰🅻🅸🅲🅴 (🌈🦄) (@alice@lgbtqia.space)

              Why reactive moderation isn't going to cut it, aka, "The Sucker-punch Problem". Imagine you invite your friend—let's call him Mark—to a club with you. It's open-door, which is cool, because you like when a lot of folx show up. Sure, it might get a little rowdy, but they have a bouncer, and you've never seen things get out of hand. So, you're busy dancing when a new guy walks in wearing an "I Hate Mark" shirt and promptly sucker-punches Mark. You didn't see it happen, but Mark is upset and tells the bouncer, who kicks the guy out. A few minutes later, the same guy walks back in and sucker-punches Mark again. Same result. Some people in the club say they'll tell the bouncer if they see him come in again. Mark wants to leave, but you tell him it's not that bad—after all, you've never been punched, and you didn't see Mark get punched, so maybe he's just being sensitive. A different guy walks in wearing an "I Plan On Punching Mark" shirt. No one tells the bouncer, because they've never seen *this* guy punch Mark. He sucker-punches Mark. At this point, Mark is pissed and yelling about being punched. The club members talk about putting up a "No Punching Mark" sign, but the owner is worried it'll hurt his club's growth. Another Mark in the club proposes they turn away anyone wearing an anti-Mark shirt or espousing anti-Mark rhetoric at the door, but this gets shot down for the same reason as the sign idea—then someone sucker-punches him. By the end of the night, your friend Mark is beat to fuck and says he'll never come to this club again. In fact, he's going to tell anyone named Mark to steer clear of this place. The next time you go to the club, half the folx there are wearing "I Kill Marks" shirts, but there aren't any Marks there, so it doesn't come up. I've been sucker-punched every day for the last three days in a row by some of the most vile hate speech and imagery.
The accounts are using open-registration servers and signing up with variations on the username "heilhitler1488". I fully expect it'll continue as long as we have open-registration servers. And no, username pattern blocking alone won't fix this; it'll help a little, but mostly it'll just make them wear a different shirt while they sucker-punch us. #OpenRegistrationHurts


              LGBTQIA.Space (lgbtqia.space)

              gkrnours
              wrote last edited by
              #9

              @alice That a single post from a malicious actor can do so much harm, and that we have solutions available right now to prevent real harm, are two great points

                Dr Neenah Estrella-Luna
                wrote last edited by
                #10

                @alice "... communities are defined by who stays, not by how many come through the door."

                This is a beautiful line and apropos of many situations. I will be adding it to my book of really useful ideas.

                Thank you.

                  Dave Alvarado
                  wrote last edited by
                  #11

                  @alice for the readers (I know Alice knows all this):

                  Don't let the perfect be the enemy of the good. Any attempt to keep out the attackers is better than no attempt at all.

                  There's such a thing as defense-in-depth. Your registration process doesn't need to stop every attacker at the door. You have moderation tools. You have blacklists. You have defederation. You have TBS. Every attacker-stopper you add makes your instance safer.

                  Don't give up. Fight back.

                    MayaMayaMaya
                    wrote last edited by
                    #12

                    @alice It's very worth repeating, but wow, it's frustrating going over this again and again. Nothing about what you're saying has changed in the 30-odd years since sites with easy signup became a thing, yet every new platform launches with wide-open signups from the start and is "totally shocked" when that leads to the same problems that have been happening for decades.

                      Kim Possible :kimoji_fire:
                      wrote last edited by
                      #13

                      @alice The harm caused when attackers are not screened never really goes away. I got attacked by Nazis on Twitter. I still feel it.

                        Just Tom... 🐁
                        wrote last edited by
                        #14

                        @kimlockhartga @alice I hated (absolutely hated) the "sticks and stones" stuff at school, knowing full well the damage that words can do. The pen might be mightier than the sword, but the damage from a comment can last just as long, if not longer, and it cuts deep.

                        • 🅰🅻🅸🅲🅴  (🌈🦄)A 🅰🅻🅸🅲🅴 (🌈🦄)

                          A follow-on to my "Nazi Sucker-punch Problem" post, to address the most common argument I get, which boils down to:

                          """
                          Moderated registration won't stop Nazis, because they'll just pretend to be human to fool moderators, but it will stop normal people, who won't spend the effort to answer the application question or want to wait for approval.
                          """

                          Okay, I'm going to try to use points that I hope are pretty acceptable to anyone arguing in good faith, and I'm going to expand the definition of Nazis to "attackers" and lump in bigots, trolls, scammers, spammers, etc. who use similar tactics.

                          Attackers: we can group attackers into two main types: dedicated and opportunistic. Dedicated attackers have a target picked and a personal motive—they hunt. Opportunistic attackers have an inclination and will attack if a target presents itself—they're scavengers. In my years of experience as an admin on multiple Fedi servers, most attackers are opportunistic.

                          Victims: when someone is attacked, they (and people like them) will be less likely to return to the place they were attacked.

                          In general: without a motive to expend more effort, humans will typically make decisions that offer the best perceived effort-to-reward ratio in the short-term (the same is true of risk-to-reward).

                          Why does any of this matter?

                          Because it all comes down to a fairly simple equation for the attackers: effort > reward. If this is true, then the opportunistic attackers will go elsewhere. If it isn't true, then their victims will go elsewhere.

                          How can we tip that scale out of the attackers' favor?

                          By making sure moderation efforts scale faster against attackers' behaviors than against normal users' behaviors.

                          - A normal user only has to register once, while an attacker has to re-register every time they get suspended.

                          - A normal user proves their normality with each action they take, while every action an attacker takes risks exposing them to moderation.

                          - A new user / attacker likely spends a minute or two signing up, while a moderator can review most applications in a matter of seconds. Yes, attackers can automate signups to reduce that effort (and some do, and we have tools to address some of that, but again, most attackers aren't dedicated).

                          - Reviewing an application is lower effort than trying to fix the damage from an attack. As someone who gets targeted regularly by attackers from open-registration servers, I'd personally rather skim and reject a page-long AI-generated application, than spend another therapy session exploring the trauma of being sent execution videos.

                          I believe this points to moderated registration being the lowest-effort remedy for the problem of the Nazi Sucker-punch. So before we "engineer a new solution" that doesn't yet exist, we should exhaust the tools that are already available on the platform today. Yes, we could implement rate limits, or shadow bans, or trust networks, or quarantine servers, but we don't have those today, and even if we did, there's no evidence that they would be a better solution for Fedi than moderated signups.

                          Will it stop *all* the attackers? No. But it will stop most opportunistic attackers.

                          Will it deter *some* potential new users? Yes. But communities are defined by who stays, not by how many come through the door.

                          🅰🅻🅸🅲🅴 (🌈🦄) (@alice@lgbtqia.space)

                          Why reactive moderation isn't going to cut it, aka, "The Sucker-punch Problem".

                          Imagine you invite your friend—let's call him Mark—to a club with you. It's open-door, which is cool, because you like when a lot of folx show up. Sure, it might get a little rowdy, but they have a bouncer, and you've never seen things get out of hand.

                          So, you're busy dancing when a new guy walks in wearing an "I Hate Mark" shirt and promptly sucker-punches Mark. You didn't see it happen, but Mark is upset and tells the bouncer, who kicks the guy out. A few minutes later, the same guy walks back in and sucker-punches Mark again. Same result. Some people in the club say they'll tell the bouncer if they see him come in again. Mark wants to leave, but you tell him it's not that bad—after all, you've never been punched, and you didn't see Mark get punched, so maybe he's just being sensitive.

                          A different guy walks in wearing an "I Plan On Punching Mark" shirt. No one tells the bouncer, because they've never seen *this* guy punch Mark. He sucker-punches Mark. At this point, Mark is pissed and yelling about being punched. The club members talk about putting up a "No Punching Mark" sign, but the owner is worried it'll hurt his club's growth. Another Mark in the club proposes they turn away anyone wearing an anti-Mark shirt or espousing anti-Mark rhetoric at the door, but this gets shot down for the same reason as the sign idea—then someone sucker-punches him.

                          By the end of the night, your friend Mark is beat to fuck and says he'll never come to this club again. In fact, he's going to tell anyone named Mark to steer clear of this place. The next time you go to the club, half the folx there are wearing "I Kill Marks" shirts, but there aren't any Marks there, so it doesn't come up.

                          I've been sucker-punched every day for the last three days by some of the most vile hate speech and imagery. The accounts are using open-registration servers and signing up with variations on the username "heilhitler1488". I fully expect it'll continue as long as we have open-registration servers. And no, username pattern blocking alone won't fix this; it'll help a little, but mostly it'll just make them wear a different shirt while they sucker-punch us. #OpenRegistrationHurts


                          LGBTQIA.Space (lgbtqia.space)

                          ZumbadorZ This user is from outside of this forum
                          Zumbador
                          wrote last edited by
                          #15

                          @alice the more servers have moderated registration, the less friction it will cause, as it becomes just a normal, expected part of signing up.

                          • 🅰🅻🅸🅲🅴  (🌈🦄)A 🅰🅻🅸🅲🅴 (🌈🦄)


                            TwotiredT This user is from outside of this forum
                            Twotired
                            wrote last edited by
                            #16

                            @alice This argument hits the Mark.

                            • 🅰🅻🅸🅲🅴  (🌈🦄)A 🅰🅻🅸🅲🅴 (🌈🦄)


                              soloS This user is from outside of this forum
                              solo
                              wrote last edited by
                              #17

                              @alice

                              Moderated registration won't stop Nazis, because they'll just pretend to be human to fool moderators, but it will stop normal people, who won't spend the effort to answer the application question or want to wait for approval.

                              on its face this is just an awful argument, like ???

                              99.9% of nazis won't even bother doing that... so it weeds out the vast majority of them

                              and that's what you have other moderation practices for!!

                              • 🅰🅻🅸🅲🅴  (🌈🦄)A 🅰🅻🅸🅲🅴 (🌈🦄)


                                BillW This user is from outside of this forum
                                Bill
                                wrote last edited by
                                #18

                                @alice

                                In one of the comments, I read defederation as defenestration

                                • 🅰🅻🅸🅲🅴  (🌈🦄)A 🅰🅻🅸🅲🅴 (🌈🦄)


                                  MaryMarasKittenBakeryM This user is from outside of this forum
                                  MaryMarasKittenBakery
                                  wrote last edited by
                                  #19

                                  @alice
                                  Much love for all of your efforts and those of all moderators, you make this place what it is
                                  🥰🥰

• Just Tom... 🐁

                                    @kimlockhartga @alice I hated (absolutely hated) the "Sticks and stones" stuff at school, knowing full well the damage that words can do. The pen might be mightier than the sword, but the damage from a comment can last just as long, if not longer, and it cuts deep.

                                    Wolf
                                    #20

                                    @tompearce49 Kim Possible :kimoji_fire: @alice

                                    I dunno man. You ever stab a kid with a Bic pen in the hand for grabbing you and shoving you down into a chair? Because I did once. And the bullies never fucked with me again.


                                      eestileib (she/hers)
                                      #21

                                      @wolfinpdx @tompearce49 @alice

                                      My older brother never stopped, but the school bullies did when I fought back (ludicrously, pathetically).

• Kim Possible :kimoji_fire:

                                        @alice The harm caused when attackers are not screened never really goes away. I got attacked by Nazis on Twitter. I still feel it.

                                        🅰🅻🅸🅲🅴 (🌈🦄)
                                        #22

                                        @kimlockhartga I've been tempted to start collecting the attacks I get and publishing them (with content warnings!) because a thing I hear over and over is:

                                        "Really? I never see stuff like that here."

And these (mostly) white (mostly) guys say the same thing when #BlackMastodon talks about #Racism.

                                        Or when #FemmeFedi talks about #Sexism.

                                        It's like, dude, you don't see it because you're not the target. 😮‍💨

• Twotired

                                          @alice This argument hits the Mark.

                                          🅰🅻🅸🅲🅴 (🌈🦄)
                                          #23

                                          @Twotired oh no! Poor Mark 🥺

Powered by NodeBB Contributors