Wandering Adventure Party


A few days ago, a client’s data center "vanished" overnight.

Uncategorized
Tags: sysadmin, horrorstories, ithorrorstories, monitoring
46 Posts 17 Posters 0 Views
Stefano Marinelli

    A few days ago, a client’s data center "vanished" overnight. My monitoring showed that all devices were unreachable. Not even the ISP routers responded, so I assumed a sudden connectivity drop. The strange part? Not even via 4G.

    I then suspected a power failure, but the UPS should have sent an alert.

    The office was closed for the holidays, but I contacted the IT manager anyway. He was home sick with a serious family issue, but he got moving.

    To make a long story short: the company deals in gold and precious metals. They have an underground bunker with two-meter thick walls. They were targeted by a professional gang. They used a tactic seen in similar hits: they identify the main power line, tamper with it at night, and send a massive voltage spike through it.

    The goal is to fry all alarm and surveillance systems. Even if battery-backed, they rarely survive a surge like that. Thieves count on the fact that during holidays, owners are away and fried systems can't send alerts. Monitoring companies often have reduced staff and might not notice the "silence" immediately.

    That is exactly what happened here. But there is a "but": they didn't account for my Uptime Kuma instance monitoring their MikroTik router, installed just weeks ago. Since it is an external check, it flagged the lack of response from all IPs without needing an internal alert to be triggered from the inside.

    The team rushed to the site and found the mess. Luckily, they found an emergency electrical crew to bypass the damage and restore the cameras and alarms. They swapped the fried server UPS with a spare and everything came back up.

    The police warned that the chances of the crew returning the next night to "finish" the job were high, though seeing the systems back online would likely make them move on. They also warned that thieves sometimes break in just to destroy servers to wipe any video evidence.

    Nothing happened in the end. But in the meantime, I had to sync all their data off-site (thankfully they have dual 1Gbps FTTH), set up an emergency cluster, and ensure everything was redundant.

    Never rely only on internal monitoring. Never.

    #IT #SysAdmin #HorrorStories #ITHorrorStories #Monitoring
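Stefano's closing rule — have something outside the building decide the building has gone silent — boils down to a few lines. This is only an illustrative sketch: he used Uptime Kuma probing the client's MikroTik router, and the function names and placeholder addresses below are invented for the example.

```python
import socket

def reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def site_is_dark(hosts, probe=reachable) -> bool:
    """Every probe failing at once is the interesting signal: it means the
    site's own alerting may itself be dead -- exactly the case that internal
    monitoring can never report."""
    return all(not probe(h) for h in hosts)

# Example wiring (TEST-NET placeholder addresses, not a real site):
# if site_is_dark(["203.0.113.1", "203.0.113.2"]):
#     send_page("site dark: no response from any edge IP")
```

The `probe` parameter is injected so the decision logic is testable without a network; a real deployment would run this from a host outside the monitored site, which is the whole point of the story.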

Elena ``of Valhalla''
#35
@stefano feeling of xkcd 705 intensifies 😄
Stefano Marinelli
#36

      @valhalla totally!

James Seward wrote:

        @rhoot @stefano I have my cronjob scripts touch a file as their final action and my monitoring stuff alarms if the file is too old
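James's pattern — the cron job touches a file as its final action, and monitoring alarms when that file goes stale — can be sketched like this (the path and threshold in the comments are made-up examples, not his actual setup):

```python
import os
import time

def heartbeat_age(path: str) -> float:
    """Seconds since the heartbeat file was last touched."""
    return time.time() - os.path.getmtime(path)

def heartbeat_ok(path: str, max_age_seconds: float) -> bool:
    """True if the heartbeat exists and is fresh. A missing file counts as
    stale, since 'never ran' should alarm just like 'stopped running'."""
    try:
        return heartbeat_age(path) <= max_age_seconds
    except FileNotFoundError:
        return False

# The cron job's final action would be the equivalent of:
#   touch /var/run/nightly-backup.heartbeat
# and the monitor would periodically call something like:
#   heartbeat_ok("/var/run/nightly-backup.heartbeat", 26 * 3600)
```

Giving the threshold some slack over the cron interval (26 h for a daily job, say) avoids false alarms from a slow run.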

Rihards Olups
#37

        @jamesoff @rhoot @stefano When I managed such things in the past, I had the backup script use zabbix_sender to send a value to Zabbix and then alert if that is missing, like you just said.

But after one incident I also added monitoring of the backup size, alerting if it changes by more than 10% from the previous run.

        If backup starts getting failed DB dumps, it's good to know early that "hey, backups just dropped in size by 90%" 🙂

        Also, if a backup suddenly grows a lot, something's weird.
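Rihards's size check — alarm when a backup shrinks or grows by more than ~10% versus the previous run — reduces to a one-function sketch (the threshold mirrors his number; the function name and example figures are illustrative):

```python
def size_change_suspicious(prev_bytes: int, curr_bytes: int,
                           threshold: float = 0.10) -> bool:
    """Flag a backup whose size moved more than `threshold` in either
    direction: a sharp drop suggests failed DB dumps, while a sudden
    jump is equally worth a look."""
    if prev_bytes <= 0:
        return True  # no usable baseline: force a human look
    change = abs(curr_bytes - prev_bytes) / prev_bytes
    return change > threshold

# e.g. last night's dump was 100 GB, tonight's is 10 GB:
# size_change_suspicious(100 * 2**30, 10 * 2**30) flags the 90% drop
```

Checking both directions is the key design choice here: a pure "too small" test would miss the "suddenly grew a lot" case mentioned above.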

James Seward
#38

          @richlv @rhoot @stefano I also do this 🙂

          (https://simplemonitor.readthedocs.io/en/latest/monitors/filestat.html)

Stefano Marinelli
#39

            @luca @valhalla those are terrible! 😆

Wulfy
#40

              @stefano

              You are the hero I aspire to be!

Stefano Marinelli
#41

@n_dimension ahah thank you, but I'm not a hero. I'm just doing my job and checking the alerts.

Kev
#42

                  @stefano Uptime Kuma instance from waaaaay downtown!!!

Bojan Landekić
#43

                    @stefano so refreshing to read a quality tech tale on Mastodon. Thanks for sharing!

Ian Campbell
#44

                      @stefano This is such a good, if niche, example of "paying attention to the fundamentals and the alerts covers all sorts of things you'd never imagine happening."

                      Thanks for sharing.

EnigmaRotor wrote:

                        @stefano Stefano Jones P.A. a very noir series.

Elena Rossini ⁂
#45

                        @EnigmaRotor reading this at lunch in a cafe near my house and I keep chuckling and smiling from ear to ear. @stefano is such a treasure 🙌🏆

Elena Rossini ⁂
#46

                          @stefano you’re a hero Stefano! As your Fedi friend and documentary filmmaker I hope I get preferential treatment when one of your amazing stories gets optioned for a film 🤗

stux shared this topic.

Powered by NodeBB