
Ask HN: Best “I brought down production” story?

 4 years ago
source link: https://news.ycombinator.com/item?id=27644387
Back in the days of MyISAM, and before Google had their own ad network, I worked for the world's largest advertising network. It had a global reach of 75%, meaning three out of four people saw at least one of our ads daily.

I was trying to learn MySQL and the CTO made the mistake of giving me access to the prod database. This huge network that served most of the ads in the world ran off of only two huge servers running in an office outside Los Angeles.

MyISAM uses a read lock on every SELECT query. I did not know this at the time. I was running a number of queries that were trying to pull historical performance data for all our ads across all time. They were taking a long time so I let them run in the background while working on a spreadsheet somewhere else.

A little while later I hear some murmuring. Apparently the whole network was down. The engineering team was frantically trying to find the cause of the problem. Eventually, the CTO approaches my desk. "Were you running some queries on the database?" "Yes." "The query you ran was trying to generate billions of rows of results and locked up the entire database. Roughly three quarters of the ads in the world have been gone for almost two hours."

After the second time I did this, he showed me the MySQL EXPLAIN command and I finally twigged that some kinds of JOINs can go exponential.
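
Roughly what that looks like, with made-up table and database names:

  mysql -e "EXPLAIN SELECT * FROM ads a JOIN clicks c ON a.id = c.ad_id" stats

EXPLAIN prints one row per table with a 'rows' estimate, and when a join condition is missing or unindexed those per-table estimates multiply together -- which is where "billions of rows" comes from, all without actually running the query.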

Kudos to him for never revoking my access and letting me learn things the hard way. Also, if he worked for me I would have fired him.

I’m confused by the last part of your post.

Sounds like you appreciated that your boss gave you space to learn, and understood that you made an honest mistake, but you’d fire someone who made this mistake if they were working for you?

How do you square those two things internally?

It's not good to punish people for making mistakes in the course of their work (especially if that work is meant to be educational)

It is good to punish people who give access to production databases to people who shouldn't have it. And the guy learning MySQL should not be given that access.

Taking down prod is always a symptom of a systemic failure. The person responsible for the systemic failure should see the consequences, not the person responsible for the symptom.

Wait, but if you choose the guy giving access as the mistake made in the course of his work, then what's the new plan?
It's a poetic way of saying the boss was/is a better person than the OP.
Not me, but a colleague - he wanted to look around the system as the `uwsgi` user, so he ran `sudo -u wsgi -s /bin/bash`.

Except that he typoed, and instead ran `sudo -c wsgi -s /bin/bash`. What that does is instead of launching the (-s)hell as the uwsgi (-u)ser, it interprets the rest as a (-c)ommand. Now, `wsgi` is also a binary, and unfortunately, it does support a `-s` switch. It tries to open a socket at that address - or a filesystem path, as the case may be. Meaning that the command (under root) overwrote /bin/bash with 0 bytes.

Within minutes, jobs started failing, the machine couldn't be SSH'd into, but funnily enough, as /bin/bash was the login shell for all users, not even logging in via a tty through KVM worked.

Perhaps not the best story, but certainly a fun way to blow your foot off on a Monday morning :)

`ssh $host /bin/sh` (or another shell) should work?
Logging into another shell would be attempted only if someone knew at the time why the logins were failing.

But thanks, I've just added another technique to my toolbox.

On a Linux box isn’t that just a link to bash?
Now I'm curious how you managed to recover. I only know enough of my way around a shell to be dangerous and I'd be SoL if I ended up in this situation.
Recovery disk, then either copy the disk's copy of bash (if it doesn't depend on a later glibc version), copy another shell to /bin/bash (as the system probably doesn't depend on bash-specific commands to boot), chroot and use the package manager, or use the package manager with an explicit sysroot (e.g. pacman --sysroot). The first two steps are very easy compared to the latter two, but should be followed by a reinstallation of the package that provides bash.
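
A rough sketch of the recovery-disk route (device name, mount point and package manager are illustrative):

  mount /dev/sda2 /mnt                          # the broken system's root filesystem
  cp /mnt/bin/dash /mnt/bin/bash                # stopgap: any working shell makes logins possible again
  chroot /mnt apt-get install --reinstall bash  # then put the real bash back via the package manager
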
That's beautiful. I'm not sure I'd have had a clue what just happened even if it was me making the typo.
Does the proc entry for a running process still link to the now-deleted file in that situation? If so, you might be able to save yourself from a running bash shell by doing a “cat /proc/$$/exe > /bin/bash”
Probably not if it was overwritten (": >/bin/bash") rather than removed and recreated ("rm -f /bin/bash; : >/bin/bash"). The former will cause all processes to see the empty file, the latter would leave processes with access to the old contents.

In this case if you noticed and still had a shell, you could just copy another shell over ("cp /bin/sh /bin/bash"), to at least get back to probably able to login, until you could pull a copy from another machine or backups.

I was a system engineer at Amazon from 2001-2006. Sometime around 2004/2005 or so there was a development team working on the "a9 search engine" (meant to compete with Google) down in SF. They were sort of an official "shadow IT" offshoot and asked for special treatment, and they got me assigned specifically to them to build out the first of their two webservers.

They made the usual mistake of wanting to jettison all the developer tooling and start from scratch. So there was a special request to just install a base O/S, put accounts on the box, and set up a plain old apache webserver with a simple /var/www/index.html (this was well outside of how Amazon normally deployed webservers, which was all customized apache builds and software deployment pipelines and had a completely different file system layout).

They didn't specify what was to go into the index.html files on the servers.

So I just put "FOO" in the index.html and validated that hitting them on port 80 produced "FOO".

Then I handed off the allocated IPs to the networking team to setup a single VIP in a loadbalancer that had these two behind it.

The network engineer brought up the VIP on a free public IP address, as asked.

What nobody knew was that the IP had been a decommissioned IP for www.amazon.com from a year or two earlier, when there was some great network renumbering project, and it had been pointing at a cluster of webservers on the old internal fabric.

The DNS loadbalancers were still configured for that IP address and they were still in the rotation for www.amazon.com. And all they did as a health check was pull GET / and look for 200s and then based on the speed of the returns they'd adjust their weighting.

They found that this VIP was incredibly well optimized for traffic and threw most all the new incoming requests over to these two webservers.

I learned of this when my officemate said "what is this sev1... users reporting 'foo' on the website..."

This is why you always keep it professional, kids...

A friend of mine ran a large and relatively popular (as in at least 30 users online at any given time ...) PvP MUD on a server of mine back in the (late?) 90s.

I didn't play muds and my experience was mostly limited to helping him fix C programming bugs from time to time and fielding an occasional irate phone call from users who got my number off the whois data. But because of the programming help I had some kind of god-access to the mud.

One afternoon I had a ladyfriend over that I was presumably trying to impress and she'd asked about the mud. We hopped on and I summoned into existence a harmless raggedy-ann doll. That was kind of boring so I thought it would be fun to attach an NPC script to it, -- I went through the list and saw something called a "Zombie Lord" which sounded promising. I applied it, and suddenly the doll started nattering on about the apocalypse and ran off. Turned out that it killed everyone it encountered, turning them all into zombie lords, creating an exponential wave of destruction that rapidly took over the whole game.

I found the mental image of some little doll running around bringing on the apocalypse to be just too funny-- until my phone started ringing. Ultimately the game state had to be reverted to a backup from a day or two prior.

[I've posted a couple examples, -- I dunno which one is best, but people can vote. :)]

Oh man. MUDs in the late 90s. Just randomly reminded me of all the time I spent in Medivia (I'm fairly sure that was the one). Those were good times. Surprised the name even came to me since I haven't thought about those in... 20ish years.
That was an annoying time in the game. Not horrible, but you pretty much had to avoid town because people were purposefully trying to spread the plague. Certainly memorable though.
Can it be a story I was involved in but I didn't do it?

I used to work for a major university as a student systems admin. The only thing that was "student" about it was the pay-- I had a whole lab of Sun and SGI servers/desktops, including an INCREDIBLE 1TB of storage-- we had 7x Sun A1000's (an array of arrays) if memory serves.

Our user directories were about 100GB at the time. I had sourced this special tape drive that could do that, but it was fidgety (which is not something you want in a backup drive admittedly). The backups worked, I'd say, 3/4ths of the time. I think the hardware was buggy, but the vendor could never figure it out. Also, before you lecture me, we were very constrained with finances, I couldn't just order something else.

So I graduated, and as such had to find a new admin. We interviewed two people, one was very sharp and wore black jeans and a black shirt-- it was obvious he couldn't afford a suit, which would have been the correct thing to wear. The other candidate had a suit, and he was punching below his weight. Over my objections, suit guy gets hired.

Friday night, my last day of employment I throw tapes into the machine and start a full L0 backup which would take all weekend to complete.

Monday morning I get panicked phone calls from my former colleagues. "The new guy deleted the home directories!"

The suit guy had literally, in his first few hours, destroyed the entire lab's research. All of it. Anyways, I said something to the effect of, "Is the light on the AIT array green or amber?"

"Green."

"You're some lucky sons of bitches. I'll be down in an hour and we'll straighten it out."

Hilarious story, thanks for sharing.

> "Is the light on the AIT array green or amber?"

Can you explain this? What is an AIT array?

As the others have correctly intuited-- it's a tape backup system. We couldn't afford a proper tape library with a robot. This thing was two Sony AIT tape machines that had a special SCSI board that made them look like one single drive to the host. I always assumed the fault was in that SCSI board. The hand-off between tape-1 and tape-2 was what usually failed. The problem might occur 24 hours into a backup, so it was difficult to get good backups. Also, I was not a full time employee (being a student), so I couldn't babysit this thing 5 days a week like a full time employee.

Anyways, thank you for reading my silly story!

I assume the 3/4-reliable tape drive?
I take it deleting everything was an accident, right? Not that the guy applied for a job only to destroy that information... lol
I do not actually know what happened. But something basically resulted in an rm -rf /home and was accidental.

By all reports, he eventually became a well liked and good admin. I just don't think he knew that much Unix when he started.

Back when I was working on proof of correctness, when that was a very new thing, I was using the Boyer-Moore theorem prover remotely on a large time-shared mainframe at SRI International. At the time, you needed a mainframe to run LISP. I was working on proofs of basic numeric functions for bounded arithmetic. So I was writing theorems with numbers such as 65536.

This caused the mainframe to run out of memory, page out to disk, and thrash, bringing other users to a crawl. It took a while to figure out why relatively simple theorems were doing this.

Boyer and Moore explained to me that the internal representation of numbers was exactly that of their constructive mathematics theory. 2 was (ADD1 (ADD1 (ZERO))). 65536 was a very long string of CONS cells. I was told that most of their theorems involved numbers like 1.

They went on to improve the number representation in their prover, after which it could prove useful theorems about bounded arithmetic.

(I still keep a copy of their prover around. It's on Github, at [1]. It's several thousand times faster on current hardware than it was on the DECSYSTEM 2060.)

[1] https://github.com/John-Nagle/nqthm

Long ago this one (close to 2000), but we were hosting some of our clients on machines in our office because it was (a lot) cheaper for our startup to do so. We had 3 rather large (for our company) clients running at the time in the server room in the office. The servers were hooked up to 2nd hand APCs so power failures went unnoticed if they were short. One Friday afternoon, we had some drinks (tgif) and a bunch of us (I was the cto....) were fooling around in one of the rooms throwing tennis balls; I threw one straight into the fire alarm. This was just a glass with a button behind it: if you pressed the glass it would trigger for the entire (20 story) building. This cut the power and switched on the emergency lights. There was no fire obviously, but, per bureaucracy rules, the fire brigade had to pull up, inspect the building, we had to sign docs etc and then they switched off the alarm and switched the power back on. Too late for our APCs: everything was down and (like I said: a long time ago for Linux etc) we had to run fsck and basically spent a large portion of the evening getting it all back. We moved to xs4all colocation after that incident...
On Linux killall lets you kill all processes matching a name.

On Solaris killall kills all processes.

To make matters worse, I used the command on a server with a hung console -- so it didn't apply immediately, but later in the middle of the day the console got unhung and the main database server went down.

Explaining that this was an earnest error and not something malicious to the PHBs was somewhat ... delicate. "So why did you kill all the processes?" "Because I didn't expect it to do that." "But the command name is kill all?" ...
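
A safer habit on Linux (process name here is made up) is to preview the match before sending anything:

  pgrep -l oracle     # list exactly which processes would be hit, by name
  pkill -TERM oracle  # same name matching, with an explicit signal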

Genuine question: what's the usefulness of Solaris' default behavior? Why kill all processes?
It's used in the Solaris init scripts for shutdown, shortly before the system tries to unmount all mounted disk volumes.

I made the same mistake nullc made once, as I was more accustomed to linux than solaris. That was after hours and the effect was immediate, but it was still a pretty jarring and memorable moment.

I oversaw a Solaris machine for a very short period, and never got any reason to use it. But if you can change the signal, sending SIGHUP into everything looks like a reasonable thing to do.

Still, it's not something common enough to deserve its own program.

Yep, and as weird as killall on solaris is-- the naming of killall on linux is kinda weird. It's "killbyname".
HPUX also did the same if I recall. Had to be careful swapping back between it and Linux.
I assume it’s part of the shutdown process. Makes sense in a way: kill -> killall.
Sometimes I use kill -15 -1 on Linux to log me out, but obviously not as root.
This was 20 years ago now - it was my first day in a new job working for a startup.

Our startup was based in the garden office of a large house and the production server was situated in a cupboard in the same room.

The day I started was a cold January day and I’d had to cycle through flooded pathways to get to work that morning - so by the time I arrived my feet were soaked.

Once I’d settled down to a desk I asked if I could plug a heater in to dry my shoes. As we were in a garden office every socket was an extension cable so I plugged the heater in to the one under my desk.

A few minutes later I noticed that I couldn’t access the live site I’d been looking through - and others were noticing the same.

It turned out the heater I was using had popped the fuse on the socket. The extension I was using was plugged into the UPS used by the servers. So the battery had warmed my feet for a few minutes before shutting down and taking the servers down too.

And that’s how I brought production down within 3 hours of starting my first job in the web industry…

One Tuesday morning, my code brought down 67 restaurants piloting the new point-of-sale (POS) system.

I'd written the code to reformat the mainframe database of menu items, prices, etc, to the format used by the store systems. I hadn't accounted for the notion that the mainframe would run out of disk space. When the communications jobs ran, a flock of 0-byte files were downloaded to the stores. When the POS systems booted with their 0-byte files, they were... confused. As were the restaurant managers. As were the level 1, level 2, vendor, and executive teams back at headquarters. Once we figured it out, we re-spun the files, sent them out, and the stores were back in business. I added a disk space check, and have done much better with checking my return codes ever since.
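
Not the original mainframe job, but the shape of that guard, sketched in shell with a made-up path and threshold:

  avail_kb=$(df -Pk /export/menus | awk 'NR==2 {print $4}')
  if [ "$avail_kb" -lt 1048576 ]; then
      echo "less than 1GB free on /export/menus, refusing to generate store files" >&2
      exit 1
  fi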

Nothing crazy, but something I always laugh at.

I was so excited to meet a legit/professional dev team the first day of my career.

I was paired with a Sr dev and sat in his cubicle so he could show me some backend update to a prod app with 20K internal users... "normally I'd run this on the dev server, but it's quick & easy so I'll just do it on prod"

...watched him crash the whole thing & struggle the rest of the day to try and bring it up. I just sat there in awe, especially as everyone came running over and the emails poured in, while the Sr Dev casually brushed it all aside. He was more interested in explaining how the mobile game Ingress worked.

First time I got into the computer room back in the days when one unix mini-computer ran hundreds of terminals. I was asked to put a reel tape into the drive to load a tar archive into the Oracle database.

I couldn’t get the tape drive door open so I looked around and saw a key next to the door.

That didn’t open the door either. I was stood there scratching my head when the double doors burst open and half a dozen sysadmins came running in like a SWAT team.

I was a bit surprised until I glanced down and notice all the lights were off.

Yes the key that turned the power off had been left in the machine.

I was testing backups for a CMS; to test, I had to destroy the test database. I destroyed the production database. While my manager was querying the CMS to do my annual review.

Good news is I was already planning on restoring the test database from the production backup, so I had the database up in under 45 minutes (slower than it should have been because Oracle's docs were flat out wrong).

A more senior engineer told me he was impressed by how quickly I got things running again; apparently in the Bad Old Days (i.e. a year before I started) the database went down and, while everybody was pretty sure there were backups somewhere, nobody was sure where; customer interactions were tracked by pen and paper for almost 3 business days while this was figured out.

Six hours before a trans-Pacific flight on Friday afternoon, my key production database started experiencing high latency (PostgreSQL on Heroku). We had recently installed an add-on to sync data from the DB to Fivetran. The three most experienced engineers, including myself, paired to remove the add-on to ensure the weekend was drama-free, and instead deleted the entire database - as a result of a UX issue within the Heroku console.

Recovery took 30 minutes (through Heroku support, as Heroku did not allow backups-via-replication outside their own backup system), but that was a very long 30 minutes.

Second worst was a cleanup system that removed old CloudFormation stacks automatically by only retaining the latest version of a specific stack. Deployed a canary version of an edge (Nginx+Varnish) stack for internal testing. Cleanup script helpfully removed the production stack entirely.

2 days before I got married, I dropped the production database by accident from a GUI tool where “right-clicking” can be destructive if you click too fast. The application scheduled radio and television commercials within 48 US states for a large international advertising group. The bigger problem was that the DBA had only been doing incremental backups and didn’t have a full backup against which to run the incremental backups. He had never created a full backup.

Fortunately, given the nature of media buys in that time, all placements were printed and faxed. My team sent me to my wedding rehearsal dinner and spent the next two days collecting printed orders and re-keying them into the system.

I am forever grateful to that team.

Backups aren't important, restores are.
Near miss: my first job I was working on a CRUD app for a huge bank. I was dumb and it was early in the enterprise era of software and I had built my own simple O/R tool based on codegen. Not a terrible tool all things considered and I was pretty pleased with myself.

One night in bed I realized that if someone hit submit on the delete screen without filling in any criteria it would just delete the whole database.

Not a fun drive in.

Yes, we drove in in those days.

And this is why we use things like database users that don't have delete permission, and row-level security so users can't delete things that don't belong to them.

I have learned this from a very similar experience.

And why it’s often best to mark a record deleted and then have a reaper remove the records at a later point. But you must make sure all normal queries don’t see deleted items.
That's over-engineering. At this point, just rely on PITR. FWIW, postgres does have a "reaper" via vacuum, but not for the purpose of safety, rather to allow for MVCC.
> One night in bed I realized that if someone hit submit on the delete screen without filling in any criteria it would just delete the whole database.
Lack of validation?

I.e if no criteria, it could be sending a DELETE message with no where clause in SQL land.

I remember sitting next to someone who screamed after realising he just did a 'DELETE FROM company' without a WHERE clause. Our database was way too big to backup, so we only had production, but luckily I had rolled out database logging in the interface that recorded all UPDATEs and DELETEs a few weeks before the event.
I had a DBA call me on my way back from lunch absolutely sobbing after fat fingering a semicolon before the where clause while logged in to Prod as root.

We got things restored and back online in a couple of hours. I let her go home afterwards, heh, she had suffered enough.

I was taught to also use a transaction and then check how many rows were affected before committing.
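
Something like this at the mysql prompt (table and WHERE clause are just for illustration):

  mysql> BEGIN;
  mysql> DELETE FROM company WHERE id = 42;
  Query OK, 1 row affected (0.01 sec)
  mysql> COMMIT;  -- only once the affected-row count looks sane; otherwise ROLLBACK
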
When your database is too big to backup that's a lot like a bank being too big to fail. Unwise long term strategy.
Terabytes across 25+ databases not co-located and all being masters replicating to each other, in 2009. Not sure how you would have done it.
Easy: no form input no condition on the delete

aaand it’s gone

Back in the Windows Server 2003 days, when you hit "Shutdown" there was a screen that popped up to ask you what you wanted to do. Shutdown, Restart, Etc.

I hit Shutdown, instead of restart. The server was in a colo a thousand miles away. Unfortunately, at that time, it was a colo that didn't have overnight staff on site (on call yes, but not on site). Also we had no IPMI.

I had to page some poor dude at 1am to drive 30 min (each way) into the colo to push the "on" button. I felt terrible. (Small company, it was our one critical production server)

It was 1985, I was in a VAX computer lab with about 40 other people typing on the VT100 terminals... I ran a program to compute something, and forgot that I had bumped my priority wayyy up... everything in the room stopped, even the line printer, everyone went ohhhhhh.

10 seconds later, my program finished... and everything snapped back to life.

Another time, I walked into a different, bigger lab, with 100 terminals... snooped around the system, saw that the compiler queue had about 40 minutes of entries... bumped the first one up a bit (the queue was set to lower priority than any of the users, which was a mistake)... it finished in 2 seconds, instead of 2 minutes...

15 minutes later, the queue was empty, 30 minutes after that the room was empty, because everyone had gotten their work done.

Wait, what? So the system was purposefully hindering everyone's productivity?

Sounds more like a "I brought up production" story...

No, it was just a bad configuration choice by the system administrator.
The deployment process for a site I used to work on would entail changing the owner of a folder. You would change the owner from the web server to your user, upload files, and then change it back.

sudo chown -R www-data:www-data [folder]

I’d made some changes and was ready to update the owner, only I was inside the folder that needed updating. In the moment I decided the correct way to refer to that folder was "/".

I noticed the command was taking far longer than usual to execute. I realised the mistake but by then the server was down with no way to bring it back up.
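
A small guard in the deploy step would have caught it (the path is hypothetical; the point is refusing an empty or bare "/" target):

  target="/var/www/site"   # hypothetical deploy path
  case "$target" in
    ""|"/") echo "refusing to chown $target" >&2; exit 1 ;;
  esac
  sudo chown -R www-data:www-data "$target"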

Sent in a SQL command deleting multiple million records (intended) wrapped in a single transaction (not intended). The replication queues could not keep up and failed, bringing down most of the replicas. The master server kept trying to recover and maxed out all connections - no DBA could log in to perform manual recovery. We had to hard reboot, not knowing what state the system was in and how long it would take to fully recover. Did I mention that this was a few hours before trading was going to begin?

TBH, my team was very gracious about it and the RCA focused purely on the events that occurred and how to never let it happen again. No blame game at all.

> TBH, my team was very gracious about it and the RCA focused purely on the events that occurred and how to never let it happen again. No blame game at all.

Which is how a PIR, PER or PCR should be. If you don't understand why someone makes a mistake, you can't avoid future mistakes.

This was long ago and the details are hazy.

Back in 2003 or so, I was in tech support for a company that used desktop computers running java applets to connect to a mainframe via Telnet (IBM Host-on-Demand IIRC). Most of the core business processes were handled by mainframe apps, which the company largely developed. I used to hang out in the data center with the mainframe guys who coded in COBOL all day.

On a Friday afternoon, I was working on testing deployment of an update to the java terminal client applet. Everything seemed to work fine in testing, and it was a minor update, so (idiot me) I went ahead and pushed it to the server.

Shortly after I pushed it out, the mainframe guys' phones started ringing with complaints that the mainframe was down. Then my phone started ringing. Then all of the phones started ringing.

Turns out, something I did in the update (I honestly can't remember the specifics now) reset every local user's mainframe connection information for the applet. Across the whole company. So as soon as they exited the applet, they couldn't get back in.

That was a fun weekend.

I worked in online advertising and pushed infinite loops that froze browsers to millions of unsuspecting victims.

On another occasion I had a division operation happen on integers instead of floats, and the code was running on some hardware that steered antennas for radios on airplanes. Much time was spent by pilots flying in circles over LA while I gathered data and found the "oops". It was fixed by adding a period to an int literal.

On another occasion my machine learning demo API failed due to heavy load, but only when India's prime minister was looking at it.

Many years ago (decades, in fact) as a fresh new excuse for a unix admin, I needed to hide the passwd binary so that users couldn't find it and change their local password on the terminal box (this was early ISP days). SunOS 4.1.4, as I recall.

Anyway, I hid that binary. In /etc, where they'd never think to look.

Gosh we do some dumb things, eh? LOL. That took a while to find a solution for, and no small amount of luck. The owner of the ISP walked back in the office a couple hours later and said "I heard you had some excitement?" I said, "Oh yes, it was pretty ugly for a bit." "Is it fixed now?" "Yup." "Carry on."

For sure thought my ass was fired and I'd only been on the job a month or so.

I don't get it - how would moving that binary break a running system? Is that binary somehow involved in something else beyond password changes?
/etc/passwd contains the user database on most Un*x systems. GP replaced it with the executable file, thus wiping out the system's users. Ouch.
> Ouch.

Ouch, indeed. We ended up getting lucky and found a workstation where someone had left themselves at a root prompt on another machine that had a shared NFS mount. This was before protection from this kind of attack, so we were able to create a setuid root script and run it on the main server to get root access to fix the broken passwd file.

Our next step was going to be rebooting the server. We were pretty sure that faced with a corrupt passwd file, SunOS would drop to single user mode. Never tested that theory. Glad we didn't have to, the server in question was a hack job as it was. Copied over (literally, as files) from a previous server, it wasn't even 100% in agreement with itself on its own hostname, so I always kinda wondered how it would react to any big changes.

Why did you write it Un*x? Is there a Unex or Unox?

I've seen it written *nix to grab Linux and Unix.

That has precedent going way back, at least 34 years:

https://unix.stackexchange.com/questions/2342/why-is-there-a...

Doesn't explain why exactly the asterisk was put in that particular position. Maybe someone felt like it was odd to lead the word with an asterisk. :shrug:

I did the reverse. Using DOS batch scripts I wrote, I imported orders twice for a 100-person generic drug company. It was a heavy set of orders as it was. Everyone in the warehouse had to work late that day. Sales guys loved it because it was near the end of the month. No one at the company seemed to mind the mistake. I was mystified that I did not get in trouble. They had tight relationships with their customers and just shipped less over the next month.
(Not me, but someone I worked with)

My first job out of school, working away from home and learning the ropes of embedded software.

The office was using on-premises databases, email servers and the like, as was somewhat common at the time, but nothing much more than a few robustified PCs and some networking infra. We were having internet problems, being too far away from the exchange, and so the telephone company was coming in to replace the exchange over the weekend, so everything was shut down on the Friday night.

Monday morning comes by and we boot things up again, but no connectivity… Office is dissolving into chaos as phones were also down. British Telecom is demanded to return this very minute and figure it out!

An hour later a very flustered gentleman turns up and begins to debug a few sockets but finds them all dead. 1 minute later he is at the new exchange (that was inside our office), only to emerge from the room after 30 seconds looking extremely confused.

It turns out Dave, an extremely helpful chap who was in charge of some product final assembly, had turned up at the office as normal at 7am and thought he would helpfully uninstall the old exchange and throw it in the skip we had rented for just that purpose. A quick wander around to said skip found the exchange in there with a bunch of wiring - the helpful chap had really gone to town on this. Sadly, I was quick to identify that this was the new exchange, not the old, simply by observing how fresh it looked, and the BT chap came over to confirm. Because of the damage that had been done to the wiring, it was not trivial to simply wire back the old exchange and so that was the end of office operations for a week.

A small company meeting was held where it was announced that “an error of judgement” had occurred and that we were to have some vacation - much of the in office equipment was taken offsite to get temporary connectivity so that sales could continue whilst we vacationed. Internet remained terrible until I left that gig, now blamed on all the wire patches needed to get the office back on line.

I'm in the fortunate position of having been able to tell our story in detail on our blog after a major outage involving Cassandra and bootstrap behaviour that we didn't fully understand. This is a story of how I brought down the bank for two hours.

https://monzo.com/blog/2019/09/08/why-monzo-wasnt-working-on...

In summary, we were scaling up our production Cassandra data store and we didn't migrate/backfill the data properly which led to data being 'missing' for an hour.

In a typical Cassandra cluster when scaled up, data moves around the ring a single node at a time. When you want to add multiple nodes, this can be an extremely time and bandwidth consuming process. There's a flag called auto_bootstrap which controls this behaviour. Our understanding of the behaviour was that the node would not join the cluster until operators explicitly signalled for it to do so (and this is a valid scenario because, as an operator, you can potentially backfill data from backups for example). Unfortunately it was completely misunderstood when we originally changed the defaults many months prior to the scale up.
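
For reference, the flag lives in cassandra.yaml on the joining node (the path below is the common package default, and this is the behaviour as I understand it, so verify against your version):

  grep auto_bootstrap /etc/cassandra/cassandra.yaml
  # auto_bootstrap: false -> the node claims its token ranges and serves reads without first
  #                          streaming the existing data for those ranges (reads can come back empty)
  # auto_bootstrap: true (the default) -> the node streams data for its ranges before serving reads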

Fortunately, we were able to detect data inconsistency within minutes of the original scale up and we were able to fully revert the status of the ring to its original state within 2 hours (it took that long because we did not want to lose any new writes, so we had to carefully remove nodes in the reverse order that they came in and joined the ring).

Through a mammoth effort by the engineering team across two days, we were able to reconcile the vast majority of inconsistent data through the use of audit events.

This was a mega stressful day for everyone involved. On the plus side though, I've had a few emails telling me that the blog post has saved others from making a similar mistake.

That’s a great writeup, thanks for all the detail!

I was always worried about something like this happening so only ever provisioned (via ansible) one server at a time. When the logs showed it was fully synced, we provisioned the next node. It could take two days to add 10 nodes but I always felt much safer

Server running bind with multiple people jumping in and making edits to the config. Somebody goes in and makes an edit but never reloads the service. After that change I went in with my own change - my change was very minor and I knew it was correct, so like a fool I didn't run a syntax check - and then I reloaded the service. I didn't even check after the fact to make sure it was still running.

Narrator: Bind was not running.

Down goes a media organizations web site.

I had been messing around with some new tool to generate bespoke weird packets all day when in the afternoon I was introduced to my new intern.

I was talking about what garbage our ethernet switches were (this was one of the earliest L3 switches-- full of blue wire bodges and buggy firmware), and how I'd already encountered a dozen different ways of crashing them.

While typing I started saying "and I bet if I send it an ICMP redirect from itself (typing) to itself (typing) it won't like that at all! (enter)" --- and over the cube partitions I hear the support desk phones start ringing. Fortunately it was a bit after the end of the day and it didn't take me long to reboot the switch.

I didn't actually expect it to kill it. I probably should have.

We pushed a CDN config that triggered a CDN provider bug. Took down the CDN's entire presence on one continent. Broke a whole bunch of recognizable sites for a bit.
Are you referring to the Fastly outage that happened a few weeks ago, or is this more common than I realize?
Although, the recent Fastly outage spanned more than one continent
3 years ago. Took on a large-traffic media site running on Wordpress. The usual good stuff - outdated plugins, running on a cheap VPS box, a hosting vendor that washed their hands of it saying they couldn't handle the tech support and traffic.

I was working in a different company back then and I was contacted by this consultancy that specialized in Google Cloud (which was always my personal favorite anyway). I was offered very handsome pay for what seemed to be just 4 days' worth of work. To me, it sounded really simple, like get in, migrate and get out.

After I signed the contract and everything, I got to know the client had been promised a $200/mo budget and a very wrong technical solution proposal by an engineer in the consultancy, down from what they were paying then, which was definitely multiples of what was quoted. And to make matters more interesting, this guy just quit after realizing his mistake. And that's how I even got this project.

So, I went in, tried many cost effective combinations including various levels of caching and BAM!, the server kept going down. They had too much traffic for even something like Google's PaaS to hold (it has autoscaling and all the good stuff, but it would die even before it could autoscale!). Their WP theme wasn't even the best and made tons of queries for a single page load. Their MySQL alone cost them in the 1000s. So, I put them on a custom regular compute box, slapped some partial caching on bits they didn't need and managed to bring the cost to slightly higher than what they were paying with their previous cheap hosting company. All this led to a 4 hour downtime.

I apologized to them profusely and built them a CMS from scratch that held their traffic and more and dropped their cost to 1/4th of what their competitors are paying. Today, this client is one of my best friends. They went from "Fuck this guy" to "Can we offer you a CTO role?" :)

I make it sound like it's so easy, but it was almost a year-long fight, bundled with lots of humiliation for something I didn't do, just to earn their trust and respect. To this date, they don't know about the ex-consultant's screw up.

In retrospect, this downtime is the best thing that happened to me and helped me to understand how you handle such scenarios and what you should do and not to do. In such situations it is tempting to blame other people around you, but in the long term, it pays off if you don't and solve it yourself.

This was many years ago, as a junior dev .. management was stressing out over how to make our app faster -- long query runtimes. Naively, I pitched that we should run the queries in advance so they would be cached. Simple enough. We did some dry runs. It looked good to go. We pushed to prod. It's Sunday night and I'm asleep, the query runner activates .. and our app proceeds to DDoS our data layer. Not only taking down our prod, but the prod of every app subscribed to the data store.

I wasn't oncall, and the oncall didn't have access to the query runner script -- it was on my laptop. So, oncall was desperately trying to fight a fire they couldn't put out while I slept like a baby .. that was a fun Monday morning meeting.

The year is 2002, OS is Solaris, trying to compile some httpd add-on straight on the production server (because why not) kept giving some weird error about /etc/ld.so not being right. So junior me does:

$ rm /etc/ld.so*

I ran a bunch of scripts on some aux boxes, and Tokyo stopped working. Can't talk details... sadly
Reading through these stories, every one of which is fascinating, I’ve never been more glad that my job does not involve doing anything where the consequences could be summed up as “Tokyo stopped working.”
I'm a guy in a two-person startup team, mostly handling the technical stuff. Woke up to see some error on our main work machine, which also hosts some of the services, after a sudo apt update. Wasn't the first time, and I was usually able to fix it by just googling the error, finding it on SO, and running the fix. Did it again, it told me to uninstall the nvidia drivers, proceeded to do so, and bricked my hard disk. Was completely gut wrenching. Although most of the important stuff was backed up on repos, I still had to rebuild the damn thing. Still not sure what exactly happened, to this day.
Could have been coincidental- the drive was about to die and that triggered it.
As an intern at a major automobile parts manufacturer, I took a hub home for a LAN party. Brought the hub back after the LAN party, promptly plugged it into a network port, and connected the wrong power cable. This was in the server room next to an AS/400 running production.

Took a long vacation weekend as my error proceeded to shut down all production due to network issues causing the AS/400 to freak out.

Can't run a conveyor belt, or robot, or sensor, or production line, or or or, if your mainframe isn't working.

FWIW, I work for an automotive parts manufacturer today and if our AS/400 is down we still can't report production.
During my training, ~2004, I managed to kill the TCP/IP stack on an IBM mainframe running z/OS by accidentally creating a fork bomb with a perl script meant to test the performance of the newly installed BIND9.

Fortunately, it was on a testing system, SNA continued to work, and the system was due to be rebooted on the weekend anyway, so it was not that bad.

Not myself, but a few years ago, a new coworker managed to accidentally delete all user accounts from our Windows domain while trying to "clean up" the Group Policies. Our backup solution, while working, was rather crappy, so we had to restore the entire domain controller (there only was the one), which took all day, even though it was not that big. Fortunately, most users took it rather well and decided to either take the day off (it was a Friday) or tidy up their desks and sort through the papers they had lying around. A few actually thanked us for giving them the opportunity to "actually get some work done".

Years ago we were using NetApp Filers as storage for our database servers in a colo facility. During a planned maintenance window I installed a NetApp OS upgrade and brought everything back on line. At first it seemed fine but as soon as the database servers got some load they started dropping their connections to the NetApps and everything crashed.

Of course I blamed NetApp and called their tech support screaming for help with their OS "bug". After hours of troubleshooting we finally figured out that the NetApp OS upgrade had included a network performance optimization and it was now sending out packets fast enough to overflow the buffer on our gigabit Ethernet switch. The packet loss rate was huge. Fortunately we had a newer switch back in the office so after swapping that out and repairing some corrupt databases I was able to get production back on line. Didn't get any sleep that night though.

I wrote some software to handle charging customers' credit cards. It worked fine in the dev environment, so a week later we deployed it, since we hadn't been charging customers at all until that point. We would run bills once a day until all the unprocessed billing was caught up.

Well, in dev, the database was refreshed with prod data every night at midnight, so we never saw the bug in my code. I had a sign error in updating the customer's balance, so instead of lowering their balance by the payment amount, my code increased their balance. Geometric growth is an amazing thing. A few days later we had calls from angry customers because we had maxed out their credit cards. Miraculously, I was not fired. In retrospect, I think that it might have been because the manager would have then had to explain why he had not made sure there was adequate testing on something so central to the business.

Had a script that managed disk space on Windows servers. The config file was XML, and PowerShell did not treat the commented-out child element as nothing; it imported a blank child element into the PSObject.

So when it ran, it looked in a blank directory (default c:\windows\system32) for blank conditions: zip filetype matching blank, delete files over blank age.

These servers were rebuilt over a weekend, and the script was scheduled again, and broke these servers again, requiring another rebuild.

When I came in on Monday, I was told that the script had caused this carnage. I didn't believe them until I read the debug logs in horror.

Luckily the config was specific to a subset of servers, but they happened to be servers that police GPS radios relied on to function.

Suffice to say it now has a lot of defensive programming in it to test the config file and resulting config object before doing anything.

Learned the hard way to properly label two identical bits of hardware before working on them.

One ISP router in production with 20k active connections... one "backup" router fresh from the box.

My job was to backup the production firmware and flash the config to the spare box.

The opposite happened and the customer support telephones lit up like a Christmas tree.

Not me, but someone pinged our Slack chat asking for us to "please revert".

We didn't realize the issue until he admitted that he had run an update on the full user table (forgot a where clause) and every single email was now being funneled into his email account.

Years ago, my employer was light on funds so we cobbled together plugs to use as a loopback when testing and identifying network jacks. It worked great. Insert the plugs in cubicles, then test the open ports in the wiring closet. It worked great many times, until one day we plugged it in and went to lunch. When we came back, we were told the network had slowed to a crawl and captures showed floods. This was during the days of primitive DoS via broadcast floods. Well, this flood was self induced. The loopback plug was inserted into a jack that had a connection back to the network hub. It dutifully retransmitted everything it saw back onto the network. Whoops.
Exact same scenario, but it was connected to the on-premises data center. I was imaging devices and figured why not use a switch and provision 4 at a time. Started seeing everything go down and figured the network team was doing things. Checked my images and they had stopped; I unplugged a cord and everything was working again. I didn't think anything of it, plugged it back in and went back to waiting for the images to deploy, but then the storm started. After about 5 hours, once the network team found my device, customers started receiving their electricity again.
About 5 years ago when I was just starting out I found myself designing a responsive course builder. My solution to a responsive interface at this time involved sending a very large stringified HTML file over websockets.

This wasn't a huge problem, but the configuration on Action Cable (Rails wrapper around websockets), logged the entire contents of the message to STDOUT. At a moderate scale, this combined with a memory leak bug in Docker that crashed our application every time one of our staff members tried to perform a routine action on our web app. This action resulted in a single log line of > 64kb, which Docker was unable to handle.

All of this would have been more manageable if it hadn't first surfaced while I was taxiing on a flight from Detroit to San Francisco (I was the only full time engineer). I managed to restart the application via our hosting provider's mobile web interface, and frantically instructed everyone to NOT TOUCH ANYTHING until I landed.

Drilling through a wall, routing a new network line; hit the power line to the server rack. There was a UPS, but it didn't like that kind of short apparently, and folded up into a sulk immediately.

Best part is that I did the wiring in that building when it was built 5 years before that; I really should have realized it was there.

When I started as an intern, I couldn't access a file under /etc (or some other privileged top directory). So I needed to sudo cat or something each time I wanted to access the file. Then I decided to chmod 777 /. Then the machine couldn't boot up anymore...
About 15 years ago, when ssh-ing into servers was quite normal.

In eterm on my gentoo linux laptop with enlightenment desktop I typed: su - shutdown -h now

Because I was tired and I wanted to go to bed. Came back after brushing my teeth. F### laptops and linux! Screen still on. The thing didn’t shutdown!

Strange thing was: in the terminal something said it got a shutdown signal.

Then I realized I had shut down a remote server for a forum with 200k members.

It was on the server of an ISP employee, who happened to be a member of that site. All for free, so no remote support and no KVM switches. Went to bed and took a train early the next morning to fix it.

Is SSHing no longer normal? What do the cool cats do these days to manage their servers?

I use K8s and docker to run software on my server, but initiate these via SSH. I suppose CI is perhaps the more modern approach, or what else is everyone using?

Managed stuff like AWS Fargate and ECS is what I want to use at work. ATM I've got an EC2 server instance with SSM taking care of it, so I don't have to shell in too often.
I have some of these.

Does self DDOS count?

We worked for Flanders radio and television (the site was one of Flanders' biggest radio stations). The site was an angularjs frontend with a CMS backend.

The 40x and 50x pages fetched content from the backend to show the relevant message (so editors can tweak it). The morning they started selling tickets for Tomorrowland, I deployed the frontend, breaking the js so it fetched a non-existing 5xx page and looped on doing this constantly. In a matter of seconds the servers were on fire and I was sitting sweating next to the operations people. Luckily they were very capable and were able to restore the peace quite quickly.

And also (other radio station) deleting the DB in production. And also (on a bank DB2) my coworker changing the AMOUNT in all rows of cash plans instead of in 1 row (AND and OR and brackets... you know).

I spelled "tariff" incorrectly with "tarriff" in the config file that's parsed on every page load.

My code reviewers didn't notice and we didn't have linting or warnings on that file, so I brought down production :)

Very first job, as an intern, I was tasked with building a "free text search engine" for the product, using their api. Maybe my first week or so there, I left a script running over lunch. Turns out the internal IP addresses weren't subject to the rate limiting, and my script's queries were growing exponentially (I was sending the response back to the same endpoint which was querying with the response, and giving me back a larger response etc..) Within 20-30 minutes or so every production machine was stuck running one of my queries. And it happened on the day that the engineering team were taking the new intern out for a team lunch...

At the time I was mortified, but in hindsight the fact that I was able to do that in the first place was really the issue, not my script.

I had web server access, but they wouldn't give me a DB login.

So I crafted an .asp to do my maintenance.

Only I was calling CreateObject() in the for loop to get a new AdoDb.Connection for each of the array entries of the data.

That creaky IIS server crashed like the economy.

Impacted iOS Kindle users ... We had an oopsie.

https://www.google.com/amp/s/techcrunch.com/2013/02/27/bug-i...

Customer fears were far worse than reality, but my management team (up to and including JeffB) were not amused.

I worked for a company back around 1999 that used MS Access to do the entire company's payroll. Including the CEO. Once I was awoken at 1am because payroll wasn't running. In my sleepiness I accidentally manually committed a few database values (that I had never touched before) that gave the executive team 0.00 paychecks. Not a single exec, including the CEO, noticed for about 6 weeks.
I had a product that had mostly stopped growing in usage. It was running on say fifty machines. I had put considerable effort into some memory optimizations, which was the scaling point for new hardware, so I talked Ops into bunching the active traffic load onto fewer machines. All of the active traffic load. Started hitting the memory limits (32 bits linux) and our server framework exited on malloc failure, so lots of exiting of long lived processes and loss of expensive state, delayed alerts etc.

I still think it was over-provisioned, but they told Ops to stop listening to me unless someone else agreed. Probably ran on the 50 machines till it was discontinued 10 or 15 years later, but I left so who knows.

I was 18 and had just taken over the website of a car and horse trailer dealership. I typed “;rm -fR ~” into an e-mail form to show my coworkers what would happen. On purpose. I quickly restored it (I was ready for this.) They were pretty amused. We had no concept of “on prod”. “Dev” was my local PC. Damn teenager.
I was working on Nationbuilder, a horrible all-in-one master-of-none thing, back in 2016 or so, and ran so many concurrent tasks in our BE that it started affecting all of their other clients. Waking up an engineer in CA was fun
That thing is a pile of absolute dog shit. New up and coming UK political party are using it - I volunteered to help with setting up / maintaining their tech stack, realized within 5 minutes all their eggs were in that godforsaken basket and changed my mind.
Slightly off topic, but I think in most cases responsibility for accidentally bringing down production lies with management.

However I’ve only ever heard stories where management lays the blame.

The partition with the mSQL database on it filled up; I moved it to /tmp. On a Solaris box. Which rebooted some weeks later.
But it ran much faster until the reboot... LOL
Doom on a single PC was fun. But the same game on our network, with all those broadcasts running through our net, which hosted a university's PCs. That was bad. And I started it several times until I noticed that "I" was the cause of the network slowdown.
Good friend of mine once fat fingered something like: sudo chown -R 777 /
heard a few years back, someone accidentally deleted everything across all AWS accounts at our company; had to reach out to AWS and they helped recover everything. Took 6-12 hours to recover
While working at one of the top 3 Global airlines (around 2015), I deployed an experimental feature that streamed the real-time airport indoor location (activated upon entering a geo-fence) into the airline's iOS mobile app used by hundreds of thousands of customers daily.

Setup was, mobile app -> detect beacon & ping web endpoint with customer-id+beacon-uuid -> WAF -> Web application -> Internal Firewall -> Kafka Cluster -> downstream applications/use cases

It was an experiment — I didn't have high expectations for the number of customers who'd opt in to sharing their location. The 3 node Kafka cluster was running in a non-production environment. Location feed was primarily used for determining flow rates through the airport which could then predict TSA wait times, provide turn by turn indoor navigation and provide walk times to gates and other POIs.

About a week in, the number of customers who enabled their location sharing ballooned and pretty soon we were getting very high chatty traffic. This was not an issue as the resource utilization on the application servers and especially the Kafka cluster was very low. As we learned more about the behavior of the users, movements and the application, mobile team worked on a patch to reduce the number of location pings and only transmit deltas.

One afternoon, I upgraded one of the Kafka nodes and before I could complete the process, had to run to a meeting. When I came back about an hour later and started checking email, there were Sev-2/P-2 notifications being sent out due to a global slowdown of communications to airports and flight operations. For context, on a typical day the airline scheduled 5,000 flights. As time went on it became apparent that it was a Sev-1/P-1 that had caused a near ground stop of the airline, but the operations teams were unable to communicate or correctly classify the extent of the outage due to their internal communications also having slowed down to a crawl. I don't usually look into network issues, but logged into the incident call to see what was happening. From the call I gathered that a critical firewall was failing due to connections being maxed out and restarting the firewall didn't seem to help. I had a weird feeling — so, I logged into the Kafka node that I was working on and started the services on it. Not even 10 seconds in, someone on the call announced that the connections on the firewall were coming down, and another 60 seconds later the firewall went back to humming as if nothing had happened.

I couldn't fathom what had happened. It was still too early to determine if there was a relationship between the downed Kafka node and the firewall failure. The incident call ended without identifying a root cause, but teams were going to start on that soon. I spent the next 2 hours investigating, and the following is what I discovered. The ES/Kibana dashboard showed that there were no location events in the hour prior to me starting the node. Then I checked the other 2 nodes that were part of the Kafka cluster and discovered that, being a non-prod env, they were patched during the previous couple of days by the IT-infra team and the Zookeeper and Kafka services didn't start correctly. Which meant the cluster was running on a single node. When I took it offline, the entire cluster was offline. I talked to the web application team who owned the location service endpoint and learned that their server was communicating with the Kafka cluster via the firewall that experienced the issue. Furthermore, we discovered that the Kafka producer library was set up to retry 3x in the event of a connection issue to Kafka. It became evident to us that the Kafka cluster being offline caused the web application cluster to generate an exponential amount of traffic and DDoS the firewall.

Looking back, there were many lessons learned from this incident beyond the obvious things like better isolation of non-prod and production envs. The affected firewall was replaced immediately and some of the connections were re-routed. Infra teams started doing better risk/dependency modeling of the critical infrastructure. On a side note, I was quite impressed by how well a single Kafka node performed and the amount of traffic it was able to handle. I owned up to my error and promptly moved the IoT infrastructure to the cloud. In many projects that followed, these lessons were invaluable. Traffic modeling, dependency analysis, failure scenario simulation and blast radius isolation are etched into my DNA as a result of this incident.

Early on in my career, I worked for a secret unit of a secret government law enforcement group that handled surveillance. Being young, full of verve, and not nearly as smart as I thought I was, I was always trying to improve things and tinker. Knowing nothing about networking, I plugged a switch into itself. Due to the configuration it knocked the entire surveillance network offline and everyone was freaking out. I was cool as a cucumber, because it couldn’t have been me. Must have been a coincidence right?

Right?

The sense of dread that dawned on me as the former Navy Seal turned Network Engineer (and later doctor) started sniffing around the switch I had just touched was palpable. Luckily for me, he kept my mistake quiet and fixed it quickly.
