Without a backup, your pants are on fire

I shouldn’t even need to write this article. It’s silly, but I have to do it anyway. So here goes…

Not having a backup of mission critical applications and data is an emergency equivalent to having your pants on fire.

Most business owners and stakeholders already assume that you have a backup. They assume that, even in a disaster, there would be some way to retrieve the data and get back up and running in some amount of time. So when you know about the problem and do not address it head on, what does that make you? Oh, I know: you’re a liar-liar…

Let me expound further in case you need any clarification on what I’m saying. Just imagine for a second that your pants were, in actual fact, on fire. At this very moment. You just realized, “Holy eff, I’m on fire!”

If this were really happening to you, everything else in your life would instantly become less important than solving that problem. Every single urgent task on your to-do list for the day would be subordinated to one job: GET THAT FREAKING FIRE PUT OUT RIGHT FREAKING NOW!

So if you are sitting here reading this, and you still don’t have a backup of something important, or you have serious doubts about your backup, then stop what you’re doing, and go figure out how to get just one backup. Right now. Go.

Okay, I hope that analogy clears up this issue. Forever.

Excuses 

In case it doesn’t, consider the following *fictional* situation. I want you to meet Jim.

“Jim” inherited responsibility for a rather large network (by SMB standards) with more than its fair share of problems.

Within his first few weeks of getting to know this network, Jim had done a good job identifying major gaps & vulnerabilities. He pointed out the lack of backups for certain applications and services, including an entire system dedicated to an application that brought in major money for the company.

He also ran some fancy tools to highlight things like missing security patches and open, vulnerable ports, and he had spent a couple of weeks drafting an elaborate infrastructure upgrade plan, complete with dollar amounts and equipment lists, to alleviate issues and improve performance and resiliency.

Additionally, he was able to step in and help users with some long-standing issues, and things were looking up. Users were pleased to have support from someone who seemed to know what they were doing.

But guess what? After a month of being in charge of this network, he still didn’t have a backup of that major money-generating system. To add insult to injury, he had also become aware that the same system had a failed component. Luckily there was a second such component keeping it all running, but the system could not afford to lose another one at this point.

A couple of weeks passed as he tried to order a replacement part from the manufacturer, only to find out that the system was not under warranty and that getting a support contract back in place was going to take a few weeks.

By now you can see where this is going.

In his defense, Jim did, at least, bring this issue up to management. “We need this part replaced ASAP,” he said. “It’s going to be expensive to get the support or to pay for it out of pocket, but the alternative path is much more expensive, as we could lose this system.”

These things were true.

But do you know what was even more important than getting a warranty for that hardware, or a replacement part?

How about getting a freaking backup?

So when the outside consultants eventually got involved, they naturally asked, “So Jim, how is it that we do not have a backup of this system at this point? You’ve known about this issue for at least a few weeks.”

His reply was that the current backup infrastructure was overtaxed, and that they couldn’t add new job definitions because there was no storage to accommodate them; they needed approval and budget for those upgrades before proceeding.

…You know what Dad used to say: “Excuses are like asses…”

I wonder how many hours went by during the inevitable disaster event before someone in management, watching money disappear, said, “I wish we would have spent a little more money proactively to prevent this…”? (Hint: usually less time than you think).

Do Something, do Anything. Just don’t do Nothing.

Look, I don’t care by what means you put the fire out. If you have to jump into a lake of acid to snuff out the flames, it might just be worth it.

And you know what? I know it is just about as painful, inconvenient, and embarrassing, but if you had to drive down to Best Buy and get an external hard drive (or five) for your small enterprise’s (temporary) backup solution, could you do it? Heaven forbid, you might have to plug consumer-grade USB drives into your server or a nearby workstation in order to grab a one-off backup with some kind of free utility, but at least you’d have SOMETHING.

In a Disaster Recovery scenario, having something is infinitely better than having nothing. Any backup is better than zero backup, even if you’re just using something free and simple like robocopy, Windows Backup, the free version of Veeam, or *gasp* dragging & dropping some files.
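If it helps to see just how low the bar is, here’s a minimal sketch of that kind of quick-and-dirty, one-off copy in Python. The source and destination paths are hypothetical placeholders (your critical data and that consumer-grade USB drive), and this is a stopgap illustration only, not a substitute for a proper, scheduled, monitored backup tool.

```python
# One-off, quick-and-dirty backup: copy a directory tree to an external drive.
# Stopgap only -- no scheduling, no retention, no offsite copy. The paths below
# are hypothetical placeholders; point them at your own data and USB drive.
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path(r"D:\CriticalAppData")      # hypothetical: the data you can't lose
DEST_ROOT = Path(r"E:\OneOffBackups")     # hypothetical: that external USB drive

def one_off_backup(source: Path, dest_root: Path) -> Path:
    """Copy `source` into a timestamped folder under `dest_root`."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = dest_root / f"{source.name}-{stamp}"
    shutil.copytree(source, target)       # fails loudly if the target already exists
    return target

if __name__ == "__main__":
    backup_path = one_off_backup(SOURCE, DEST_ROOT)
    print(f"Backup written to {backup_path} -- now verify you can restore from it.")
```

Crude? Absolutely. But it takes you from zero backups to one, which is the whole point.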

Don’t make Jim’s mistake. If you don’t have a backup of something, or your backup has become suspect for some reason, then stop what you’re doing. You need that backup. Right now. Put the fire out. Right now.

Tackle the bigger issues and design problems later. By all means, get the right solution in place eventually–so that it is monitored, automated, offsite, etc.–just don’t dick around with anything else until at least one backup is done.

Why do I even need to relate this story? Like I said: it’s silly. But “Jim” actually exists in real life (OK, I lied, this story wasn’t fictional–whose pants are on fire now?!), and his situation is more common than you’d care to believe.

