Are You Ready for the Next Disaster?

April 15, 2019 Kevin Martin

Recently, a new report on disaster recovery was brought to my attention that got me thinking: when devastating storms threaten, we prepare our homes without a second thought. But do we have a plan for our businesses and customers when disaster strikes?

According to the recent report from CloudEndure, not so much. In a nationwide survey of nearly 400 IT professionals, only 15% (yes, that's "1, 5") of companies surveyed have disaster recovery plans for all of their production machines, and only 23% reported having a disaster recovery plan with continuous data protection. With three-quarters of companies at high risk of data loss, and more than 73% of organizations reporting that nearly all of their apps will be SaaS by 2020, the preparedness gap looms large.

The causes. 

While security and business continuity may come to mind amid looming natural disasters and the ever-present headlines about cybersecurity threats, human error remains the number one cause of disaster and the number one threat to data integrity. According to the report, network failures (17%), external threats (13%), and cloud provider downtime (10%) rounded out the top four causes of disaster and data loss. So IT leaders must be concerned not only with the internal, external, and uncontrollable threats to their systems, but also with the quality and breadth of the recovery plan that follows, because secondary to the initial cause of data loss is how well that plan performs. The better and more practiced the plan, the more quickly systems are reestablished and the lower the risk.

The risks. 

The risks associated with a fledgling or non-existent DR plan are many, but the most pertinent are the loss of data, the loss of customer satisfaction, and the loss of revenue. For most companies, data is their most valuable asset--the secret sauce, let's say. Without its information, an enterprise collapses: quality can no longer be ensured, manufacturing comes to a standstill, and delivery is out of the question. To add insult to injury, customers aren't too keen on data loss and system failure either. Whether it's their data that's gone awry or they're unable to access your product (SaaS, anyone?), unhappy customers rarely enter the recipe for success. So you're not only putting your data at risk--you're risking your reputation as well.

In fact, according to the aforementioned survey, only 38% of respondents claimed they consistently meet their service availability goals. The last I checked, 'data loss' plus 'customer disapproval' equals 'lost revenue'. According to Rand Group, a majority of businesses report that a single hour of downtime costs them upwards of $300,000. And this figure doesn't necessarily include lost major accounts or the public relations nightmare that can result from failed systems and information breaches.
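To put that figure in perspective, here's a back-of-the-envelope downtime cost model. Everything except the $300,000/hour revenue figure is an illustrative assumption, not a number from the report:

```python
def downtime_cost(hours_down, revenue_per_hour=300_000,
                  recovery_staff_rate=5_000, churn_penalty=0):
    """Rough estimate of the direct cost of an outage.

    hours_down          -- outage duration in hours
    revenue_per_hour    -- lost revenue per hour (Rand Group cites ~$300k)
    recovery_staff_rate -- hourly cost of the recovery effort (assumed)
    churn_penalty       -- one-time estimate for lost accounts (assumed)
    """
    return hours_down * (revenue_per_hour + recovery_staff_rate) + churn_penalty

# A four-hour outage with a modest churn estimate:
print(f"${downtime_cost(4, churn_penalty=100_000):,}")  # → $1,320,000
```

Even with deliberately conservative assumptions, a half-day outage lands comfortably in seven figures--which is the point.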

The remediation. 

Now that the scary part is over, it’s time to focus on a future state of mitigated risk where your team is ready at the helm for any threat that may come their way. With awareness, acceptance, resources, and reexamination, you are bound for a safer, more secure enterprise.

Awareness.
The first step to implementing a plan for cybersecurity is awareness of any existing threats, and there are two ways to go about this.

The first is to put together a steering committee and conduct a complete risk analysis of your systems and processes to identify potential vulnerabilities. For instance: are you aware of your biggest weaknesses? Are there comprehensive training plans in place for new or recently transitioned staff? Have you audited vendors for their disaster recovery plans, and, if so, are yours in line with theirs? Begin asking these questions of your teams to uncover both the "low-hanging fruit", which can be mitigated in a straightforward fashion, and the larger, more challenging issues that may require future-state roadmaps.

The second--and less desirable--way to uncover weaknesses is, of course, when disaster strikes. In this case, you have to move as quickly and efficiently as possible to get your systems up and running again. BUT, once this is done, stop. Take stock of the situation, identify the teams and tactics that were most successful in the recovery effort, and use your analysis as a stepping stone toward a comprehensive recovery plan.

In either case, depending on the scope and expertise of your in-house team, now's the time to consider engaging a third-party team with the experience and expertise to overcome the barriers that stand in your way. 


Acceptance.
Once you've identified the first issues to tackle, it's time to take it to the next level--the C-level. Ensuring executive buy-in to your plan is a hurdle in and of itself. However, the first step to this acceptance actually lies in the formation of your steering committee (see Awareness above).

Disaster recovery is hard, expensive, and plans have to be consistently updated--not leadership's dream scenario. But, by engaging senior management early in the process as part of the steering committee, you remove the element of surprise, and they see first-hand the challenges of a faulty program, as well as the successes of those who are prepared. 

(For more tips, take a look at my previous article, "6 Ways You Can Secure Executive Buy-In for Proactive Data Management.")

While disaster recovery is expensive and time-consuming, never lose sight of the fact that, at $300k per hour of downtime, ignoring the issue leaves your company far more exposed than the cost of preparation ever could.

Resources.
Now it's "go" time! With a plan in place and buy-in from senior management, you can finally start to put the plan to work. While these plans will differ from company to company and team to team, it's essential to have procedures in place that identify stakeholders, as well as describe potential threats and how to identify them. 

Once the ideal state is accepted, cultivate and develop detailed, straightforward, and validated SOPs that are readily accessible on a secure system--and not one that is at risk of failure during a potential disaster (maybe even a hard-copy handbook...GASP!). These SOPs should then be turned into inclusive training and drills. Fifteen percent of those surveyed in the report noted that they'd never run any disaster recovery drills, which leaves me asking, "Without testing, how do we validate the process?"

Reexamination.
Once your plans are in place, like any other enterprise-wide data integrity undertaking, it's always time for reexamination. It's imperative to put into place an avenue of communication with the DR steering team in which stakeholders can report potential risks or looming threats. Likewise, continue the charter of the steering committee, holding regular meetings to discuss potential projects and required budgets. 

One truth that remains steadfast in our industry is the fluidity of our staff. As projects come and go, so do our resources. So, it's important to keep your finger on the pulse of staff fluctuation and have a system in place for dynamic training and drills. With this, you'll ensure responsibility and accountability while knowing who the key players are and keeping systems up to date.  

Finally, external partners should not be excluded from your processes. You must consistently examine your partnerships, making sure you have good contracts in place for providers, as well as the funds in place to put your plan into action. 

There's no time like the present. As an industry, we're on the right track, but we have a lot of room for improvement. The Disaster Recovery Report notes that fewer than half of organizations meet their Recovery Point Objectives (the maximum tolerable window of data loss) and Recovery Time Objectives (the maximum tolerable time to restore service), and for a majority of companies, it's the internal IT team that's responsible for managing the entire program.
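For teams formalizing those objectives, even a minimal automated check beats hoping for the best. This sketch (the thresholds and timestamps are hypothetical; wire it to your own backup monitoring) flags when the time since the last good backup has blown past the RPO:

```python
from datetime import datetime, timedelta

# Hypothetical targets -- set these from your own DR plan.
RPO = timedelta(hours=1)   # max tolerable data loss window
RTO = timedelta(hours=4)   # max tolerable time to restore service

def rpo_breached(last_good_backup: datetime, now: datetime) -> bool:
    """True if the time since the last good backup exceeds the RPO."""
    return now - last_good_backup > RPO

now = datetime(2019, 4, 15, 12, 0)
print(rpo_breached(datetime(2019, 4, 15, 10, 30), now))  # → True  (90 min > 1 h)
print(rpo_breached(datetime(2019, 4, 15, 11, 30), now))  # → False (30 min)
```

The same pattern applies to the RTO: record when the outage began, and alarm well before the elapsed time reaches the target--not after.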

Now's time to take stock. From natural disasters to cyberthreats to our competitors, our most precious resource--our data--is forever at risk. And, if you don't have control of your data, it can go away at any moment. 

So, are you ready?

Kevin Martin

Kevin has nearly 40 years of FDA regulated industry experience that includes management positions at Wyeth and J&J/McNeil Pharmaceutical. Kevin's experience spans projects conducted within QA, IT/IM, Manufacturing / Operations, Clinical and R&D. Kevin is a former member of the PhRMA Computer Systems Validation Committee, a former chair of the ISPE DVC CSV Sub-Committee, a former Core Team member for the PDA Part 11 Task Group, and past Chair of GAMP Americas Steering Committee, past Co-Chair of GAMP Global, and former Sponsor to the GAMP Risk Management Special Interest Group. Kevin has a Bachelor Degree in Chemistry from Delaware Valley College of Science and Agriculture and a Master of Engineering in Manufacturing Systems from Penn State University.