
Beyond Recovery: Breaking the Legacy Paralysis

Greg van der Gaast
Independent Security Strategist

October 02, 2024


Part 4 of a 5-part series

One of the biggest sources of cost, crippled capability and reduced agility in IT today is surely “legacy” IT systems. It doesn’t matter whether it’s a 30-year-old on-premises environment in banking, telecom or higher education, or a 5-year-old cloud-based start-up. Almost everyone has some, and the nature of the beast is that these systems are often among the most critical to the business.

So, in the first three instalments of this five-part series we talked about how we can improve our security outcomes over time by tackling the issues causing the business (which includes IT) to introduce risks into the organization.

This approach means everything new will be better from a quality and security perspective than what came before. That includes new systems but also new processes which improve the security of existing systems, too. Think of an improved patching process, or better identity management, for example.

But what about those legacy systems you have now that are so entrenched they cannot be easily replaced? Systems for which general process improvements won’t make a big difference because of their particular, or peculiar, nature? Systems where, due to any number of factors, the business risk of fixing them is perceived as too great?

Breaking Our Dependence on Legacy Systems

Just to be clear, I’ll loosely define “legacy” environments as those where the [security] functionality may not be an ideal fit for the business’s requirements due to a lack of planning or foresight; or where elements were not built with long-term support in mind, or are so poorly documented and complex that making changes to them is simply too risky because of the likelihood of unintended effects.

Because these environments often support large parts of core business functions, the additional cost of keeping them alive is perceived as justified. Replacing them with more easily maintainable solutions is simply too expensive, due largely to the need to reverse engineer, understand and mitigate the risks of an eventual transition.

Their criticality also means that their security is extremely important. Yet for the same reason, they are difficult to update, patch and secure. Modern solutions don’t integrate well – it can be like blending oil and water – and making code or system changes to enable security features or visibility is often too risky.

Efforts to change legacy environments are often met with fierce resistance. Companies are concerned not just about the cost of such changes, but also their potential impact.

Leaving the Fear Behind

It is this resistance, this reluctance, that makes technical debt snowball and entrench itself. And once that’s happened, it’s very difficult to move those environments on. Instead, we build costly and often manual systems and processes around them, both to secure them and to add functionality which could and should be provided by the core system that we’re too scared to touch.

But what if we could get rid of all this fear?

While most organizations use their backup and recovery solution to store a copy of their data and systems in case disaster strikes, the fact is they are sitting on a complete copy of their environment. And that copy lives on storage systems with incredible I/O and optimization technology; systems able to run that data and create a digital twin of the environment.

Better yet, the optimization technology at play introduces the possibility of Thin Digital Twins: functionally identical copies of your systems that use only a fraction of the storage. That means you need very little capacity to run very large twin environments for testing and simulation.
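To make the storage economics concrete, below is a minimal sketch of spinning up a thin, copy-on-write clone from a backup dataset. It uses ZFS’s snapshot and clone commands purely as a widely available stand-in for whatever copy-on-write layer your storage or recovery platform actually provides, and the dataset and twin names are hypothetical.

```python
# Minimal sketch: create a "thin" copy-on-write twin of a backed-up dataset.
# ZFS is used here only as a generic example of snapshot/clone technology;
# the dataset names are hypothetical placeholders.
import subprocess


def run(cmd: list[str]) -> None:
    """Run a command, raising if it fails."""
    subprocess.run(cmd, check=True)


def create_thin_twin(source_dataset: str, twin_name: str) -> str:
    """Snapshot the backup dataset and clone it.

    The clone shares all unchanged blocks with the snapshot, so a
    multi-terabyte environment can be twinned while consuming only the
    space of whatever blocks the twin later modifies.
    """
    snapshot = f"{source_dataset}@twin-base"
    pool = source_dataset.split("/")[0]
    clone = f"{pool}/{twin_name}"
    run(["zfs", "snapshot", snapshot])
    run(["zfs", "clone", snapshot, clone])
    return clone


if __name__ == "__main__":
    # Hypothetical backup location; point this at your own copy.
    twin = create_thin_twin("backup/prod-db", "prod-db-twin")
    print(f"Thin twin available at {twin}; test against it, then destroy it.")
```

Because the twin shares unchanged blocks with the snapshot, tearing it down and recreating it is cheap and fast, which is exactly what makes the kind of aggressive, repeated testing described below practical.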

My first introduction to Thin Digital Twins was in the context of penetration testing. The appeal there is that most organizations don’t do penetration testing thoroughly, and rarely fully test their most sensitive production systems for fear of disrupting them, if they test them at all.

But what if you instead had a functionally identical replica of the whole environment that could be used with no risk and no impact? It could be tested completely, and with what might otherwise be considered dangerous ferocity.

You’d get much more thorough results with far less planning overhead, because disrupting a twin doesn’t matter: you’re not altering the stored environment. And you’d get them with virtually zero risk, which is far better than even the most carefully planned (and expensive) white-glove approach.

Also, test or staging environments are often used for penetration testing, and these are rarely true functional replicas of production. A real twin of production will always be more accurate and less likely to let a critical issue go unnoticed.

Breaking the Cycle of Paralysis

But what if we applied the above principles to break the cycle of paralysis that both creates and entrenches legacy environments?

Removing the fear and risk of making changes to these environments lets us rapidly dissect, break-test and simulate changes to legacy systems. That dramatically brings down the cost and time required to pull those systems out of their legacy state.

From a cost-calculation standpoint, it changes priorities, because what was previously too expensive may now be cheaper than the status quo. That means there is now financial motivation (and business support) to fix the legacy problems we’d previously been saddled with.

When we can simulate a process change, a configuration change or even an entire replacement system against the replica of our environment that already sits in our backup and recovery infrastructure, fixing or replacing the system with something better can end up cheaper than keeping the old one running.
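As a purely illustrative back-of-the-envelope calculation (not figures from this article), the sketch below compares the cumulative cost of keeping a legacy system alive with the cost of replacing it when twin-based testing strips out most of the discovery and regression-testing expense. Every number is invented; the point is only that the comparison can flip.

```python
# Illustrative comparison with made-up figures: keeping a legacy system
# alive versus replacing it, where twin-based testing cuts the riskiest
# part of the replacement (discovery and regression testing) substantially.

def total_cost_keep(annual_run_cost: float, annual_workaround_cost: float,
                    years: int) -> float:
    """Cumulative cost of keeping the legacy system over a planning horizon."""
    return (annual_run_cost + annual_workaround_cost) * years


def total_cost_replace(build_cost: float, testing_cost: float,
                       testing_discount: float, annual_run_cost_new: float,
                       years: int) -> float:
    """Cumulative cost of replacing it, with testing made cheaper by a twin."""
    return build_cost + testing_cost * (1 - testing_discount) \
        + annual_run_cost_new * years


if __name__ == "__main__":
    horizon = 5  # years
    keep = total_cost_keep(annual_run_cost=400_000,
                           annual_workaround_cost=250_000, years=horizon)
    replace = total_cost_replace(build_cost=900_000, testing_cost=600_000,
                                 testing_discount=0.7,
                                 annual_run_cost_new=150_000, years=horizon)
    print(f"Keep legacy for {horizon} years:  ${keep:,.0f}")
    print(f"Replace with twin-tested system: ${replace:,.0f}")
```

With those assumed numbers, keeping the legacy system for five years costs roughly $3.25M against about $1.83M to replace it: the kind of reversal that turns remediation from an unaffordable project into the cheaper option.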

Of course, these benefits are most pronounced with legacy systems, but the same approach can be applied to any other system or change, not to mention aggressive testing in general. All of which helps you unblock, accelerate and dramatically cut the costs of your security transformation.

These benefits aren’t limited to improving security, either. Business functionality can also be added more easily, further benefitting the business and the bottom line.

To recap: in previous instalments of this series, we established that sustainable, continuously improving security is a by-product of quality, and that it needs to be implemented as part of a holistic and strategic approach. We also looked at how storage and recovery capabilities can act as an enabler and accelerator of that approach.

The ability to rapidly test and prototype against a faithful digital twin of your environment is another way to accelerate that progress. And the ease with which it can be done has a significant impact on the cost, and therefore the financial prioritization, of activities.

It makes it more likely that the best approach for the bottom line, even in the short term, is to fix things properly now. And as we discussed in our last instalment, that kind of financial justification is sure to garner support from the business for your security transformation.

What Would You Do with a Digital Twin?

I’ll leave you with some simple questions:

If you could generate digital twins of any system, or even a whole environment, and test or simulate anything with reckless abandon, what would it be?

What would bring you the greatest security or cost improvement?

What kind of change could you drive then?

Good food for thought…

Join us next time for the final chapter in our series, with a deeper look at the EU’s upcoming Digital Operational Resilience Act (DORA), which also goes beyond recovery.

Read the Previous Articles in This Series


Greg van der Gaast

Greg van der Gaast started his career as a teenage hacker and undercover FBI and DoD operative, but has since become one of the most strategic and business-oriented voices in the industry, with thought-provoking ideas often at odds with the status quo.
He is a frequent public speaker on security strategy, the author of Rethinking Security and What We Call Security, a former CISO, and currently Managing Director of Sequoia Consulting which helps organizations fix business problems so that they have fewer security ones.