IT Pro Panel: Building your backup strategy
Digital transformation is all the rage, but it’s important not to forget the essentials of IT
Wherever you look in the world of modern business, digital transformation is all but inescapable. In every sector, organisations are reinventing their IT stacks from the ground up and rolling out cutting-edge technologies to deliver brand new capabilities.
Although ambitious projects to roll out next-generation artificial intelligence (AI) analytics or omnichannel collaboration platforms are laudable efforts, the importance of IT fundamentals shouldn’t be overlooked. Ensuring you’ve got the basics sorted will provide a strong foundation on which to build a transformation programme – and there are few elements of an organisation’s IT estate more essential than backup.
Managing business data backup is hardly the most stimulating of an IT leader's jobs, but, like getting your car serviced or buying insurance, it's easy to dismiss it as a boring, low-priority chore until it suddenly reaches a crisis point. Building a solid backup strategy, in fact, is a core part of IT best practice, and we asked our panel of expert leaders how they approached this crucial business task.
Size matters
When dealing with backups, one of the first considerations is going to be size. The quantity of data involved will govern a number of things, including what kind of infrastructure is required, how expensive it is, and how long it will take to restore that data in the event of an incident. There are many factors that inform how large a given organisation’s backups will be, including its size, the specific industry or vertical it operates in, and the composition of its IT estate.
As an illustration of this, the size of our panellists’ backups varied wildly; accounting firm Kreston Reeves stores around 100TB, according to CTO and operations director Chris Madden, while Rooster Money’s CTO Jon Smart reports its backups total just 6TB, split across a number of separate backup sets.
Director of IT and change for Melton Building Society Rita Bullivant says a complete tally of all of its backups comes to around 25TB. She points out, however, that many back-office and line-of-business systems are backed up separately by the partner that provides them – and those partner-held backups are roughly double that size.
Despite all three panellists working in the financial services sector, the amount of data backed up across the three firms is distinctly different. Part of this disparity is the result of what kind of data they have to back up, and the specific configuration they use to do it.
As CTO of Hello Magazine, for example, a large portion of Andy Macharg’s 20TB of backups consists of 20 years’ worth of large files, including high-resolution photographs, print design templates and other rich media that organisations in different sectors may not have to contend with.
Kreston Reeves, by contrast, stores full images of all virtual machines (VMs) including system states, data and drives, with multiple copies to enable point-in-time restores. The nature of its business as an accounting firm means this includes vast reams of customer information, while a building society such as Bullivant’s generally holds less data on a per-customer basis. This is on top of the fact Melton’s customer data is backed up by its systems partner.
“We took the view to back up everything,” Madden says, “as it’s in our system, so it must be required by someone – and if data retention policies are in place to ensure compliance with GDPR, then that all joins together.”
As with Smart and Bullivant’s organisations, the regulatory requirements of the finance industry mean Madden needs to hold onto certain parts of his data for up to seven years, which can further increase scale requirements. To meet these needs, Kreston Reeves has chosen to host all of its backups with a cloud provider, Redstor, which Madden says offers a number of advantages.
“One of the cool things they can do is we can mount a virtual drive on their system and attach it to our VM in order to review or copy a file, without the need to do a full backup. It also helps should a server ever go down, as we can attach the data drives to another machine.”
Host with the most
The flexibility of the cloud proved to be popular with our panellists, and many of them report that their backup systems are, or soon will be, hosted in the cloud. Macharg’s backup infrastructure is currently hybrid, with NetApp providing a large portion of the company’s on-prem storage, but he’s in the middle of refreshing the organisation’s IT across the board, and says he plans to be “100% cloud” within 12 months.
“It’s part of a wider web platform rebuild,” he explains. “We’re looking for a CMS-as-a-service provider to simplify part of the work, with a front end based on AWS Amplify, GCP Cloud Run, or Vercel. The first site is scheduled for the summer. After that, the time frame is going to be governed by the content migration – we still have static HTML files and two different CMS systems with different data structures.”
“The vast majority of our estate is in the cloud,” says Moonpig CTO Peter Donlon. “For the small amount of physical infrastructure we have, mainly related to our factories, we backup encrypted to AWS S3. For our cloud infrastructure, most AWS services have a backup option that we utilise. For example, we use DynamoDB quite extensively and use its built-in point-in-time recovery as a backup option.”
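The two AWS features Donlon mentions – encrypted uploads to S3 and DynamoDB’s built-in point-in-time recovery – are both switched on through standard API calls. As a hypothetical sketch (the table and bucket names are invented, and the request parameters are built but not sent; in practice they would be passed to boto3’s `update_continuous_backups` and `put_object` calls):

```python
# Hypothetical sketch of the AWS request parameters behind the kind of
# setup Donlon describes. Nothing here talks to AWS; the dicts are what
# you would hand to the DynamoDB and S3 clients.

def pitr_params(table_name: str, enabled: bool = True) -> dict:
    """Parameters for DynamoDB's UpdateContinuousBackups API,
    which enables point-in-time recovery on a table."""
    return {
        "TableName": table_name,
        "PointInTimeRecoverySpecification": {
            "PointInTimeRecoveryEnabled": enabled,
        },
    }

def encrypted_put_params(bucket: str, key: str, body: bytes) -> dict:
    """Parameters for an S3 PutObject request with server-side
    encryption, for backing up on-prem data encrypted at rest."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        # SSE-S3 managed keys; "aws:kms" would select a KMS key instead
        "ServerSideEncryption": "AES256",
    }
```

Once point-in-time recovery is enabled, DynamoDB continuously retains change data, allowing a restore to any second within the retention window rather than to a fixed snapshot schedule.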
Rooster Money’s infrastructure is 100% cloud-based, and Smart says the key factors he looks for when evaluating backup systems are the granularity of their point-in-time restore functions, as well as how long it takes to restore from backups.
“We have had a failure of service in the past, which caused the outage to be as long as it took to restore, and then rebuild items that may have missed the window. Our latest solutions (to the key data) offer restore points to the fraction of a second. It required specific technology choices for how the data is stored, which play out in other decisions. We only use that technology for key operational data; for other aspects of the business, we accept that we could experience data loss or longer periods of time to restore.”
The top items on Madden’s shopping list include ease of use, simplicity of maintenance, alerts for failed backups, and reporting, as well as some of the quality-of-life features such as those offered by Redstor.
“In addition, a further point to consider is whether they back up the backups to a secondary data centre, which is geographically separated,” he notes. “Of course, cost versus benefit also comes into the equation too.”
“I agree with Chris,” Bullivant adds. “It’s important to evaluate time for restore, should it be required – and for us, it’s a balancing act. We want it all, but we need to consider affordability.”
For Bullivant, this aspect is perhaps more pressing than most; Melton Building Society is in the midst of an ambitious – and long overdue – IT renovation project which has seen many systems needing a complete overhaul. Rather than acting as a silver bullet for affordable transformation, she says that moving its backups to the cloud is not currently feasible, partly due to the higher ongoing OpEx costs that cloud platforms bring with them.
“Being blunt, we haven’t found cloud to be more affordable than on-prem in practice,” she says. “We've had to modify our approach to change because of costs; we’ll mitigate the risk with the solution we have now, then after a certain period of time, we’ll move to cloud if we can achieve the necessary level of business growth.”
“Mine is a business in transformation mode, trying to address legacy debt across the board… Once you've picked one thing to fix in a legacy scenario, it leads to a domino effect. The programme to address our legacy has been in flight since January 2021, and finishes completely in May this year – and I’m so looking forward to it!”
Schrödinger's backup
All of these elements must be taken into account when planning how to implement backups, but putting a backup system in place is only the start of the process. Once you’ve configured your backup schedule and got the process up and running, it’s essential to make sure that it’s actually working as intended. All of our panellists agree that, although it’s definitely not a fun job, regularly running tests on backup systems is a key part of ensuring resilience. Without it, organisations have no guarantees their backups will protect them in the event of an incident.
“I have seen a number of times that testing shows issues which, in a real emergency, would delay recovery times significantly,” notes Madden. He recommends organisations keep a ‘run book’ – a detailed guide for how to run particular processes or systems – which will allow backup and restore processes to be carried out even if key personnel aren’t available.
This also goes hand-in-hand with robust failover plans, providing business continuity when services or applications are disrupted. Kreston Reeves uses a full disaster recovery service with a third-party data centre the organisation can fall back on if it needs to, while Rooster Money’s cloud infrastructure allows systems to be spread across multiple territories.
The question of how regularly tests should be carried out is somewhat down to preference. Bullivant, for instance, thinks they should be done at least once per quarter, while Smart aims for bi-annual testing. Madden, Smart and Bullivant all reported that full system tests generally take around half a day to complete, but while it might be tempting to put that time towards more directly productive use, Smart notes that it’s a valuable time investment in the long run.
“It can be a pain in the arse, but we do have to be disciplined to make sure we do it,” he admits. “I usually convince myself that I would rather do it when it’s not required, than do it when it’s critical and many people are chasing for status updates – it’s far less stress.”
Adam Shepherd has been a technology journalist since 2015, covering everything from cloud storage and security, to smartphones and servers. Over the course of his career, he’s seen the spread of 5G, the growing ubiquity of wireless devices, and the start of the connected revolution. He’s also been to more trade shows and technology conferences than he cares to count.
Adam is an avid follower of the latest hardware innovations, and he is never happier than when tinkering with complex network configurations, or exploring a new Linux distro. He was also previously a co-host on the ITPro Podcast, where he was often found ranting about his love of strange gadgets, his disdain for Windows Mobile, and everything in between.
You can find Adam tweeting about enterprise technology (or more often bad jokes) @AdamShepherUK.