How to prepare and prioritize workloads for cloud migration


Failing to plan and prepare your workloads for a cloud migration is, in effect, planning to fail – and the devil is in the detail here no less than in other complex IT transformations.

Jon Collins, senior industry analyst and vice president of engagement at research firm GigaOm, warns that moving workloads to the cloud without considering suitability has gotten organizations "stuck". In these situations, contracted cloud resources can be wasted.

"Cloud's great for quick-starts, certain answers, quickly spinning up applications, testing, and learning,” Collins says. “But you can't be hazy about it, then need 65 meetings to work things out or nothing gets built."

To begin with, businesses must analyze their internal operating models. Remember: the base resource cost in the cloud will likely be of a similar order of magnitude to on-prem. Governance is critical, especially for jumps into AI, so it's imperative to scope out what you want to move and what it costs today versus in the cloud, rather than pursuing a cloud-at-all-costs approach.
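By way of illustration, the minimal sketch below rolls up today's costs for a single workload against a rough cloud estimate. Every category and figure is an invented placeholder, not real pricing; a genuine exercise would draw on audited numbers.

```python
# Rough, illustrative scoping of today's cost versus cloud for one workload.
# All numbers are invented placeholders -- substitute your own audited figures.

on_prem = {
    "hardware_amortized": 12_000,  # annual share of server purchase cost
    "power_and_cooling": 3_000,
    "datacenter_space": 2_500,
    "licenses": 6_000,
    "ops_staff_share": 9_000,
}

cloud = {
    "compute_instances": 18_000,   # e.g. reserved instances, annualized
    "storage_and_backup": 4_000,
    "egress_and_networking": 2_000,
    "managed_services": 3_500,
}

on_prem_total = sum(on_prem.values())
cloud_total = sum(cloud.values())

print(f"On-prem today: ${on_prem_total:,}/yr")
print(f"Cloud estimate: ${cloud_total:,}/yr")
print(f"Delta: ${cloud_total - on_prem_total:+,}/yr")
```

Even this toy comparison makes the point above: the totals tend to land in the same order of magnitude, so the case for moving usually rests on more than raw resource cost.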

Where 'lift and shift' offers savings, the specifics still require close attention. Can you actually acquire and deploy what you need to see the benefits? Dependencies between systems and services may run deep down the stack, even with extensive virtualization.

"Solve for any lack of decision-making, and you might be three-quarters of the way there," Collins adds. "Play with ideas, consider sandboxing to build strategy skills and technical skills."

Cloud-native visions of being able to shift between vendors and move workloads around a multi-cloud setup at will aren't yet reality, he notes. All the options for each application need prior consideration, from virtualization to rationalization, migration, and modernization or automation.

"Do you need a thing at all? If not, turn it off," Collins adds.

Large language models (LLMs) may, in the future, help model workload performance and draw inferences that support better migration decisions. However, beware of hallucinations, Collins warns.

Jake Madders, co-founder and director of Hyve Managed Hosting, says cloud migration must come at the right time, and can be price-competitive depending on estate size and server infrastructure. Compute-intensive, process-hungry workloads need careful examination, especially at smaller organizations.

"A very small workload running on one to three physical boxes, chances are, can be cost-effectively virtualized."

Where to start with cloud migration

Madders prescribes first looking at physical machine specs, including processor count, storage, and RAM, versus current utilization. He tells ITPro that IT teams should analyze and document running applications to build a picture of how the new environment should look, remembering that some customers, with heavy AI compute or similar, may not be a good fit for the cloud.

"Average workloads such as remote desktops or storage servers should be fine. Look at interconnections. Look to match the power but virtualized," he says. "Consider tooling. There are more advanced methods in a huge environment."

Online calculators exist that can rough things out, but Madders says they typically ask for really hard-to-answer details, such as how many times you're doing a specific read, so the result is "very loose". Meanwhile, updating applications and cleaning up your data can be worthwhile, especially if they run on old, unpatched, or otherwise vulnerable infrastructure.
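The looseness is easy to demonstrate. In the hypothetical sketch below, a single guessed input, the read rate, swings an invented request-cost estimate by two orders of magnitude; the per-request price is made up and matches no provider's real rate card.

```python
# Illustrative only: how a loose read-rate guess swings a cost estimate.
# The per-request price is invented, not any provider's real rate card.

PRICE_PER_MILLION_READS = 0.40  # assumed dollars per million requests

for reads_per_second in (50, 500, 5000):  # plausible guesses for one app
    monthly_reads = reads_per_second * 60 * 60 * 24 * 30
    cost = monthly_reads / 1_000_000 * PRICE_PER_MILLION_READS
    print(f"{reads_per_second:>5} reads/s -> ~${cost:,.0f}/month")
```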

"You have to see what you can do. Maybe point the cloud infrastructure to that old physical server and clone it in block by block, so it comes into the cloud as legacy but virtual," he says. "Sandbox that and isolate it from the internet, over time bringing developers in to update it."

Customers may need an offering unavailable on public cloud, with a certain amount of cache, for example, that will need to be sourced and made to interface with the rest. Heavy processing may go onto dedicated servers or bare metal connected back to a cloud infrastructure, he notes.

Availability, scalability, and vendor lock-in potential are other reasons to take advice before migration – not least because costs can spiral if you're caught short.

"Costs can continue to ramp as you bring in more infrastructure and customers. A big provider can come in, helping run and streamline it, but costing another five grand a month too," Madders warns.

Howard Weale, vice president of global systems integrators at Cockroach Labs, adds that more complicated environments can prove essential to achieving the desired availability, seconding the importance of skills and experience.

Discovering "where the bodies are buried" with tooling is critical, even if it looks like things still work – prior to a recompile, for instance.

How long is a piece of string?

Weale says leaders must fit in workload migrations according to the specific strategic, technical, and tactical business objectives that matter most, in the order that makes the best sense.

A tactical technical decision might be driven by the desire to avoid an expensive hardware refresh due in 18 months. Which applications consume the most energy, or need the most CPU or GPU power?

A business strategic decision might be about a backlog of features, or function requests that stop you from making inroads against rivals "eating your lunch". "The different workloads fall into this quadrant-type architecture where you can focus," Weale explains.
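One way to picture that quadrant is as two axes, business versus technical and strategic versus tactical, with each workload bucketed accordingly. A minimal sketch, with invented workloads and labels:

```python
# Hypothetical quadrant bucketing: each workload gets a (driver, horizon) pair.
# Example workloads and their placements are invented for illustration.

workloads = {
    "trading-api":   ("business", "strategic"),  # feature backlog blocks growth
    "batch-reports": ("technical", "tactical"),  # hardware refresh due in 18 months
    "crm":           ("business", "tactical"),
    "ci-farm":       ("technical", "strategic"),
}

quadrants: dict[tuple[str, str], list[str]] = {}
for name, key in workloads.items():
    quadrants.setdefault(key, []).append(name)

for (driver, horizon), names in sorted(quadrants.items()):
    print(f"{driver}/{horizon}: {', '.join(sorted(names))}")
```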

Then analyze application dependencies and affinities, prioritizing early runs on the board to keep projects moving, and overlay these onto the objectives quadrant. It's also important to consider what can move separately and what must move together.
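The what-moves-together question can start as a simple graph exercise: applications connected by a dependency, in either direction, land in the same move group. The sketch below shows the idea on an invented dependency map.

```python
# Group applications into move groups: anything connected by a dependency,
# in either direction, should move together. Dependency data is invented.

from collections import defaultdict

depends_on = {
    "web-frontend": ["orders-api"],
    "orders-api": ["orders-db"],
    "wiki": [],
    "orders-db": [],
}

# Build an undirected adjacency map, then find connected components.
adjacent = defaultdict(set)
for app, deps in depends_on.items():
    adjacent[app]  # ensure isolated apps still appear
    for dep in deps:
        adjacent[app].add(dep)
        adjacent[dep].add(app)

seen, groups = set(), []
for app in adjacent:
    if app in seen:
        continue
    stack, group = [app], set()
    while stack:
        node = stack.pop()
        if node in group:
            continue
        group.add(node)
        stack.extend(adjacent[node] - group)
    seen |= group
    groups.append(sorted(group))

for i, group in enumerate(groups, 1):
    print(f"Move group {i}: {', '.join(group)}")
```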

"If you're smaller, it does overly index on choosing the right tools, getting the right partner, less so with the larger enterprises," Weale says.

Typically, "lift-and-shift" virtual machine (VM) migrations can go first, obviously solving for external dependencies and the like -- such as if external calls to hardware encryption devices are needed, entailing a cloud service with encryption and source-code changes.

Next will typically be database migrations. Customers may want a cloud-native database and one that understands nodes, versus running things through a single CPU. "3,000 nodes running one database is not a concept familiar to all legacy database users," Weale notes.

"And look carefully at what hasn't run in production for a year or two, versus what just sits in libraries. If 30-40% of the estate doesn't need bringing, your project is much smaller."

Resource-intensive workloads are typically last, with mainframes being particularly difficult, time-consuming, and complex.

Consider which business units run which programs and applications. With a bank, for instance, treasury might go first, then wealth management and corporate investment. Core banking activities, with major regulatory requirements, typically come last of all.

"Have business-unit owners expose their applications to you. Decide disposition strategy for each of those applications," Weale says. "More strategic applications driving top-line revenue growth might need refactoring and different tools, or entire rewrites."

Depending on company size, more or fewer resources will be available, a fact of which leaders should always be mindful.

Fleur Doidge is a journalist with more than twenty years of experience, mainly writing features and news for B2B technology or business magazines and websites. She writes on a shifting assortment of topics, including the IT reseller channel, manufacturing, datacentre, cloud computing and communications. You can follow Fleur on Twitter.