Navigating devirtualization as businesses move away from the cloud

A CGI visualization of cloud computing, with an isometric view of a purple and blue cloud linked to seven glowing cube nodes, to represent devirtualization and revirtualization.
(Image credit: Getty Images)

The landscape of virtualization is rapidly shifting, with Gartner having recently noted that infrastructure and operations (I&O) teams are actively exploring their public, distributed and private cloud options.

Some organizations are leaning towards devirtualization, where workloads are migrated from virtual to physical environments; others towards revirtualization, where workloads are moved back to virtual environments or to a different provider.

According to the analyst firm, organizations are rethinking their cloud strategies due to vendor license changes – in particular, Broadcom’s acquisition of VMware, which drove some customers to consider alternative virtualization providers.

Vendor lock-in, privacy and regulation are also concerns, as are the increased space, power and cooling requirements that come with power-hungry technologies such as AI.

David Terrar, CEO at the Tech Industry Forum, believes another factor behind these changes is companies that have jumped from on-premises to cloud infrastructure with a less than optimal plan, while Andrew Buss, a senior research director at IDC, points to evolving views around cloud.

“We’ve largely recognized that cloud is an essential tool for delivering IT services but it’s not the be all and end all,” he says. “We still see a lot of cloud-first strategies, but there’s been a change from the wave of cloud-first mandates and chants of ‘on-prem is dead’.

“Our research generally shows a lot of companies want to maintain a fair amount of private IT infrastructure, whether that’s for privacy, sovereignty or other business reasons. But they also want to use public cloud for certain workloads, so overall it’s now seen as a tool in the armory.”

Balancing revirtualization and devirtualization

Very few organizations are choosing to fully revirtualize or devirtualize; rather, they’re balancing their setups based on business, technical and workload requirements.

What an organization chooses depends on a number of factors, including cost, simplification and performance, notes David Hewitt, a member of techUK’s Cloud Leadership Committee and IBM UKI’s automation platform leader.

“Revirtualization is often adopted by those wanting to bolster resource efficiency, optimize costs and to help with overall compliance, whereas devirtualization is used by those wanting to adopt modern architectures while reducing complexity,” Hewitt explains. “The latter has been identified as a long-term strategy for many organizations.”

Devirtualization can be a challenging option, however, as committee chair and KPMG partner Adrian Bradley explains.


“We’ve grown used to being able to customize our resource allocation to application needs and losing this flexibility, even with services such as bare metal as a service (BMaaS), will result in significant overprovision of capacity. A more common path is for enterprises to take steps to avoid lock-in or migrate to providers where they can secure a better deal while minimizing rearchitecture. The market is seeing a greater level of change in providers than we’ve seen for some time.”

Most organizations are also taking a broad view over a multi-year horizon, Bradley notes. Migrations are expensive and introduce risk, so they need to deliver commensurate value; however, cost inefficiency develops quickly. Few organizations are therefore leaping into wholesale changes of strategy, instead reviewing it more frequently to ensure they strike the right balance.

As part of this, some of the decisions IT leaders are taking are changing, he says – specifically, focusing on where in the market they can get the best value for commodity storage and compute without jeopardizing access to innovation.

“This will continue to evolve, especially given the UK Competition and Markets Authority’s (CMA’s) ongoing hyperscaler investigation, which is particularly focusing on licensing,” Bradley says.

Reevaluating your IT stack

When making any decision, business outcomes should be prioritized over technology. Focus on your organization’s needs and goals – are you seeking to improve agility, reduce costs or enhance security? Ultimately, the decision should be driven by a clear understanding of your business objectives and a willingness to adapt.

When looking into options it’s important to evaluate hypervisors, hyperconverged infrastructure (HCI) and containerization, as each approach offers unique pros and cons that can impact your IT environment.

In the case of hypervisors, benefits include hardware virtualization and strong security and isolation, but they may not be the most efficient option for all use cases, especially cloud-native applications, says Bradley.

HCI simplifies management with a single-pane-of-glass view of compute, storage and networking, offers ease of deployment and compliance – especially in distributed environments – and enables hybrid workload portability and placement options. However, it can require tight hardware and software integration, and may not be efficient for large-scale, low-latency applications with mixed workloads.

Finally, containerization provides a lightweight and portable environment for applications, enables rapid scaling and agile development and improves resource utilization compared to virtual machines.

“Challenges include ecosystem immaturity and fragmentation, and potential security concerns due to less isolation than virtual machines,” Bradley notes.

Best practices for transitioning technologies

Whatever transition you’re making, it’s key that business continuity is maintained.

Cloud technology influencer Richard Simon, CTO for Cloud Professional Services at T-Systems International, recommends that migrations initially focus on the ‘low-hanging fruit’: workloads that can be migrated easily.

“This not only avoids any ‘bad press’ internally or externally if a major application migration wasn’t successful, but also gives all IT teams – and end users and stakeholders – vital experience with the migration process,” he explains.
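To make the idea of a ‘first wave’ concrete, here is a minimal Python sketch of how a team might score workloads by migration effort and surface the easiest candidates. The attributes, weights and example workloads are illustrative assumptions, not anything prescribed by Simon or T-Systems:

```python
# Hypothetical sketch: rank workloads so the lowest-risk migrations go first.
# Field names and scoring weights are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    dependencies: int       # number of tightly coupled systems
    stateful: bool          # databases, shared file stores, etc.
    business_critical: bool

def migration_effort(w: Workload) -> int:
    """Lower score = lower risk, better candidate for an early migration wave."""
    score = w.dependencies
    if w.stateful:
        score += 3          # state usually means more planning and downtime risk
    if w.business_critical:
        score += 5          # failures here create the internal 'bad press'
    return score

workloads = [
    Workload("internal-wiki", dependencies=1, stateful=False, business_critical=False),
    Workload("erp-core", dependencies=8, stateful=True, business_critical=True),
    Workload("batch-reporting", dependencies=2, stateful=True, business_critical=False),
]

# First wave: the 'low-hanging fruit' with the lowest effort scores.
for w in sorted(workloads, key=migration_effort):
    print(f"{w.name}: effort score {migration_effort(w)}")
```

Even a rough scoring model like this gives IT teams, end users and stakeholders a shared, explainable basis for sequencing the migration.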

Any transition would require a top-level framework and strategy to ensure security, future scalability and efficiency, continues Hewitt, explaining that interoperability and compatibility will need to be at the core of the strategy.

“This can be executed via policies and standards with a clear focus on use cases for various virtualization types. This should also encompass resource and performance management, and automation technologies can support this. Once a top-level strategy is formed, there are additional automation tools that can help to streamline provisioning and updates.”
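As a rough illustration of what encoding such policies might look like in practice, the following Python sketch maps workload attributes to a target platform so placement decisions are consistent and auditable. The rules, attribute names and platform labels are assumptions for illustration, not taken from Hewitt’s framework:

```python
# Hypothetical sketch: express a top-level placement policy as code so decisions
# about containers, hypervisors or bare metal are consistent and reviewable.
# The policy rules and workload attributes below are illustrative assumptions.
from typing import TypedDict

class WorkloadProfile(TypedDict):
    name: str
    cloud_native: bool      # built for containers / microservices
    data_residency: str     # e.g. "uk-only" or "any"
    latency_sensitive: bool

def placement_policy(profile: WorkloadProfile) -> str:
    """Return the target platform a workload should be provisioned on."""
    if profile["latency_sensitive"] and not profile["cloud_native"]:
        return "bare metal (devirtualized)"
    if profile["data_residency"] == "uk-only":
        return "private virtualized estate"
    if profile["cloud_native"]:
        return "public cloud container platform"
    return "public cloud virtual machines"

for wl in (
    {"name": "trading-engine", "cloud_native": False, "data_residency": "any", "latency_sensitive": True},
    {"name": "citizen-portal", "cloud_native": True, "data_residency": "uk-only", "latency_sensitive": False},
):
    print(wl["name"], "->", placement_policy(wl))
```

Automation and provisioning tooling can then consume the same rules, so the strategy set at the top level is what actually gets enforced when workloads are deployed or updated.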

Looking forward, it’s crucial to provide ongoing learning opportunities for IT teams, creating a culture focused on adapting to the latest trends and emerging technologies.

Traditional I&O skills aren’t enough for the modern hybrid cloud world, notes Helen Jackson, business manager at support and service provider Shaping Cloud and a member of techUK’s Cloud Leadership Committee.

“Key areas of development should include cloud architecture and governance, automation and Infrastructure-as-Code (IaC), FinOps, and security and compliance. Organizations must invest in upskilling, or risk having legacy infrastructure teams that struggle to manage modern cloud environments,” she concludes.

Keri Allan

Keri Allan is a freelancer with 20 years of experience writing about technology and has written for publications including the Guardian, the Sunday Times, CIO, E&T and Arabian Computer News. She specialises in areas including the cloud, IoT, AI, machine learning and digital transformation.