Q&A with David Cartwright
The latest member of our expert panel to sit in the editor's hot seat is David Cartwright, who reflects on a varied career
1) Describe your background: how did you get into this business?
Since I graduated in the early 1990s I've followed a fairly normal path from IT grunt to IT manager, with a few years in the middle where I shifted into full-time technical writing (anyone remember Network Week, Network World and Internet Magazine?). In 2009 I started working for companies with proper data centre setups (by “proper” I mean dozens of cabinets, diverse data centres, global networks, that kind of thing). Around that time the companies I was working with were moving towards virtualisation – primarily VMware. Although I was first and foremost a network guy, I moved to a local cloud company and, via a pile of training courses, refreshed my server skills and got into this server virtualisation lark.
2) When did you first become aware of cloud computing? What were your first impressions?
I came across cloud computing around 2010 or 2011 when the vendors started to push it as a concept and the corporates I was working with began to see it as a believable idea. If I'm being honest I often think of cloud as one of those concepts where a new name is given to something that's been around for a while: it's just an evolution of a managed server model into a virtualised environment with a funky multi-tenanted wrapper around it. That's not a bad thing, though: managed servers are a great concept, and adding virtualisation is an even better one because it means the provider can do more processing with the same amount of tin and hence bring the price down for the customer.
I do find it a bit barking that we're now seeing "Bare Metal Cloud" though – that's stretching the renaming of a dedicated managed server a bit far.
3) You’ve been on both sides of the fence – as a customer and a provider. How do you think the two sides compare? Do they understand each other or are there areas of conflict/misunderstandings?
The problem with being a supplier of any managed service is that there's always a selection of customers who want something that you don't provide. Or something that you sort-of provide, but not in the form that they're hoping for. Since it's pretty easy and cheap to set up a cloud provision business there's a lot of competition, which means that vendors are reluctant to say “no”. There's a danger, then, that you can spend a lot of time doing tweaks and development work to accommodate customers who are only going to make you a few thousand quid a year.
From a customer's point of view, it's hard to find a provider with the right combination of services: geographic coverage (including consideration of data export regulations), server quantities, high-speed connectivity between customer and cloud infrastructure, the ability to split the world into different resource pools, and self-defined IP address ranges. I can think of companies that would probably make the move if a reasonably local, flexible cloud provider could support the huge volume of servers they needed, but these requirements tend to be mutually exclusive: you can have flexible or you can have big, but not usually both.
4) What’s the craziest thing anyone has said to you about cloud?
That virtual networking is the next big thing. You know, this idea of emulating a layer 2 network on top of a layer 3 network so you can have what looks like direct connectivity between your geographically distant systems. Layer 2 in the wide area makes as much sense as trying to make IP networking run on an ATM network. And if you really want it, go to someone like GT-T and buy a layer 2 service, or just bite the bullet and get an MPLS network like everybody else. Remember also that the requirement for layer 2 connectivity often derives from wanting to cluster stuff like databases – which tend to have connection latency requirements that fall foul of the laws of physics in the wide area.
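To put a rough number on that laws-of-physics point, here's a minimal sketch (my own illustrative figures, not from the interview): light in fibre travels at roughly two-thirds of c, so distance alone sets a hard floor on round-trip time, before any routing detours or queueing delay.

```python
# Best-case fibre latency: distance alone sets a floor on RTT.
# Signal speed in fibre is roughly 200,000 km/s (about two-thirds of c).

FIBRE_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s expressed per millisecond

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time over a straight fibre run, ignoring
    routing detours, serialisation and queueing delay."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

# A stretched cluster interconnect spanning ~300 km (say, London to
# Manchester) can never get its RTT below:
print(round(min_rtt_ms(300), 1))  # 3.0 ms -- well above the
# sub-millisecond heartbeat/commit latencies many clustered databases want
```

Whatever the overlay technology does at layer 2, it can't buy that time back.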
There's no doubt that it'll be useful at a service provider level, where the customers don't have to see it or understand it. I even know someone who understands it and has used it in anger (he's a dead clever former cloud guy who's now head of technology services for a largish retail company).
Virtual networking isn't the next big thing. It's too hard. It's a bit like VLANs were back in the 1990s – a black art that nobody really understood for a few years, but which eventually became clear. Yes, it'll be a prominent part of what we do in a few years once people understand it, but not yet.
5) What do you, personally, think are the biggest technical challenges facing cloud?
Oooh, that's a hard one. I can't help thinking that the actual infrastructure side of life has been pretty well bottomed out: the likes of Amazon and Microsoft have demonstrated that virtualised server installations can scale very well, and the plethora of small cloud providers have shown that with inexpensive hardware and connectivity they can compete in a highly populated market.
The main challenge is, in my mind anyway, connectivity. The perception is that the cost of connectivity has come down over the years. While that's true to an extent, the bandwidth requirement has gone up too. So yes, you pay far less today for a 10Mbit/s link than you would have done five years ago, but you now probably need 15Mbit/s, or maybe 20Mbit/s, as all your applications have become hungrier for bandwidth. Latency's a big deal too – your cloud installation is generally going to be quite a few milliseconds, latency-wise, from your users and (if you have one) your DR site. This is why we're seeing the likes of Riverbed putting more and more kit into cloud providers' data centres – by reducing traffic you can get more over a given link, and by being clever with protocol spoofing you can make applications think that the interconnect distance is shorter than it is.
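As a hedged illustration of why latency, not just bandwidth, is the limit (the figures here are my own worked example): a single TCP stream can never move data faster than its window size divided by the round-trip time, which is exactly the sort of ceiling that WAN optimisation kit works around by acknowledging packets locally.

```python
# Theoretical throughput ceiling for a single TCP stream:
# at most one full window of data can be in flight per round trip,
# so throughput <= window_size / RTT regardless of link capacity.

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound for one TCP stream: (window in bits) / (RTT in seconds),
    expressed in Mbit/s."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

# A classic 64 KB window over a 40 ms path to a distant cloud data centre:
print(round(max_tcp_throughput_mbps(65536, 40), 1))  # ~13.1 Mbit/s
# The 100 Mbit/s pipe you're paying for is irrelevant to this one flow.
```

Halve the apparent RTT (which is what protocol spoofing effectively does) and the ceiling for that flow doubles, without touching the link itself.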
Yes, the servers and storage will have to get faster, but that's not a technological challenge – it's more of a financial one, because we can count on the technology becoming available; it's just a case of making the margin you make from the customer fit the cheques you need to keep sending to your Cisco dealer. The connectivity's going to be the weak link for a while yet.
6) What do you think the cloud landscape will look like in five years?
Integrated, with the telcos in control and a focus on software (by which I mean individual apps) as a service.
Put yourself in the shoes of a company that wants to move to the cloud. You're a reasonably sized company and you want to implement disaster recovery, so you want different data centres in different geographical locations, and preferably with different providers. You need diverse links between the locations, each one provided by a different telco in the interests of resilience.
To a certain extent you can do this, but you have to do some of it yourself. Signing up different suppliers is fine, but getting them connected together will be fun: only a handful will give you anything other than a VPN facility for connecting to the outside world, and you often have to set up the VPN yourself on the virtual edge firewalls in your cloud setups.
Imagine a world, though, where the cloud providers are in bed with the telcos. The cloud providers connect natively, with high-speed links, into the telco world, and the customers do the same. A friend and colleague of mine, who lives a double life as a genius in the field of dreaming up clever stuff and making money from it, pointed out to me a while back that if you control the hub – the point through which everything flows – you have the opportunity to charge people for it.
Imagine you're a telco with 20 customer sites and 10 cloud providers connected to your network. Some of the cloud providers compete with each other on large volume, small margin stuff (storage, basic server provision, backups, and so on) while the others are monopolies with niche products (so maybe they have an Oracle Financials supplier, and a SharePoint supplier, and a SAP supplier). The telco is the focus for distribution of the applications from the suppliers to the customers, and can control who can see what and when. By having multiple layers of telcos it's imaginable that some kind of brokerage could go on whereby a customer could request a type of service and have it dished up by a telco other than the one they're directly connected to. And thrown into the mix is the ability to have customers connected together natively too – either because two sites belong to the same customer or even because one of the customers has a home-grown package that they've sold to someone else as a managed service.
The infrastructure of the average cloud setup will be largely unchanged. Yes, we'll have more software defined networking and that kind of thing; yes, the storage side of things will have evolved so that we can cram even more data than before on the same amount of disk; yes, everything will be faster and it'll be possible to grow and shrink the resource a customer uses in milliseconds without server reboots or user interruption; yes, we'll still have the nonsense of rebadging 1990s concepts with the word "cloud" (à la “Bare Metal”); and yes, everything will somehow be bigger but will take less power. But that's just evolution, not revolution.
But alongside all this evolutionary stuff the networks will be faster and they'll be integrated. Customers will be able to buy their apps from anywhere and just turn the service on via their telco's web GUI, and maybe the overall amount of storage everyone has will reduce because with integration comes the ability for a supplier to say: “I don't need to store this, because those three guys over there already do”.
Oh, and there's every chance it won't be called “cloud” any more. It'll just be “our infrastructure”.