Since joining the EVO:RAIL team eight short and eventful weeks ago, I’ve been kept awake at night thinking about hyper-converged virtualization – because when I’m excited about a technology from VMware, I often can’t locate the off switch for my brain! Having spent the last couple of weeks on the developer side of the hands-on lab, and attending a local Proof of Concept meeting, I’m starting to get a feel for what I think needs to be asked. In addition, I’m doing a round of VMUGs and podcasts – and I’ve been getting all manner of questions fired at me. Some questions I can answer right now, some I have to find the answers for, and for others I need to sit back and have a good think about what the right answer would be! This is my own personal view on what I think customers should be asking themselves, and an attempt to relate those questions back to EVO:RAIL.

This whole process began with my being thrown in at the deep end speaking to my own colleagues at the VMware Tech Summit EXPO (it’s like an internal-only VMworld for SEs/TAMs) and then later on the floor of the Solutions Exchange at VMworld. Incidentally, that was my first proper bit of booth babe duty in my life. I left the event with a tremendous amount of respect for the folks who do these huge EXPO-style shows. It’s incredibly hard work, but for me it was made easy by the sheer volume of interest in EVO:RAIL. I was glad I wasn’t in one of those small booths on the periphery of the event doing 10am-5pm straight for four days!
One of my early jokes about convergence and hyper-convergence was that, despite the name, as an industry no one has ‘converged’ in the same way, either from a technology standpoint or a delivery model. In short, the converged marketplace is ironically a very divergent one – hyper-divergent, even. Geddit?
Q. What’s the architecture model for your vendor’s (hyper)convergence?
If you look at the converged marketplace you will find VCE vBlock, NetApp/Cisco FlexPod, HP Matrix, Dell vStart and so on. Each of those solutions is constructed very differently, and so is the go-to-market strategy. A converged model is basically one that brings together what I like to call the three S’s of Servers/Switches/Storage, each as discrete physical components, albeit made much easier to deploy than buying all the bits separately and rigging them together.
Similarly, on the surface hyper-converged systems all look very similar, but the servers and storage are delivered within the context of a single chassis, where a combination of local HDDs/SSDs is brought together to provide the storage for virtual machines. This model generally benefits from a lower overall entry price point, and allows you to scale out (for compute AND storage) by adding more appliances. Interestingly, most hyper-converged solutions do not bundle a physical switch – that’s something you are supposed to have already. It’s well worth spending time researching the network requirements, both in terms of bandwidth and the features required on that physical switch, before jumping in with both feet. [More about these network requirements in later posts!]
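If you want a quick way to sanity-check the bandwidth side of that, here’s a minimal PowerCLI sketch – the vCenter and host names are hypothetical – that lists each physical uplink on a host along with its negotiated link speed, so you can confirm you really have the 10GbE connectivity most of these appliances expect:

```powershell
# Connect to an existing vCenter (server name is hypothetical)
Connect-VIServer -Server "vcenter.corp.local"

# List every physical NIC on the host with its negotiated link speed,
# to spot any uplink still running at 1GbE
Get-VMHost -Name "esx01.corp.local" |
    Get-VMHostNetworkAdapter -Physical |
    Select-Object Name, BitRatePerSec
```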
For me the big architectural difference between hyper-converged vendors is that most hyper-converged systems deploy some type of “Controller” VM that resides on each physical appliance – call it a virtual appliance if you like – running on top of the physical box. This “Controller” VM is granted access to the underlying physical storage, and by hook or by crook it then presents that storage back in a loop-back fashion – not just to the host it’s running on, but to the entire cluster. This has to be done using a protocol recognizable by the hypervisor (in my case vSphere), most commonly an NFS export, although some vendors use iSCSI – and some support SMB because they support Microsoft Hyper-V (Boo, hiss…).
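To make that loop-back presentation a little more concrete, here’s a hedged PowerCLI sketch of what mounting such a controller-served datastore typically looks like from the vSphere side. The cluster name, controller IP and export path are entirely hypothetical, and the exact mechanics vary from vendor to vendor (most automate this step for you):

```powershell
# Every host in the cluster mounts the same NFS export, which is served
# by the "Controller" VM running on one of those very hosts (loop-back).
# Cluster name, IP address, export path and datastore name are hypothetical.
Get-Cluster -Name "HC-Cluster" | Get-VMHost | ForEach-Object {
    New-Datastore -VMHost $_ -Nfs -Name "hc-datastore01" `
        -NfsHost "192.168.10.50" -Path "/exports/hc-datastore01"
}
```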
In contrast, EVO:RAIL uses VMware’s Virtual SAN, which is embedded into the vSphere platform and resides in the VMware ESXi kernel. Just to be crystal clear: there’s no “Controller” VM in EVO:RAIL. Once the EVO:RAIL configuration is completed you have precisely the same version of vSphere, vCenter, ESXi, and Virtual SAN you would have if you’d taken the longer route of building your own VSAN from the HCL, or if you’d acquired a VSAN Ready Node and manually installed and configured all the software.
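For comparison, the final step of that “longer route” EVO:RAIL automates boils down to something like the following PowerCLI sketch (the cluster name is hypothetical, and it assumes a PowerCLI release with the Virtual SAN parameters, i.e. 5.5 R2 or later):

```powershell
# Manually enabling Virtual SAN on an existing cluster - the step that
# EVO:RAIL performs for you during its initial configuration.
# The cluster name is hypothetical.
Get-Cluster -Name "Management-Cluster" |
    Set-Cluster -VsanEnabled:$true -VsanDiskClaimMode Automatic -Confirm:$false
```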
Now, I’m NOT saying that one architecture is better than the other – in the current climate that would be incendiary. What I am saying is that they are DIFFERENT. Customers will need to look at these different approaches and decide for themselves which offers the best match for their needs and requirements – balanced against the simplicity of deployment and support. Without beating my chest too much about VMware, I think you’ll know which one I regard as the more elegant approach. 🙂
Q. Does your hyper-convergence vendor seek to complement or supplant your existing infrastructure?
I’m uneasy with the idea that hyper-convergence can produce the “Jesus Appliance” that is the panacea for all your problems. I’ve been around the industry long enough to know that every 3 or 4 years a magic pill comes along that promises to solve all datacenter problems. The reality is that most new game-changing technologies fix one set of challenges – only to add new ones for the business to wrestle with. Such is life.
Personally, I think it’s a mistake to write the converged “Three S” model of Servers/Switches/Storage out of the equation altogether. For a certain set of workloads and customer requirements I think there’s still a truckload of value in the model – whether that means building your three-stack model using different vendors or going down the converged route with something like a FlexPod or vBlock. I see hyper-convergence as complementing a customer’s existing infrastructure model rather than utterly supplanting it (although there will be use cases where it can and does).
I’m pleased to say that there is some healthy skepticism and debate out there around hyper-convergence – a good place to start is with a dose of ‘wake up and smell the bacon’. I think Christian Mohn’s article “Opinion: Is Hyper-converged the be-all, end-all? No.” is just the sort of reality check our community is famous for. Christian correctly points out that with the hyper-converged model, as you add more compute you add more storage, and as you add more storage you add more compute. What about a customer who doesn’t consume these resources in equal measure? What about a customer whose data footprint is increasing faster than their compute needs? In a way that’s the point of hyper-convergence – it’s meant to simplify your consumption. But if your consumption is more nuanced than hyper-convergence allows for, will it always be the best fit? There’s a danger (as with all solutions) that if all you have is a hammer, every problem looks like a nail.
I found one of the most well-argued and well-articulated counter-viewpoints on hyper-convergence to be Andy Warfield of CoHo Data’s “Hyperconvergence is a lazy ideal”. In fact, I’d go so far as to say that Andy’s post is one of the best-written blog posts I’ve read in a long while. And I’m a coffee drinker. 🙂 If you want a contrasting perspective, then Chuck Hollis’s deconstruction of Storage Swiss, “The Problem with Storage Swiss Analysis on VSAN”, is a good read. If you’re looking for an independent comparison of differing hyper-converged solutions, Trevor Pott’s summary on Spiceworks is both an interesting and amusing read. Just to be clear, I don’t agree with everything these guys say, but they make for interesting reading for precisely that reason. I like people who make me think, and who make me laugh. Generally, I’m against the concept of mindless agreement – I think it leads to dangerous tunnel vision.
As for myself, a conversation I had with a customer at VMworld might illustrate my point better. They are a large holding company in the US, with a couple of very densely populated datacenters using the three S’ model – but they have over 300 subsidiaries dotted around the country. Historically, the subsidiaries have been “managed” as separate entities. They’ve even had their own budgets to blow on IT resources, and for legal purposes they’ve had clear blue water from the holding company. Unfortunately, this has led to non-standard configurations at each of the subsidiaries, lots of re-inventing the wheel, and wider support issues, as each subsidiary makes its own decisions. The subsidiaries are used to having their own gear on site and they regard that as an important “asset” (a concept I find difficult to understand, but I’ve learned to bend with the wind when it comes to ideologically held beliefs – for me, anything that devalues and depreciates over time can hardly be classed as an asset). But it makes support a nightmare, every other month the gear at one or another subsidiary is expiring – and they keep asking the holding company for advice about what to do in the future…
Now, one solution would be for the holding company to become a private cloud provider – hosting each subsidiary in a multi-tenancy cloud. However, there are some upfront cost issues to consider here, and it breaks with the history of on-premise(s) resources. Additionally, some subsidiaries could choose to ignore this private cloud altogether, and carry on spending their money upgrading local gear. And to the holding company there is a perceived risk if the subsidiaries don’t buy in… What if you build a cloud and the ‘owner-occupiers’ choose to stay in their own homes, rather than ‘renting’ an apartment in the sky?
So for them a combination of Three-S convergence at the corporate datacenter with hyper-convergence at the subsidiaries is a model that works well. The on-ramp is not too steep. The holding company could offer EVO:RAIL as a solution to the subsidiaries – whilst allowing each subsidiary to select its preferred supplier from the many Qualified EVO:RAIL Partners (QEPs). As each subsidiary’s gear goes out of date, the holding company can offer it EVO:RAIL – and over time that’s how they will get a consistently configured environment, whilst the subsidiary holds on to what it values. Yes, this sounds like I’m promoting EVO:RAIL, but hey, I’m on that team, so you would expect me to say that! 🙂
The point of this little story is that it demonstrates that simplistic “SAN Killer” statements are to be treated with an air of caution. There’s plenty of life in the old three S’ dog yet. It’s like Pat Gelsinger said at VMworld – so far IT has been all about either/or equations, and that’s a model that leads to some unhappy compromises in the datacenter. At VMware we want to allow customers to have their cake and eat it – one size does not fit all. 🙂
Q. Does the hyper-converged vendor’s business model resonate with you?
I’m not a big fan of touting the “vendor lock-in” line. It’s generally associated with FUD arguments. Occasionally, I’ve heard a customer raise concerns about vendor lock-in with VMware, only to ignore the other places where they seem totally comfortable with being ‘shackled’ to another vendor. Ah, they say – that’s part of our “strategy”, as if by labeling something a “strategy” you can automagically make it disappear in a puff of logic and verbal gymnastics. 🙂
What I do find interesting is that 99% of hyper-converged vendors are the sole supplier of their appliance. After all, it’s much more challenging to develop a partner-led model than merely to sign up channel partners. But if you’re a company with the sort of influence and contacts that VMware has, it can be done. It’s not the first time that VMware has helped create multi-vendor programs that bring technology to market – Site Recovery Manager, VAAI, and VASA are all great examples. More importantly, I believe that by not getting into the hardware game directly with EVO:RAIL, VMware has created a competitive marketplace – both between the partners, and with the rest of the hyper-converged industry. I’d go so far as to say that it isn’t VMware who is competing directly in the hyper-converged market, but its partners, and I think this is brilliant for customers. Competition drives innovation and, in the main, makes for more interesting negotiations on price. And it always is a negotiation, isn’t it? I mean, if you’re buying 1,000 hyper-converged appliances you’d expect to negotiate, wouldn’t you? If you’re buying just one – well, that’s a different matter…
But putting that all aside, I think the main benefit of the EVO:RAIL business model is being able to deal with truly global hardware providers who have been in the game for decades. For some customers it means they can also leverage their existing relationships with the likes of Dell, EMC, HP, Fujitsu and so on.
Q. Are your hyper-converged appliance and hypervisor licenses included in one single SKU?
You might be surprised to know that some hyper-converged appliances ship with no hypervisor at all. Instead, you have to use secondary tools to get the hypervisor onto the unit. To be fair, from what I’ve heard this is a relatively easy and trivial step – but it is an additional step nonetheless. Other vendors install an evaluation copy of VMware ESXi, and leave it to the customer to bring licenses to the table. That’s fine if you have an ELA, or enough CPU socket licenses left in vCenter to just add the host and license it. In contrast, EVO:RAIL is an all-inclusive licensing model. The box ships with vSphere Enterprise Plus 5.5 U2 and includes the licenses needed for vCenter, ESXi, VMware VSAN and Log Insight. License the appliance, and you’ve licensed the entire stack. The setup should take less than 15 minutes, if everything is in place from a networking perspective. It’s a deployment model that is dead simple, and could potentially redefine how folks acquire the vSphere platform.
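To put that “bring your own licenses” step in context, this is roughly what it looks like in PowerCLI when you add an evaluation-mode host to vCenter and then apply one of your spare socket licenses – the datacenter, host name, credentials and key below are all placeholders:

```powershell
# Add a host that shipped with an evaluation copy of ESXi to vCenter,
# then assign an existing license to it.
# Datacenter, host name, credentials and license key are placeholders.
Add-VMHost -Name "esx02.corp.local" -Location (Get-Datacenter -Name "DC01") `
    -User "root" -Password "VMware1!" -Force

Set-VMHost -VMHost (Get-VMHost -Name "esx02.corp.local") `
    -LicenseKey "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
```

With EVO:RAIL none of that is necessary – the licensing is applied as part of the appliance configuration.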
[This part actually comes from a previous blog post – but I felt it was worth repeating here.]
The truth is that installing VMware ESXi is a totally trivial event – the fun starts in the post-configuration phases. That’s why I think EVO:RAIL will be successful. Looking back over the years, I’ve personally done a lot of automation. It started with simple “bash” shell scripts in ESX 2.x, and then evolved into using the UDA to install ESXi 3.x from a PXE boot environment with the esxcfg- commands. Around the time of vSphere 4 I moved away from bash shell scripting to building out environments with PowerCLI. It has literally hundreds of cmdlets and can handle not just ESXi but vCenter configuration too. I burned a lot of time building and testing these various deployment methods. Now EVO:RAIL has come along and lets me do all of that in less than 15 minutes.
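To give a flavour of the post-configuration drudgery I’m talking about, here’s a stripped-down fragment of the kind of PowerCLI build script I mean – the host name, NTP server, NICs and VLAN ID are purely illustrative, and a real script would run to hundreds of lines:

```powershell
# Typical post-install chores: set up time sync and carve out networking.
# Host name, NTP server, NICs and VLAN ID are all illustrative.
$vmhost = Get-VMHost -Name "esx01.corp.local"

# Point the host at an NTP source and start the ntpd service
Add-VMHostNtpServer -VMHost $vmhost -NtpServer "0.pool.ntp.org"
Get-VMHostService -VMHost $vmhost |
    Where-Object { $_.Key -eq "ntpd" } |
    Start-VMHostService

# Create a second vSwitch with two uplinks and a tagged VM port group
$vswitch = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch1" -Nic "vmnic2","vmnic3"
New-VirtualPortGroup -VirtualSwitch $vswitch -Name "Production" -VLanId 100
```

Multiply that by every setting on every host and you can see where the time goes – and why having the appliance do it all in one pass is so appealing.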
For me that doesn’t mean all that previous hard work has been for naught – after all, I believe there are still legs in other models for delivering infrastructure. I will still support those methods, but what EVO:RAIL has delivered is a much more automated, standardized and simpler way of doing the same thing. As a former independent, it always slightly irked me that VMware didn’t have a pre-packaged, shrink-wrapped method of putting down vSphere, and that it was left to the community to develop its own methodology. The trouble with that is that everyone has his or her own personal taste on how it should be done. And we all know that leads to things not being standard between organizations, and in some cases within organizations. Despite ITIL and change-management controls, configuration drift from one BU or geo to another is a reality for many organizations. I see EVO:RAIL as offering not just a hyper-converged consumption model, but an opportunity to standardize – especially for companies with lots of branch offices and remote locations.