
Software Defined Storage makes Hyper-Converged Infrastructure (and Dinner) Better


This whole thing started 2 years ago when a friend told me to try a service that delivered all of the ingredients to make a great home-cooked meal. Seemed like a great idea: the food arrives with everything you need to cook a start-to-finish meal, and all you have to do is chop veggies and throw the contents of the box into a pan. I have so many family members to cook for that I always end up overbuying some ingredient in an effort to feed the masses. Whether it was an herb, an interesting cheese, or some other fringe item, there was always something left on the kitchen counter after the rest had been consumed.

Several weeks ago my wife convinced me to try one of these delivery services. At first I was skeptical that it would work for us; with our busy schedules I was unsure we would both be home to enjoy it. We signed up, and I’m not looking back. It is so easy… we can have a gourmet meal in half an hour and skip the two hours it would have taken to research recipes, go to the store, collect ingredients, and finally clean up the leftover ingredients produced by recipes that call for a sprig of thyme despite the fact that you can’t purchase thyme by the sprig. I guess thyme waits for no man. Sorry, I’m a dad and that is a very bad dad joke.

What does this have to do with hyper-converged infrastructure? Simply put, we have been shopping at the IT grocery store for way too long, purchasing memory, cores, and storage independently. The natural result of our sizing exercises, coupled with applications’ ever-changing requirements, is that we end up either with a pound of cheese left on the counter or using our google-fu to find a substitute for molasses.

Hyper-converged infrastructure lets me purchase exactly what I need today without worrying too much about what may be left over. Running out of memory over the course of three years is fairly uncommon, thanks to the balancing and ballooning techniques common to hypervisors. Running out of CPU isn’t that common either, because VM density in virtualized environments is planned around core count up front. If I know I’ll have 60 VMs on a box for 3 years and maintain that density, in the overwhelming majority of cases I’ll be fine. But what about storage?

Storage is the ‘milk’ of the datacenter: something we buy in large quantities that never lasts as long as we expect, and the one thing we can count on to keep growing after we turn off the lights at night. Storage is like death and taxes… it will always grow and loom as an impending problem. “I don’t need this anymore, so I think I’ll just delete it,” said no one ever. How do we combat that sprawl with hyper-converged infrastructure?

The good news is that we are living in the age of software defined storage (SDS). SDS lets us do two very special things. First, when purchasing hyper-converged compute nodes, storage is included in the box along with the CPU, memory, cheese, thyme, and anything else we need, so from day one we can enjoy a delicious experience. Second, SDS lets me take the leftover bits from other meals and aggregate them, or (switching metaphors) use them to cover storage needs on other hosts. It turns storage into leftovers that NEVER go bad in the fridge.

Software defined storage does something else unique to the world of hyper-converged infrastructure. It eliminates traditional SANs with all of their hardware, rack space, power drops, fibre channel switches, monitoring software, maintenance, and everything else that should have been left at the store. Inside your shiny HCI box is just the stuff you need… and you’re ready to start cooking with apps and virtualization. But along comes the problem of storage sprawl and the normal growth in capacity usage over the life of a well-utilized HCI appliance. At day one there is likely to be plenty of unused capacity, but by day 1,095 you may have run out. This is where software defined storage starts generating real value, helped along by the fact that SSDs keep getting denser and cheaper.

With products like ScaleIO from Dell EMC and vSAN from VMware, you can take that unused capacity and share it from day one. With vSAN you can share it with other virtualization hosts within the cluster. In the case of ScaleIO, you can share it across cluster boundaries or with non-VMware hosts. Now we’re talking value, because a fresh HCI box isn’t just helping the VMs sitting on that shiny new appliance – it is a workhorse to enable efficiency and performance across the infrastructure by leveraging its unused storage. Pooling creates the value, and the result is a scalable HCI and SDS infrastructure that is nearly impossible to outgrow.
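To put some rough numbers behind the pooling idea, here is a minimal sketch in plain Python. The node names and capacity figures are made up for illustration, and this is not the ScaleIO or vSAN API; it simply shows how the leftover capacity on each HCI node rolls up into one shared pool that any host can draw from.

# Illustrative only: hypothetical node names and capacities, not a vendor API.
# Shows how per-node leftover capacity becomes one shared pool.
from dataclasses import dataclass

@dataclass
class HciNode:
    name: str
    raw_tb: float   # usable storage in the node (TB)
    used_tb: float  # capacity consumed by the node's own VMs (TB)

    @property
    def free_tb(self) -> float:
        return self.raw_tb - self.used_tb

cluster = [
    HciNode("hci-01", raw_tb=20.0, used_tb=6.0),
    HciNode("hci-02", raw_tb=20.0, used_tb=18.5),  # nearly full on its own
    HciNode("hci-03", raw_tb=20.0, used_tb=4.0),
]

pool_free_tb = sum(node.free_tb for node in cluster)
print(f"Pooled free capacity: {pool_free_tb:.1f} TB")

# hci-02 only has 1.5 TB free locally, but a 5 TB request still fits
# because the software defined layer serves it from the shared pool.
request_tb = 5.0
print("5 TB request fits in the pool:", request_tb <= pool_free_tb)

That aggregation step is the whole point of the leftovers metaphor: capacity stranded on one host stops being stranded the moment it joins the pool.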

Practically speaking, when you do run out of capacity in HCI, software defined storage steps in to manage the gap. If you’re in a scale-out compute environment serviced by HCI, you are likely purchasing new boxes (with all of that day-one extra capacity) from time to time as legacy servers are refreshed. Software defined storage lets you balance storage performance and capacity across all of your HCI assets, so you are never really overbuying… or underbuying. It makes your capacity planning much more consistent and keeps the TCO to a minimum.
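Sticking with the same made-up numbers, a quick back-of-the-envelope sketch shows how that pool changes the capacity-planning math when a routine server refresh drops a new node, with all of its day-one free capacity, straight into the pool. The growth rate and node size below are assumptions for illustration, not vendor sizing guidance.

# Back-of-the-envelope capacity planning with assumed numbers.
def months_until_full(pool_tb: float, used_tb: float, growth_tb_per_month: float) -> float:
    """Months of runway left at a steady monthly consumption rate."""
    return (pool_tb - used_tb) / growth_tb_per_month

pool_tb = 60.0   # three nodes x 20 TB usable, as in the earlier sketch
used_tb = 28.5   # capacity already consumed across the cluster
growth = 1.5     # TB consumed per month, an assumed steady growth rate

print(f"Runway today: {months_until_full(pool_tb, used_tb, growth):.0f} months")

# A routine server refresh adds a fourth 20 TB node; its day-one free
# capacity joins the pool with no separate array purchase.
pool_tb += 20.0
print(f"Runway after refresh: {months_until_full(pool_tb, used_tb, growth):.0f} months")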

Even if you aren’t purchasing new HCI boxes on a regular basis, you can still add storage from other supported hosts or even dedicated storage nodes. The new capacity simply joins the pool, which is far easier than managing the testing, procurement, installation, and configuration of a physical hardware array. This is storage made easy, and it takes much of the risk out of moving to hyper-converged.

If you’re looking for an HCI solution, don’t settle for just internal storage. Check out the real upgrade that ScaleIO from Dell EMC presents. It allows you to manage, utilize, and extend the life of every HCI box in the infrastructure. Don’t take my word for it. Download a free demo and see how many IOPS you can service from your existing hardware. Comments are moderated on this site, but if you comment with the IOPS numbers you were able to produce, I will approve the posts and send a free box dinner to the person who generates the most heroic numbers. Who’s hungry?

If you want a closer look at how ScaleIO changes datacenter TCO, take a look at the following video. It is long… and well worth your time if you’re looking to improve performance, reduce risk, and lower the costs of your storage infrastructure.
