Where Technology Meets Business

This whole thing started two years ago, when a friend told me to try a service that delivered all of the ingredients to make a great home-cooked meal. Seemed like a great idea; food arrives with everything you need to cook a meal from start to finish. All you have to do is chop veggies and throw the contents of the box into a pan. I have so many family members to cook for that I always end up overbuying some ingredient in an effort to feed the masses. Whether it’s an herb, an interesting cheese, or some other fringe item – there’s always something left on the kitchen counter after the rest has been consumed.

Several weeks ago my wife convinced me to try one of these delivery services. At first I was skeptical that it would work for us. With our busy schedules I was unsure if we would both be at home to enjoy it. We signed up, and I’m not looking back. It is so easy… we can have a gourmet meal in a half hour and avoid the two hours it would have taken researching recipes, going to the store, collecting ingredients, and finally cleaning up the leftover ingredients produced by recipes that call for a sprig of thyme despite the fact that you can’t purchase thyme by the sprig. I guess thyme waits for no man. Sorry, I’m a dad and that is a very bad dad joke.

What does this have to do with hyper-converged infrastructure? Simply put, we have been shopping at the IT grocery store for way too long, purchasing memory, cores, and storage independently. The natural result of our sizing exercises, coupled with applications’ ever-changing requirements, has left us either with a pound of cheese on the counter or using our google-fu to find a substitute for molasses.

Hyper-converged infrastructure lets me purchase exactly what I need today without worrying too much about what may be left over. Running out of memory in the course of three years is fairly uncommon thanks to the balancing and ballooning techniques common to hypervisors. Running out of CPU isn’t that common either, because VM density in virtualized environments is largely a function of core count, which is known at purchase time. If I know I’ll have 60 VMs on a box for three years and maintain that density, in the overwhelming majority of cases I’ll be fine. But what about storage?

Storage is the ‘milk’ of the datacenter. We buy it in large quantities, and it never lasts as long as we expect – the one thing that we can count on to grow in the datacenter when we turn off the lights at night. Storage is like death and taxes… it will always grow and loom as an impending problem. “I don’t need this anymore, so I think I’ll just delete it,” said no one ever. How do we combat that sprawl with hyper-converged infrastructure?

The good news is that we are living in the age of software defined storage (SDS). SDS lets us do two very special things. First, when purchasing hyper-converged compute nodes, storage is included in the box along with the CPU, memory, cheese, thyme, and anything else we need, so from day one we can enjoy a delicious experience. Second, SDS lets me take the leftover bits from other meals and aggregate them – or (switching the metaphor) use them to cover storage needs on other hosts. It turns storage into leftovers that NEVER go bad in the fridge.

Software defined storage does something else unique to the world of hyper-converged infrastructure. It eliminates traditional SANs with all of their hardware, rack space, power drops, fibre channel switches, monitoring software, maintenance, and everything else that should have been left at the store. Inside your shiny HCI box is just the stuff you need… and you’re ready to start cooking with apps and virtualization. But… along comes the problem of storage sprawl and the normal growth in capacity usage over the life of a well-utilized HCI appliance. At day one there is likely to be plenty of unused capacity, but at day 1,095 you may have run out. This is where software defined storage starts generating some real value, helped by the fact that SSD density keeps increasing while prices keep falling.

With products like ScaleIO from Dell EMC and vSAN from VMware, you can take that unused capacity and share it from day one. With vSAN you can share it with other virtualization hosts within the cluster. In the case of ScaleIO, you can share it across cluster boundaries or with non-VMware hosts. Now we’re talking value, because a fresh HCI box isn’t just helping the VMs sitting on that shiny new appliance – it is a workhorse to enable efficiency and performance across the infrastructure by leveraging its unused storage. Pooling creates the value, and the result is a scalable HCI and SDS infrastructure that is nearly impossible to outgrow.

Practically speaking, when you run out of capacity in HCI, software defined storage steps in to manage the gap. If you’re in a scale-out compute environment serviced by HCI, you are likely purchasing new boxes (with all of that day-one extra capacity) from time to time as legacy servers are refreshed. Software defined storage lets you balance storage performance and capacity across all of your HCI assets, so you are never really overbuying… or underbuying. It makes your capacity planning much more consistent, and keeps the TCO to a minimum.

Even if you aren’t purchasing new HCI boxes on a regular basis, you can still add storage from other supported hosts or even dedicated storage nodes. The new capacity simply joins the pool, which is much easier than managing the testing, procurement, installation, and configuration of a physical hardware array. This is storage made easy, and it removes much of the risk of moving to hyper-converged.

If you’re looking for an HCI solution, don’t settle for just internal storage. Check out the real upgrade that ScaleIO from Dell EMC presents. It allows you to manage, utilize, and extend the life of every HCI box in the infrastructure. Don’t take my word for it. Download a free demo and see how many IOPS you can service from your existing hardware. Comments are moderated on this site, but if you comment with the IOPS numbers you were able to produce, I will approve the posts and send a free box dinner to the person who generates the most heroic numbers. Who’s hungry?

If you want a closer look at how ScaleIO changes datacenter TCO, watch the following video.  It is long… and well worth your time if you’re looking to improve performance, reduce risk, and lower the costs of your storage infrastructure.

Agility has found its way into the IT sales lexicon, and is used to describe everything from unattended installs to flexible purchases.  I’ve started tuning it out unless there are practical examples provided, so this article will focus on usable, quantifiable agility.

What is agility?  One source defines it as “the power of moving quickly and easily.”  How does software definition allow you to move quickly and easily through your day?

First, let’s look at server virtualization. One of the advantages VMware’s software defined infrastructure provided was the ability to make changes.  A few examples:

  • Give a workload more CPU, RAM, or NICs
  • Easily improve the availability and fault tolerance capability
  • Buy a completely different vendor’s hardware without changing the workload
  • Clone a VM as a backup before rolling out a patch

The list could go on and on.  Rather than an intangible buzzword, agility became real and practical, saving admins and IT decision makers time and sparing them complexity. Software defined storage like Dell EMC’s ScaleIO does the same thing.  Think of all of the decisions storage buyers have weighed in the last several years when considering physical SANs:

  • Drive rotational speed?
  • Flash or Spinning media?
  • SLC, MLC, TLC, Consumer or Enterprise Grade SSD?
  • Scale Out or Scale Up?
  • Large vs. Small Scaling Increments
  • Disruptive vs. Non-Disruptive Controller Upgrades
  • Migration Process and Impact
  • Synchronous or Asynchronous Replication?
  • Active/Passive or Active/Active Controllers?
  • FC Port Speed
  • And now… NVMe Capable?

With software defined storage, none of these decisions would have mattered.  If you’re comfortable with consumer-grade SSDs, throw them into the next server and they become part of the pool.  Need more capacity?  Add a few more nodes, or locally attached storage.  Want NVMe?  Get it today and throw it into the pool without waiting on “NVMe Ready” arrays.

For each of the bullet points I listed, I can name at least half a dozen customers who spent time in POCs testing different physical storage arrays head to head… sometimes for six months to a year before reaching a decision.  Software defined storage would have eliminated those POCs and the hard choices required at the end.  More than that, the storage infrastructure would have been flexible and future-proof, just like the flexibility we already enjoy through server virtualization.  This is real agility – not a buzzword but practical benefits that change how you can operate and do business.

In closing, the wave is here.  Software defined storage really is everything you like about VMware, but for storage. Lots of options exist, including ScaleIO, vSAN, and Elastic Cloud Storage. Maybe you aren’t quite ready to put everything on software defined storage.  I won’t argue with that, but I encourage you to examine the adoption curve of server virtualization, then start today and see the benefits with the lower-end workloads that are probably clogging up your physical SANs.

Try it, and you’ll likely be as hooked as I was.  Find me at Dell EMC World next year.  If I get a “ScaleIO changed my life” and a hug – I’ll hug you back.  Software definition changed my life once, and it is doing it again.

Missed the first in the series? Click here.

Pooling is probably the oldest concept applied to virtualization: everyone knows that buying in bulk yields better prices.  In the retail space we’ve seen companies like Costco and Amazon emerge, and their prices are great because they pool purchases to get a lower price and pass the savings on to you.  It seems like a very simple concept, but it is part of the fundamental magic of software definition.

With VMware we could pool the resources of multiple servers without resorting to complex hardware clustering configurations.  Compute became one bucket instead of lots of small buckets.  It became easier to buy, scale, modify, and manage.  Sure, a VMware host was more expensive due to core counts and memory, but the 95% of a server that had historically been underutilized could be repurposed from ‘boat anchor’ to a beast of pooled performance.  Pooling these underutilized resources didn’t just make it cost comparable… it became far cheaper.  If a VM needed even more performance, you could add virtual RAM, storage, or CPU.  Easy.  This was because the resources were pooled.

Also, infrastructure planning became far easier, since we were now sizing for a pool rather than every individual workload. Previously a request for 10 additional hosts was an exercise in sizing, procurement, waiting, provisioning, etc. With server virtualization there was no more measuring trends in capacity and performance per workload. We had just one thing to look at – pool size and usage. A group that needed 10 new hosts could have them in a few minutes, not a few weeks. That’s the power of the pool!

Pooling also gave us mind-blowing performance and consistency.  When VMware released Distributed Resource Scheduler, VMs on busy hosts could automatically balance to less busy nodes in the cluster.  Performance and consistency were separated from physical hardware and became a function of pooling.

Software Defined Storage like Dell EMC’s ScaleIO delivers the same value.

Think of all of the storage distributed across your server infrastructure.  What is it doing?  Probably hosting a boot volume and not much else.  Let’s say, for example, that you purchase a Dell R730 with 8 SSDs at 480 GB each.  That’s 3.8 TB of raw and largely untapped capacity.  What if you have 10 of them?  Now we have 38 TB of raw pooled capacity.  And 100 hosts?  At this point we’re sitting on top of a latent storage workhorse of roughly 380 TB.  You probably see where I’m going with this. Compare what the same 380 TB of raw capacity would cost in an all-flash array.  More than the server-attached SSD list price of $399K for 100 fully populated Dell R730s?  Probably.  Keep in mind that the software comes at a price, but will it be less than a 380 TB all-flash SAN cluster?  Most likely – and with fewer management points and 100 storage controllers instead of 2-16.
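To make that back-of-the-napkin math explicit, here is a minimal sketch of the pooled raw capacity arithmetic using the example figures above (8 x 480 GB SSDs per fully populated R730). Usable capacity will of course be lower once the SDS layer’s protection overhead is applied, and that overhead depends on how you configure it:

```python
# Back-of-the-napkin pooled raw capacity, using the example figures above.
SSD_GB = 480        # capacity per SSD in the example
SSDS_PER_HOST = 8   # fully populated R730 in the example

def pooled_raw_tb(hosts: int) -> float:
    """Raw (pre-protection) capacity pooled across `hosts` servers, in TB."""
    return hosts * SSDS_PER_HOST * SSD_GB / 1000

for hosts in (1, 10, 100):
    print(f"{hosts:>3} host(s) -> {pooled_raw_tb(hosts):,.1f} TB raw")
# 1 host -> 3.8 TB, 10 hosts -> 38.4 TB, 100 hosts -> 384.0 TB (roughly 380 TB)
```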

Pooling is part of the magic to managing costs.  You don’t need enormous quantities of locally attached storage.  Its distributed nature means that when pooled you have not only a much larger shared resource, but also the ability to manage performance, availability, and consistency across a larger portion of your infrastructure.  A component outage in a traditional SAN creates a performance or availability impact.  A single node outage in a software defined storage pool is a much smaller issue, and the bulk of the infrastructure is insulated from the impact.  More servers are working together to rebalance and recover, automagically.

Aggregating performance, capacity, and resilience creates an antifragile infrastructure. For more on that, check out Nassim Taleb’s fantastic book. The net of it is that antifragility is not about tolerating faults – which is where much of physical SAN development, aside from media, has focused.  Antifragility is about creating an infrastructure that performs well, but performs better than traditional methods when the bad stuff happens.  It loves and thrives on chaos. Bad stuff may be volume-based saturation, hardware outages, budgetary restrictions, hardware upgrades, etc.  We all know that the clock is ticking on our next “bad stuff happened” event.  Software defined storage distributed across compute nodes is a way of making that an alert message instead of an outage.

What about everybody’s favorite feature – Agility?  That is the subject of the next article, and the focus is on practical examples instead of buzzwords.

Missed the first article?  Click Here

Consolidation was the number one reason most companies virtualized.  Reducing the number of managed assets reduced costs significantly.  The operational overhead of running a datacenter with 200K servers was insane.

Before virtualization, most enterprise servers ran at 5% utilization as a byproduct of traditional sizing: what do I need today, what will I need in three years when I’m due for a refresh, and then double all of that to avoid customer satisfaction problems.  It was a recipe for datacenter sprawl, wasted floor tiles, lots of support tickets, and wasted power and cooling.

Together, we had to change our mindset about consolidation.  It wasn’t about cramming more databases onto a single server. Consolidation was about breaking away from legacy relationships between workloads and physical hardware and starting to really consolidate with virtual assets.  Arguments about putting “all of our eggs in one basket” became irrelevant, and building a single N+1 virtual (and consolidated) architecture was far easier than implementing 100 N+1 physical versions.

Most customers began with low-risk infrastructure hosts, and soon enterprises were virtualizing the majority of workloads.  But what about features? Remember, all of this happened before things like VMware DRS, SRM, replication, etc.  Features didn’t matter, because consolidation was the real value. Even if it only worked for 50% of my workloads because features were missing, so what? I was still able to significantly reduce my costs in money and time.

What about Software Defined Storage?  I can’t tell you how many customers I’ve spoken with who buy servers with locally attached storage and then build out a SAN.  Or lots of SANs.  “Here’s the standard array config for my infrastructure workloads.  I need 10 arrays.”  It makes sense to have an enterprise-class storage platform with the benefits of LUNs, replication, and a single point of management.  But what if you could add those benefits incrementally to the servers themselves with the local storage you already purchased and some smart software?  Suddenly you can stop buying 10 arrays to deliver that value, and go from many management points to just a few. Finally – meaningful storage consolidation for the masses!

The customer I mentioned earlier in this series still put some workloads on bare metal, but the smart thing they did was adopt a ‘virtualization first’ policy and address everything else as an exception.  What if instead of consolidating 10 small arrays into 3 big arrays, you could turn them into a single physical array (or even zero physical arrays) and deliver the rest of the storage requirements with some software?  You can.  Check out ScaleIO.  That is what it does, and what it has been doing for a few years now.

A ‘software defined storage first’ policy lowers overall storage spend and frees time and budget to address, at a much lower total cost, the outliers that require advanced hardware SAN features (synchronous replication, etc.).

While consolidating arrays sounds great, don’t you need lots of servers for software defined storage like ScaleIO or vSAN to work well?  That’s where pooling comes in, and it is the next subject in the series.

Missed the first article?  Click Here

Abstraction is defined as the process of considering something independently of its associations, attributes, or concrete accompaniments.  To put it another way, it is the process of stepping away from the trees so you can see the forest.

My introduction to virtualization came when working for a middleware startup.  To demonstrate our product I traveled with three laptops.  I was the one dragging three laptops and a suitcase through the airport, arriving home with bruises on my shoulders.  If I lived in an infomercial I would have been the guy throwing down his gear and screaming “there has got to be a better way!”

There was.  I walked into the office of one of our senior architects and he showed me VMware Workstation 1.  The heavens opened, I saw light, heard otherworldly singing, then ran to my office to buy it online.  Soon I was traveling with 1 laptop, 3 VMs, and a firm understanding of the value of abstraction.  I didn’t need hardware, I needed software to better utilize something I already owned.  I was determined to work for VMware, and not long afterwards I found myself selling the benefits of abstraction to enterprise customers.

VMware abstracted – or freed – workloads from the traditional concrete accompaniments of hardware.  This created a platform in which the bare metal was just plumbing.  Hardware vendor no longer mattered.  Admins could scale infrastructure in a few minutes by adding a VM rather than waiting on the long procurement process and custom configuration/sizing required to acquire a new bare metal server.  For the first time in the x86 world, it just didn’t matter where the workload was running.

With software defined storage like Dell EMC’s ScaleIO, you can build a SAN in real time with software using the local storage attached to each server.  This is just like the way VMware allowed us to create a new virtual datacenter with some really smart software.  Mixed server vendor environment?  No problem.  Need more storage?  Add a host to the pool.  Abstraction means that the workload doesn’t care whether the SAN is physical or virtual.  You can ditch the physical SAN that causes sleepless nights worrying about port saturation, controller performance, Fibre Channel upgrades, etc.  The software provides the abstraction layer, giving you more control and more value from the storage attached to each host in your infrastructure.  SAN vendor, controller architecture, media type, and fibre channel performance become irrelevant.

Some may immediately notice that traditional SANs provide abstraction.  True, for a workload’s direct relationship to media there is abstraction.  LUNs are fantastic.  But physical controller architectures still matter – along with media type, port speeds, and supported host interfaces like SAS, SATA, and NVMe.  There are many fixed components of a SAN, and decoupling the storage from the underlying hardware in favor of a software defined SAN is a huge step toward agility – which we will discuss later in the series.

For now, let’s talk about where the quick savings come from in software defined storage – consolidation.

Missed the first article?  Click Here

We’ve heard the hype about software defined storage for years.  Software Defined Storage (SDS) has historically been about provisioning and management, but new products like Dell EMC’s ScaleIO, Elastic Cloud Storage, and VMware’s vSAN have taken it to the next level – using appliances, application servers, and direct attached storage to create a scalable virtual storage network that can be built and managed dynamically by administrators, rather than going through the previous process of installing hardware storage arrays.  Many IT professionals (aka pragmatists) have adopted a “wait and see” attitude.  The jury is back, and it is a real thing.  Let me correct that; it is a current thing, and many are missing a great ride on the biggest IT wave since VMware virtualized x86 servers.

Back in the early days at VMware, I had lots of interesting conversations with customers large and small.  Looking back on those conversations is a case study in the adoption curve of revolutionary technologies.  At times it was unbelievably comical.  One week I heard a story about a smaller customer grabbing one of our staffers at a trade show, hugging him, and telling him “VMware changed my life!”  The next week I was in front of a large enterprise customer who stated that virtualization wasn’t ready for prime time and they couldn’t abandon the one workload per bare metal server policy.  “We can’t put all of our eggs in one basket.”  It was a severe case of not seeing the forest for the trees.  A year later they were the biggest VMware customer on the planet, with over 200,000 virtual machines.  What happened?  Did the technology mature?

No, but their understanding of the impact of software definition matured. They stopped looking at the thing we have all been trained to look at, namely feature parity.  IT decisions can’t be made effectively by comparing two lists of checkboxes.  We have to be strategic and look at where the tech can take us, if there is a process to get there, and a vision to go even further.  Once the customer understood the vision, they spent the next 18 months reducing compute costs by 90%, promoting everyone who was associated with the project, and winning awards.  The only sad part is that they could have done that a year earlier and been even further ahead of their industry.  They wasted 90% of their server budget, and I missed out on a year of hugs and “VMware changed my life” affirmations.

Consolidation, Abstraction, Pooling, and Agility are what launched the virtualization wave, and that is exactly what software defined storage does. This series will examine those four things that software definition enabled and that were previously missing from traditional infrastructure. With those four elements at work, no amount of hardware-based features could compete in their target space – the ghastly volumes of servers that sprawled to support necessary infrastructure workloads.  The same thing is happening today with storage, where enterprises have traditionally coped by sprawling across multiple hardware arrays. Software definition can solve, or at least alleviate, much of that pain. Let’s compare the value of software defined infrastructure to software defined storage, and see if it makes sense to jump in now.  Spoiler alert – software defined storage is everything you love about server virtualization… but for storage.

First up:  Abstraction

I’ve spent tons of time looking into this product and formed my own opinions.  From a vendor perspective I’m expected to be biased, and I am.  I wouldn’t have joined the company if I didn’t believe in the strategy.

To try to “un-bias” my opinion I started calling customers.  Matt Gustafson is an incredibly seasoned storage engineer with loads of experience on EMC and non-EMC platforms, so when he told me about his experience setting up Unity I was impressed.  More importantly, the product impressed Matt.  Here is our discussion in its entirety.

Q:  What was the impression of the setup?

A:  Obviously – very simple to set up.  The automated tool that runs on your laptop was genius.  The whole thing took less than 10 minutes.  It takes longer to configure a physical server’s RAID controller!

Q:  How was the physical installation?

A:  Super easy.  Oh man, I took my time, followed the detailed installation guides, made sure cable routing was perfect.  It took about an hour.  Just as easy as racking a server.  The packaging was really nice; it looked like something you’d buy at Best Buy.

Q:  How easy was it?

A:  Well, there are different levels of suck.  Some platforms you need a team of customer engineers.  Some platforms you need at least one if not two.  Some are difficult, but they’re cool so you deal with the pain.  Some are high stress.  This was easy.

Q:  How was the physical setup?

A:  As easy as it comes.  Drop the controller in the rail, add fiber and management ports, add the DAE and you’re done.

Q:  Operationally, how was the process?

A:  Much easier than every EMC platform I’ve used.  Three clicks, no math, no balancing.  This will change the way a lot of people look at flash.  It will totally change our management’s view of flash.  It is no longer the cutting edge people who are interested.  I see it like the smartphone in 2007.  Once people saw the clear advantage there was mass adoption.  Unity will lead the charge – it is so good, so cheap.  Pile them up for general purpose workloads and it will be easy for people without storage teams to manage it.  Wintel and Virtualization teams can manage this with ease.  Keep the fancy stuff for workloads that need it!

Q:  What about automation?

A:  As automation gets better, more and more of what took up a storage admin’s time will be done automatically.  This isn’t like a dual controller array that was hard to manage if you weren’t a storage guy.  If you don’t want people sitting around doing boring storage management stuff, then this is the platform for you.

Q:  What impressed you about the platform’s software stack?

A:  I love the job task execution scheduler.  I can see what it is doing in the background.  That is new compared to anything I’ve seen in the midrange.

The built-in reporting is amazing.  It had the VNX monitoring and reporting capabilities the week it came out.  This is BUILT IN to the product.  I can’t find any missing functionality that I would actually miss.  I’d rather use the built-in performance analysis tools than anything on the previous non-flash VMAX systems I’ve used.

The VMware integration is really good.  I’ve never seen outbound integration on the array side as good as this.  You can spin up datastores, do rescans… you get ViPR-like integration built in, and it’s better than a vCenter plugin.  The VMware admin doesn’t need to know what they should or shouldn’t do before starting a task.

The net of my evaluation is:  this doesn’t feel like a first-generation product to me.

That concluded our interview.  Matt, thanks so much for providing feedback and allowing me to share our conversation with a slightly larger audience.

After using ViPR in the lab and running it through a series of real-world scenarios, I can honestly say that this is as close to the ‘One Ring’ as I’ve seen come from the EMC portfolio.  If you’re looking for a way to fully leverage the EMC portfolio – this is it.

For those not yet indoctrinated, EMC’s ViPR (and the open source CoprHD) is a management tool that layers above multi-vendor storage, networking, and compute, and allows you to automate standard tasks like provisioning of storage and compute, expansion of storage, VMware host and cluster provisioning, and lots of other operations.  It is an easy, in-the-can solution for service catalog and infrastructure tasks.

From an admin’s perspective it simplifies things greatly.  Just in my lab I’ve had to expand volumes for SQL benchmarks and use cases, provision volumes, hosts, etc.  In my multivendor lab I can use it to provision volumes whether they are EMC, NetApp, Hitachi, etc.  I can provision bare metal hosts and applications dynamically with a few clicks.  Need to add a Z drive to a Windows host?  I fill out a simple order form, and I have no need to log into the storage infrastructure, zone anything, or even connect to Windows to scan for the storage and format the drive.  Everything happens automagically.  Not to belabor a marketing phrase, but this is the power of the portfolio.

Lots of companies are offering flash solutions.  Who is offering a complete out-of-the-box service catalog that enables rich provisioning services in a multi-tenant or multi-line-of-business environment?  EMC.  If you’re looking for an all-flash array, my advice is to consider not where it will take you but where it can take you.  All-flash arrays can make things faster, but the technology is mature enough now from every vendor that we need to ask, “What comes next?  How do I move my strategic vision forward?”

The answer, at least to me, is that if it doesn’t get me closer to my strategic initiatives — lower OPEX, improved organizational efficiency, users empowered to easily consume IT resources rather than shopping in the shadow IT world — then all-flash arrays are not a significant benefit.  With technologies such as ViPR and CoprHD you can use flash as an enabler for your strategic vision, not just a solution for a very tactical performance problem.  I can’t tell you how many customers have chosen EMC all-flash platforms solely for the ViPR functionality and vision.  Yes, third-party storage is supported.  But a company with the vision to realize that the average customer is multi-vendor, and to build tools that help them move their strategic vision forward even when they aren’t a completely EMC shop, is clearly the thought leader in the space.  Kudos to the ViPR team on putting together a comprehensive multi-vendor solution for EMC customers of every size.

I’ve attached a short demo.  ViPR is available from EMC in high availability configurations with support, or as open source via GitHub.

Sometimes our job in IT makes us feel like Slim Pickens in Dr. Strangelove – nothing seems to be working and suddenly we are sitting on top of a bomb.  But as Peter Sellers said in the movie, “there’s no point in you getting hysterical at a moment like this!”  On that topic… one of the biggest operational questions I get from customers is: “With data reduction enabled on my array, how can I track and predict physical capacity usage?”  Yes, it is a potential bomb operationally, but there is a tool that can save us from feeling the need to become hysterical.

ViPR SRM is that tool.  It is a management overlay VM that allows me to run deep reports and analytics against everything from storage arrays to host performance. Clearly, having a single pane of glass to monitor this type of data is valuable, but the real value to me in helping customers operationalize flash arrays with data reduction is the predictive analysis.

I have attached a small video so you can see what I’m talking about.

With a few clicks I can see how much storage is provisioned, what the capacity usage trend is, and even dig into the big question – when am I going to need to expand my array?

When I was working in IT this was a big problem. I felt like I had the sword of Damocles hanging over my head with every storage volume I provisioned. At some point the well would run dry and I would have to buy something new… and wait for the installation. ViPR SRM would have been a great tool for me in those days, letting me trigger purchases at the right time so that I never ran out of storage. No more guesswork, just clinical analysis of how my infrastructure is trending, when I’ll run out of space, and when I should pull the trigger on a purchase.
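ViPR SRM does the trending for you, but the underlying idea is simple to illustrate. Here is a minimal sketch – not ViPR SRM’s actual algorithm, and the usage numbers are hypothetical – of fitting a linear trend to daily used-capacity samples and projecting when a pool will hit its physical limit:

```python
# Minimal illustration of capacity trend forecasting (not ViPR SRM's actual
# algorithm): fit a linear trend to daily used-capacity samples and project
# how many days remain until the pool reaches its physical limit.
import numpy as np

def days_until_full(used_tb_samples, capacity_tb):
    """Estimate days until capacity_tb is reached from daily usage samples."""
    days = np.arange(len(used_tb_samples))
    slope, intercept = np.polyfit(days, used_tb_samples, 1)  # TB/day, offset
    if slope <= 0:
        return None  # flat or shrinking usage: no projected exhaustion
    day_full = (capacity_tb - intercept) / slope
    return max(0.0, day_full - days[-1])

# Hypothetical samples: 30 days of usage on a 100 TB pool growing ~0.4 TB/day.
usage = 52 + 0.4 * np.arange(30) + np.random.normal(0, 0.3, 30)
print(f"Projected days until full: {days_until_full(usage, 100):.0f}")
```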

But there is so much more. With ViPR SRM I can predict which filesystems are approaching 90% capacity. I can see hosts and storage controllers that are nearing their CPU and memory thresholds. I can foresee bottlenecks with accuracy.

Here’s the best part for a flash customer. Through a single pane of glass I can see volumes that have zero IO, regardless of platform. The folks who have been in highly virtualized environments for a long time will recognize the value of this. Orphaned VMs are a problem. Orphaned storage is far more expensive. Being able to reclaim this capacity will more than pay for the implementation costs of ViPR SRM.
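ViPR SRM surfaces this directly, but as a generic illustration of the idea – the CSV export format below is hypothetical, not a ViPR SRM interface – finding reclaim candidates is really just a filter over per-volume IO statistics:

```python
# Generic illustration of finding reclaim candidates: volumes with zero IO
# over the reporting window. The CSV columns here are hypothetical, not a
# ViPR SRM export schema.
import csv

def zero_io_volumes(report_path):
    """Return (volume, provisioned_gb) pairs with no IO in the report window."""
    candidates = []
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            if float(row["total_iops"]) == 0:
                candidates.append((row["volume"], float(row["provisioned_gb"])))
    # Largest provisioned capacity first: the biggest reclaim wins on top.
    return sorted(candidates, key=lambda v: v[1], reverse=True)

for vol, gb in zero_io_volumes("volume_io_report.csv"):
    print(f"{vol}: {gb:,.0f} GB provisioned, no IO this window")
```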

Not convinced? Try it. It is the best way I’ve seen for admins to manage the most expensive components of their infrastructure without having to wait on an alert to tip them off to an issue, and it allows them to proactively recover expensive flash storage from unused systems and VMs.

Flash arrays are fast. I’m sure you’ve heard a flash vendor say the following: “This is disruptive!” While this is true in a few cases, there’s a more important element often missing from the conversation: Is the disruption you’re selling positive for my business?

Often the claim is made that a product will disrupt the way we use an existing technology. Of course! Anything other than an upgrade of an existing asset brings a level of disruption. If the business doesn’t receive a holistic benefit, the technology is really just an upgrade or at worst intrusive. I’ve experienced “disruptions” that were positive, negative and some that were positive – eventually. Sadly, manufacturers of new products marketed as “disruptive” rarely provide a roadmap that clearly shows how a consumer of that technology can truly innovate and transform their entire business in a positive way… the value is often speculative and eventual. Potentially transformative technologies must bring along with them improved processes, integrations and generally make our lives easier. There is a clear industry learning curve with anything disruptive that can delay business-wide improvements. Why? Because there isn’t a complete solution, only a point solution.

Early in my work with VMware I was assigned to assist global outsourcers with implementing server virtualization. This was a classic example of a disruptive technology. There were indisputable benefits that helped customers reduce cost and streamline operations, in the same way that flash arrays are indisputably faster and easier to manage. As I worked with these companies who went from a few hundred VMs to a few hundred thousand VMs over the course of just 18-24 months there was a lesson that became obvious. The disruptive technology was good by itself, but to fully realize the value there were issues that had to be addressed: integration, workflow tools, management and monitoring interfaces, interfaces to existing infrastructure components, etc. I spent a good deal of time working with these customers to submit an endless list of feature requests to support hardware, networking and systems management tools.

Where am I going with this? A new technology can be obviously valuable. It’s easy to look at a “game changer” tech and understand that it will benefit the business. The challenge is to identify how it will fully benefit the business. In the case of server virtualization there were a lot of questions that remained to be answered as the technology matured. The full value to the business was largely speculative, and dependent on large amounts of integrations that did not exist. Today we have flash storage – another technology that is obviously valuable. The challenge that IT decision makers have is how to evaluate the multiple players in that space and understand where the full value comes from rather than relying on promises or speculative value.

IT purchase decisions spring from identifying a need that isn’t satisfied by the current infrastructure. What’s the problem? Disruptive technologies, and their associated speculative value, have a tendency to disrupt the decision process as much as they disrupt IT operations.

For instance, let’s say a CIO’s issue is that a particular database or application is underperforming due to storage latency. Flash is most likely an easy fix. It is disruptive in that it changes the current equations around cost, implementation, management, performance, etc. Many customers in this position scour the market and find a flash array that is good enough to satisfy the needs of the single problem. There is something missing, though. Flash technology also has the potential to transform businesses – but only if it is integrated with the entire ecosystem of technologies in use. We have the opportunity to improve the app and the business with the same decision. Flash storage, properly implemented, can achieve benefits that are difficult or impossible with other traditional or even hybrid platforms.

1. We can leverage metadata management technologies to reduce storage sprawl – an average database has five copies somewhere in the enterprise for development, analytics or business continuity. Virtual copies reduce cost and complexity, and should be a metadata operation rather than consuming expensive flash.
2. Use management integrations to streamline business processes. Put the control of database copying into the hands of the DBAs. Give virtualization admins control of storage provisioning, as data reduction technologies are incredibly effective for VMs. This lets storage admins work on bigger issues and the phone rings less frequently. It can be hard to reach this point politically, but giving control of lower level operations to the IT consumer is a huge cost savings to the storage group and application owners.
3. Make complicated tasks like replicating virtual environments to a business continuity site simple, without additional effort from the storage team or complexity.
4. Leverage metadata and RAM in storage controllers to do the heavy lifting of redundant operations like cloning and copies without impacting production performance. If you’re staging or de-staging metadata to disk or NVRAM, the underlying flash is going to be busy – or at least busier. Performing this operation purely in RAM in the controller lets the storage and application teams perform maintenance operations during business hours without production impact because it doesn’t increase load on the physical flash due to constant staging and de-staging. Quality of life and overtime reductions have a direct impact on the bottom line and can increase usable budget over time.
5. The consistency of performance that the right flash architecture provides enables ideas that we have considered for years but never quite had the tools or infrastructure to accomplish – namely service catalogs. Making IT operations like storage and server provisioning self-service is a fast way to reduce cost, lower risk and improve agility. It is the reason tech shops outsource and a benefit that can easily be owned.

To put it in a nutshell, the evaluation process for flash should be based on where the business is going rather than where an application is going. This is a challenge for most flash vendors in that they develop only an array – yes, an easy-to-manage array with basic integrations to VMware, replication, etc. – but there are no enterprise-level integrations for virtualization, databases, and enterprise applications and workflows. This puts buyers in the position that I found myself in with large consumers of VMware in the early days, namely an endless stream of feature and integration requests that delayed the realization of the full value of that disruptive technology for months or years.

This is exactly why I love working at EMC with the XtremIO all-flash array. The integrations and solutions are already there. There are integrations that enable a service catalog at almost zero additional cost. EMC has solutions that allow businesses to clone, copy and perform maintenance during business hours without impacting production workloads. (Believe me, this is unique. If you evaluate a flash vendor, please run a heavy workload such as the IDC-suggested vdbench kit against the array and monitor how performance changes when you start taking a snapshot every 5-10 minutes.) We have solutions that integrate with enterprise applications like SharePoint, Exchange, SQL and Oracle so that you realize the full value of the disruption immediately. These tools put the power of repetitive and low-level operations into the hands of application administrators rather than a handful of overworked storage admins.

Every customer will always have a pressing need. The challenge technologists face is not solving that need but rather solving it in a way that adds additional value to the business and provides a roadmap to achieving even more value. This makes it easier to get approval from the CFO to make the changes that the business demands. Having a clear roadmap to how flash can be transformative to your business is critical, and that is exactly why EMC stands alone as truly disruptive in the flash market. The integrations and tools are already built and ready to go – your only limit is how quickly you want to implement the tools and generate those savings.

As part of your evaluation process I’d suggest not just looking at XtremIO or an all-flash VMAX. Check out EMC’s AppSync, ViPR Controller, and the Virtual Storage Integrator. One of these products is bundled with XtremIO to reduce costs and allow you to experiment with its transformative capabilities; the other two are available at no cost. These have already helped thousands of customers realize the full benefit of flash technology rather than just promising speculative benefits or reducing the volume of calls coming from a DBA or application owner. Ultimately a disruptive technology should change the business, not just storage. And it can! Broadening the conversation around what can be done with flash rather than what might be done with flash makes for a better buying decision and a more valuable and future-proof infrastructure.
