IT, like many things in life, tends to repeat itself in waves. We have seen this occur with the transition of workloads off mainframe systems and onto different servers for each application. The rise of VMware and virtualization saw these workloads collapse together again on shared hardware. Now we’re seeing evidence of separation as workloads move from on-premises to a variety of cloud platforms and more discrete containers. Applications are also becoming more distributed, allowing the monolithic messes we’ve adopted over the last two decades to fade into the past.
Storage platforms have also seen a similar fluctuation between consolidation and distribution. Virtualization brought on the rapid growth of shared storage platforms to support new high-availability and distributed capabilities. This has been the status quo for the last 10 years, but now we’re seeing a growing trend of organizations purchasing storage for specific projects, avoiding cohabitation with other workloads.
Why is this happening? A few reasons come to mind. In this post I’ll explore several of these trends, their root causes, and how organizations can avoid an operational quagmire.
“I want the workload!” – IT
“You can’t handle the workload!” – App owner
Today, there is a growing view that shared infrastructure can’t effectively deliver everything an application requires. The reason is not always technical (e.g., speed or latency). Concerns also include time to delivery, adjacency to other applications, and, sometimes, simply budget.
On the technical side of this issue is the perception that the workload is not always getting everything that it needs. Performance on a shared infrastructure can be unpredictable or frequently impacted by workloads consuming excessive amounts of resources. The noisy neighbor issue has been discussed on this blog numerous times (here and here), but it’s a valid concern, prompting a desire for infrastructure that’s independently procured and managed away from other workloads. The challenges are seemingly exacerbated by workloads being introduced with known stringent requirements, such as VDI or databases.
There is also a process side to the issue. Depending on another organization to meet its deadlines often leaves a team with little confidence. This is not entirely fair, as each organization is competing with the demands of multiple groups, often on platforms that offer little in the way of automation or scalable management. The belief is that with its own dedicated storage, a team can move at its own pace and not be limited by the timelines of others.
I feel the need … the need for speed
Another trend we see is the collapse of operational silos, and the traditional skill sets they relied on, into smaller, more horizontally focused teams. As managing application platforms becomes simpler, the storage infrastructure underneath them becomes more of a standard, and as the number of products touting super-simplified management grows, so does the number of people involved in the storage-buying decision. SolidFire has embraced this trend in every way we can, from providing a consistent API to our upstream integrations with platforms like VMware, OpenStack, CloudStack, and Docker.
Many teams certainly have the capacity to implement and manage their own storage infrastructure. The reality, however, is that the storage team often remains responsible, to some degree, for managing and maintaining that storage, regardless of where it was procured.
“Help me … help you!”
The net result of these scenarios is that companies find themselves with many islands of storage, often from numerous vendors and with considerable overlap in workload profiles. Politics, delivery pressures, and economics have driven these companies to adopt additional complexity for marginal gain. And let’s not forget the resources left stranded like castaways on these islands: money that sits idle for the sake of deployment speed and a sense of ownership.
It does not have to be this way. SolidFire was designed and built to eliminate islands of storage by delivering shared storage infrastructure while eliminating many of the concerns that lead an organization to adopt multiple platforms.
SolidFire’s multi-tenant framework lets storage administrators deliver storage to each group in a logical and meaningful way.
SolidFire’s Quality of Service (QoS) lets those same administrators guarantee that each tenant, whether a team or a business unit, receives the resources it requires without concern that a noisy neighbor will intrude and impact its workloads.
SolidFire’s scale-out architecture lets administrators grow (or shrink) the cluster as needed without consumer impact or reconfiguration, simplifying procurement and implementation as new projects or workloads are onboarded.
SolidFire’s API-driven Element OS means storage services can be delivered through various automation and management platforms without additional storage administration, supplying a timely and consistent experience for all consumers.
SolidFire’s in-line deduplication and compression allow additional workloads to be added to the cluster without losing efficiency, increasing the value returned from the investment. They also mean there is less opportunity for valuable resources to be “left on the table” and unused.
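To make the QoS and API points above concrete, here is a minimal sketch of what API-driven provisioning with per-volume performance guarantees can look like. It builds a JSON-RPC payload in the shape of the Element API’s CreateVolume call; the volume name, account ID, sizes, and IOPS values are hypothetical placeholders, and the endpoint shown in the comment is illustrative rather than a copy-paste-ready URL.

```python
import json


def build_create_volume_request(name, account_id, size_bytes,
                                min_iops, max_iops, burst_iops):
    """Build a JSON-RPC payload shaped like the Element API's
    CreateVolume method, including per-volume QoS settings.
    All values passed in are illustrative placeholders."""
    return {
        "method": "CreateVolume",
        "params": {
            "name": name,
            "accountID": account_id,      # the tenant that owns the volume
            "totalSize": size_bytes,
            "qos": {                      # guaranteed performance envelope
                "minIOPS": min_iops,      # floor held even with noisy neighbors
                "maxIOPS": max_iops,      # sustained ceiling
                "burstIOPS": burst_iops,  # short-term burst allowance
            },
        },
        "id": 1,
    }


# A hypothetical 1 TB volume for a database tenant with a 1,000 IOPS floor:
payload = build_create_volume_request(
    "db-vol-01", account_id=42, size_bytes=1_000_000_000_000,
    min_iops=1000, max_iops=5000, burst_iops=8000)

# In practice, an orchestration platform (or a few lines of scripting)
# would POST this over HTTPS to the cluster's management endpoint, e.g.:
#   requests.post("https://<cluster-mvip>/json-rpc/<api-version>",
#                 json=payload, auth=(user, password))
print(json.dumps(payload, indent=2))
```

Because every volume carries its own minimum-IOPS floor, the storage team can place multiple tenants on one cluster without any of them inheriting a neighbor’s contention, which is the crux of the consolidation argument above.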
Storage islands are not necessary. The future is a consolidation of application workloads, as recently discussed by IDC Analyst Eric Burgener in this video. The needs of every group in your organization can be met quickly and cost-effectively by leveraging SolidFire systems — without sacrificing management simplicity.
Read more about optimizing your data center in our latest whitepaper, The Value of the Next Generation Data Center.