The enterprise is under increasing pressure to drive cost down and complexity out of the data environment, and for many, that means taking a hard look at storage.
Even before the cloud and colocation services started pulling data across geographically distributed infrastructure, storage at most data centers was an organized mess at best. The typical practice of deploying point solutions for specific workloads has led to an amalgam of formats, file architectures, networking solutions and, lately, media types as simple disk drives give way to flash, RAM and various on-server memory solutions.
The first step in bringing order to this chaos is to conduct a comprehensive storage analysis. As a practical matter, this is a complex proposition, leading all but the most tech-savvy organizations to seek outside help. Even with an experienced team of outside consultants, IT leaders should have a good understanding of what a proper storage analysis entails and how best to interpret the results.
The analysis should start with a thorough examination of existing storage resources and the data loads they support. This should not only cover basic storage and storage networking systems but should also include a fine-grained analysis of capacity levels, resource utilization histories, traffic patterns and a host of other factors.
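The capacity and utilization portion of that inventory can be automated in small ways. As a minimal sketch (using only the Python standard library; the mount points passed in are assumptions about a given environment, not a prescribed list), the following reports per-volume capacity and percent utilization, the raw figures a fuller analysis of trends and traffic patterns would build on:

```python
import shutil


def capacity_report(mount_points):
    """Summarize capacity and utilization for each mount point.

    Returns one dict per mount with total/used gigabytes and percent
    utilization -- a starting point for tracking utilization history.
    """
    report = []
    for mp in mount_points:
        usage = shutil.disk_usage(mp)  # (total, used, free) in bytes
        report.append({
            "mount": mp,
            "total_gb": round(usage.total / 1e9, 1),
            "used_gb": round(usage.used / 1e9, 1),
            "pct_used": round(100 * usage.used / usage.total, 1),
        })
    return report


if __name__ == "__main__":
    # Example: audit the root volume; a real survey would walk
    # every mount the storage team manages.
    for row in capacity_report(["/"]):
        print(row)
```

Run periodically and logged, even a simple report like this turns anecdotal impressions of "we're running low" into a utilization history that can be shown to decision-makers.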
Storage managers should take a look at the tools that are currently providing visibility into performance metrics as well. Is this an integrated management stack or a disparate collection of tools? Do they provide full visibility across the storage environment? Are performance metrics enforced in a universal fashion, or are there different rules for different data silos?
What sort of data is currently under management? Structured and unstructured data require dramatically different storage environments, and even typical application load requirements can vary in terms of volume, latency, availability and the like. This will affect the type and location of storage resources, as well as the use of storage functions such as search, deduplication, compression, replication and backup.
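Deduplication is a good example of why data type matters: repetitive data (VM images, backups) dedupes well, while already-compressed or random-looking data does not. The toy sketch below (an illustration, not any vendor's algorithm) estimates fixed-block dedup savings by hashing 4 KB chunks and counting repeats:

```python
import hashlib


def dedup_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Estimate fixed-block deduplication savings as the fraction of
    chunks that duplicate an earlier chunk (0.0 = no savings)."""
    seen = set()
    chunks = 0
    dupes = 0
    for i in range(0, len(data), chunk_size):
        chunks += 1
        digest = hashlib.sha256(data[i:i + chunk_size]).digest()
        if digest in seen:
            dupes += 1
        else:
            seen.add(digest)
    return dupes / chunks if chunks else 0.0


# Highly repetitive data dedupes almost entirely:
print(dedup_ratio(b"A" * 4096 * 100))  # -> 0.99
```

Running the same estimate against samples of each workload's data is one concrete way to decide where deduplication (and, by similar reasoning, compression) is worth enabling.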
Once there is a complete understanding of storage in its present form, it’s time to start planning changes to bring it more in line with the data loads of the future. In most cases, this will encompass low-latency performance as well as extreme scalability, universal access and rock-solid reliability and availability.
To accomplish these goals, the enterprise needs to take a hard look at the options for both physical and virtual data infrastructure to determine the optimal means of supporting business objectives. These will likely include the cloud, on-premises infrastructure, converged/hyperconverged systems, SAN/NAS architectures and a mix of flash, disk and perhaps even tape-based storage arrays.
It is important to realize that no single architecture provides optimal support for all applications, so the new data environment will have to encompass a great deal of flexibility to achieve the level of performance demanded of a services-based business model. In all likelihood, the enterprise will employ a hierarchical storage management strategy that will dynamically shift data across multiple storage tiers as its value changes due to age, access frequency and integration with other data sets.
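The tiering logic behind such a strategy can be sketched in a few lines. The tier names, thresholds and access-rate cutoff below are illustrative assumptions, not a recommended policy; the point is that placement is driven by age and access frequency rather than by where the data happened to land:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical tiers: name paired with the maximum age of the last
# access allowed on that tier (None = catch-all archive tier).
TIERS = [
    ("hot_flash", timedelta(days=7)),
    ("warm_disk", timedelta(days=90)),
    ("cold_archive", None),
]


@dataclass
class DataSet:
    name: str
    last_access: datetime
    accesses_per_day: float


def assign_tier(ds: DataSet, now: datetime) -> str:
    """Place a data set on a tier by recency of access; frequently
    accessed data stays on the hot tier regardless of age."""
    if ds.accesses_per_day >= 100:  # assumed "hot" cutoff
        return TIERS[0][0]
    age = now - ds.last_access
    for tier, limit in TIERS:
        if limit is None or age <= limit:
            return tier
    return TIERS[-1][0]
```

In practice a hierarchical storage manager re-evaluates placements continuously and migrates data automatically; a policy expressed this explicitly is also one that non-technical stakeholders can review and approve.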
The enterprise storage environment should also reflect the fundamental shift in the way data ecosystems will be provisioned in the future. That means IT will be largely responsible for devising a science-based approach to storage infrastructure that can be easily understood by non-technical decision-makers.
All About the Data
Since data is quickly transitioning from a support mechanism to a key revenue driver in the enterprise, IT executives should recognize that improving the storage environment is about improving the value of data rather than deploying the latest and greatest technology. Updating storage infrastructure should, therefore, focus on producing a customized ecosystem that improves data retention and streamlines management overhead, while making data more available to stakeholders. At the same time, it should stress improved security and lower overall capital and operating costs.
This is a tall order, but with the right plan in place and a clear understanding of where storage is now and where it needs to be in the near future, the enterprise will be well on the way to turning raw data into actionable intelligence.