Enterprise Data Storage
Storage is often an afterthought when it comes to purchasing a new IT system. Users spend extensive time analyzing the purchase of a CPU, network, applications, and database, then simply accept whatever storage comes from the vendor. Yet the successful access and management of your data depends significantly on the storage!
An enterprise storage strategy
With today's new challenges for information management, data storage can no longer be handled as just another computer peripheral. It must be considered a distinct, value-added resource in its own right. Data storage should be treated as a unified, company-wide asset, one that contains, protects, manages, and delivers essential information that creates business value and affects corporate productivity and responsiveness. The benefits will be highest, and the costs lowest, when a coherent data storage strategy is consistently defined and implemented throughout the enterprise.
Enterprise storage: the new approach
How can the enterprise data storage issue be addressed? We can place a faster, larger storage device in the middle of the data center quagmire, but no matter how fast the disk spins, no matter how great its capacity, it is not going to solve the problem.
The issue is not simply getting the data onto a disk in the data center; that was the problem of the past. Today's paramount IT issue is publishing this data effectively, making it available to various users within the company for revenue generation and competitive advantage. The problem is consolidating the data, and then getting it out of the data center in an easy-to-use, effective way.
This involves two activities: information processing and information management. A data center has many value-added information processing functions. For many companies (retail stores and airlines, for example), online transaction processing is value-added processing that brings profits to the corporation. For other companies, such as telecommunications firms, batch processing of billing information can affect many millions of dollars of cash flow daily. And in a tiered client/server application, easy access to centralized information can mean enhanced decision making by business unit managers. Any enterprise storage solution needs to enhance the effectiveness of these information processing and management activities.
If a business adds a disk subsystem that can harness the power of a legacy system to make it 20%, or even 40%, faster, then that business can delay upgrading the system CPU and achieve other cost-saving operational efficiencies. In a typical enterprise, these savings are measured in the hundreds of thousands of dollars.
If IT managers think more innovatively, enhanced functionality could mean shorter batch processing times, thus allowing extended online transaction processing (OLTP) hours and longer business hours. Or, through reduced response times for OLTP processing, this increased processing ability may allow companies to meet customers in new ways and new places. This could generate additional revenues measured in the millions of dollars.
How can such value-added information processing improvements be made? One way is by removing the need for information management within the CPU platform. With simple disk systems, for example, the host CPU needs to provide RAID (redundant array of inexpensive disks) protection capability, either by simple mirroring (RAID-1) or more complex parity protection (RAID-5). Over time, this processing has moved onto the disk subsystem, removing the burden from the CPU. With an integrated disk-cache subsystem and efficient cache memory algorithms, an enterprise storage system can provide much faster response times than available from disk alone - another example of information management migrating to the disk subsystem.
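The division of labor described above can be made concrete. The following sketch, written in Python purely for illustration (no particular product or API is implied), shows the two protection schemes named in the text: RAID-1 simply duplicates every block, while RAID-5 stores an XOR parity block per stripe that lets any single lost block be rebuilt from the survivors. Moving exactly this bookkeeping off the host is what frees the CPU for value-added work.

```python
def mirror_write(block: bytes) -> tuple[bytes, bytes]:
    """RAID-1: every write goes to two disks; either copy can serve reads."""
    return block, block

def parity_block(data_blocks: list[bytes]) -> bytes:
    """RAID-5: the parity block is the byte-wise XOR of a stripe's data blocks."""
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def reconstruct(surviving_blocks: list[bytes], parity: bytes) -> bytes:
    """Rebuild a failed disk's block by XOR-ing the survivors with the parity."""
    return parity_block(surviving_blocks + [parity])

# A three-disk stripe plus parity; contents are illustrative only.
stripe = [b"AAAA", b"BBBB", b"CCCC"]
p = parity_block(stripe)

# Lose the middle block, then rebuild it from the other two plus parity.
rebuilt = reconstruct([stripe[0], stripe[2]], p)
assert rebuilt == b"BBBB"
```

RAID-1 doubles capacity cost but keeps reads cheap; RAID-5 spends one disk's worth of parity per stripe but makes every write pay the cost of updating parity, which is precisely why pushing this work into an intelligent disk subsystem pays off.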
A cross-platform solution
However, such intelligent controllers are not sufficient, since they are platform-specific. A single controller that works only with one platform offers no investment protection. Enterprise storage, on the other hand, creates a shared, common information storage and management subsystem that can be used by CPUs throughout the enterprise: legacy, open, and PC-based systems. The functional lifespan of a given CPU is becoming shorter and shorter, but the lifespan of the data remains long. Consequently, an application can quickly move from a legacy mainframe to an open platform, or from one open platform to another. With enterprise storage, the storage investment cascades from one platform to another as needed, and a single storage subsystem can be used by multiple heterogeneous platforms.
Enterprise storage ensures that a business does not need to stop and remake its storage decisions every time there is a change in the organization. The same storage system, the same information management, can be used regardless of CPU platform.
How many times can a manager afford to revisit the company's storage decisions? Each revisit delays time-to-market, costs revenue, and surrenders competitive advantage. IT managers have enough difficult decisions to make concerning the data center quagmire over the next 36 to 60 months without constantly readdressing the information storage and management architecture decision.
In some cases, the information storage and management architecture decision forms the basis for the entire data center, and CPU vendors must certify that their products will work with the enterprise storage in order to compete. Data center managers are taking this approach because of another compelling reason for enterprise storage, one that mitigates the effect of information management housekeeping functions on information processing. While enterprise storage can be used to accelerate the value-added activities of a data center, it can also minimize the effects of such housekeeping functions as disaster recovery, data transfers between heterogeneous platforms, and data backup tasks, on OLTP and batch processing.
Enterprise storage is information-centric. By consolidating the information into a common storage subsystem, it provides a single view of information protection, a single view of information sharing, and a single view of information management. Managing pieces and pockets of information throughout the enterprise, often using different approaches, is costly and inefficient. By using a single approach, managers can ensure the consistency of information management. Some managers have stated that more than 30% of their resources, including MIPS on their processors, are dedicated solely to moving information around the enterprise: nearly a third of their resources devoted to non-value-added functions!
With information consolidated into a single subsystem, managers can start to relieve the processors and network of some information management functions. For example, to provide disaster recovery or data center migration functions, IT managers have used various CPU-to-CPU copy or backup schemes. But these schemes are often wasteful from the CPU perspective and require OLTP processing to be taken offline for extended periods. With an enterprise storage solution, this information protection function can be migrated to the disk subsystem itself, using remote mirroring in both campus-wide and extended-distance configurations.
Also, by utilizing multiple mirrored copies of the data, enterprise storage can provide separately addressable point-in-time copies of the data for tasks such as data backup and data extraction for decision-support systems like data warehouses. With the data located in a single subsystem, enterprise storage will enable various levels of data sharing in the future. For example, to free the network from bandwidth-consuming bulk data transfers and/or network backup, the internal cache bus structure of the subsystem can be used as a network between the various attached CPUs. With advances in exporting data to heterogeneous platforms, IT managers will be able to move data between different hosts and databases.
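The split-mirror mechanism implied above can be sketched in a few lines. This is a toy model under assumed semantics, not any vendor's implementation: a mirror tracks the live volume block for block until it is "split", at which point it becomes a separately addressable, frozen point-in-time image that a backup or data-extract job can read while OLTP continues against the live volume.

```python
class SplitMirror:
    """Toy model of a split-mirror point-in-time copy inside a storage
    subsystem: while attached, the mirror shadows every write; once split,
    it holds a frozen image of the volume at the moment of the split."""

    def __init__(self):
        self.live = {}        # live volume: block number -> contents
        self.mirror = {}      # mirrored copy of the live volume
        self.attached = True  # is the mirror still tracking writes?

    def write(self, block_no: int, data: bytes) -> None:
        self.live[block_no] = data
        if self.attached:
            self.mirror[block_no] = data  # mirror shadows the live volume

    def split(self) -> None:
        # The mirror stops tracking writes and becomes a separately
        # addressable point-in-time copy.
        self.attached = False

vol = SplitMirror()
vol.write(0, b"jan")
vol.split()                   # freeze a copy for backup / data extract
vol.write(0, b"feb")          # OLTP keeps writing to the live volume

assert vol.live[0] == b"feb"
assert vol.mirror[0] == b"jan"  # the backup job still sees the frozen image
```

Because the copy lives inside the subsystem, the backup or warehouse-extract job reads the frozen image without consuming host CPU cycles or network bandwidth, which is the point the text is making.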
The enterprise storage layer
In truth, the client/server architecture may be experiencing another change: the addition of an Enterprise Storage Layer. With the advent of Fibre Channel technology and direct network-attached storage capable of handling the NFS (Network File System) and SMB (Server Message Block) file-access protocols, storage can now be geographically separated from the host CPU. The enterprise itself will undergo a change: there will be a Fibre Channel network between the CPU and the enterprise storage (which acts as part storage and part network), and a direct connection to the user network so that unstructured file data can be stored directly in the storage subsystem.
Regardless of the application (enterprise OLTP, decision support, Internet, fileserver, or groupware), the data will be stored in a centralized location that can be shared, managed, and protected using a common system. Moreover, the Enterprise Storage Layer will experience a tremendous amount of growth as data is migrated from desktop systems.
Within a few years, the data discussion will no longer be about the amount of mainframe, open, or PC storage. Instead, it will be the choice between consolidated enterprise storage and distributed desktop storage.
Stephen Terlizzi is senior product manager of storage architects, EMC2 Japan KK. He can be contacted at terlizzi_stephen@isus.emc.com.
The data center quagmire
A quagmire is a problem that doesn't appear to have a solution, like quicksand. The average IT executive has legacy applications that are experiencing serious problems. For example, legacy systems are being taxed by workloads they were never meant to handle and are expected to provide functionality and flexibility that they were never meant to deliver.

Moreover, legacy systems are requiring increasing levels of maintenance. For instance, many businesses are facing a "Year 2000 problem" with their applications. To save storage and memory space in the past, application designers used only two digits to represent the year. With the new century approaching, applications now need to be modified or rewritten and tested to recognize 21st-century dates, all while ongoing operations are maintained.

Yet many organizations do not have the time and resources to deal adequately with legacy systems because of pressures to develop and introduce new applications, such as enterprise OLTP, data warehousing, groupware, and Internet applications. These new applications drive revenue and competitive advantage. Many managers, therefore, are precariously balancing the ongoing support of legacy systems against the time-to-market of new applications.

As if these pressures were not enough, the business environment is changing rapidly. A single phone call from the CEO announcing a new partnership, merger, or acquisition could send the business in a new direction, and the IT organization must make this change happen quickly. And business line managers must continually look for new ways and new places to meet customers: on the Internet, through video-on-demand, at in-store kiosks, and so on. In short, IT managers are being tasked with making efficient, useful, cost-effective data storage systems a reality.