The hyperscale data center segment is growing as the never-ending onslaught of data continues to require new and more agile transport systems. This growth creates a unique challenge for the teams tasked with managing assets and their lifecycles in these dynamic and sometimes dispersed hyperscale data center environments.
Hyperscale is a term usually associated with facilities on a massive scale. It also means an environment that is complex, constantly changing and dependent on scalability as demand grows. Operating hyperscale facilities at peak efficiency and using resources as responsibly as possible brings on a whole new infrastructure management era, especially when asset lifecycle management (dock to decom) is taken into account.
Dock to decom refers to asset management from the point at which the equipment is received on the loading dock, throughout its useful lifecycle, all the way through to decommissioning. Due to the sheer scale of hyperscale environments and their constant growth, these data centers require streamlined workflows and rapid application response times to support hundreds of instantaneous changes. Without a complete and consistent dock to decom strategy, hyperscale organizations can’t operate efficiently. However, data center infrastructure management (DCIM) solutions are helping and quickly becoming a vital part of hyperscale data center operations.
Figure 1: DOCK TO DECOM
With a DCIM system in place from the day an asset enters the environment, its location is tracked so management is aware of:
- What the asset is connected to;
- How much power it is drawing;
- The work it is performing;
- Its temperature and redundancy;
- The risks involved if the particular piece of equipment should fail.
In addition, a DCIM solution allows data center operators to manage changes to the asset, including physical location, as well as its place in the workflow. A rich set of information is provided, including who has worked on the equipment, as well as when and what type of work was performed.
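As a rough illustration, the kind of record a DCIM system maintains for each asset might look like the following sketch. The class and field names are hypothetical rather than taken from any particular product; they simply show how location, connectivity, power, temperature, redundancy and a work history can be kept together in one tracked object.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class WorkRecord:
    """One entry in the asset's change history: who did what, and when."""
    performed_by: str
    work_type: str          # e.g. "install", "cable move", "firmware update"
    performed_at: datetime

@dataclass
class Asset:
    """Illustrative dock-to-decom asset record tracked by a DCIM system."""
    asset_id: str
    location: str                                           # site / room / rack / rack unit
    connected_to: List[str] = field(default_factory=list)   # upstream switches, PDUs, patch ports
    power_draw_watts: float = 0.0
    workload: Optional[str] = None                           # the work the asset is performing
    inlet_temp_c: Optional[float] = None
    redundancy: Optional[str] = None                         # e.g. "N+1"
    failure_risk: Optional[str] = None                       # impact if this equipment fails
    history: List[WorkRecord] = field(default_factory=list)

    def record_work(self, performed_by: str, work_type: str) -> None:
        """Append an auditable entry each time the asset is worked on."""
        self.history.append(
            WorkRecord(performed_by, work_type, datetime.now(timezone.utc))
        )
```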
It is the data center manager’s job to know:
- Where the asset is in its lifecycle;
- The maintenance schedule;
- Warranty information (i.e., whether keeping the particular asset will be more expensive than replacing it; a simple comparison along these lines is sketched after this list);
- When it should be decommissioned.
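The keep-or-replace question largely comes down to simple arithmetic once the warranty lapses. The sketch below is a deliberately rough heuristic with made-up figures, assuming maintenance and replacement costs are known; a real decision would weigh many more factors.

```python
from datetime import date
from typing import Optional

def keep_or_replace(annual_maintenance_cost: float,
                    replacement_cost: float,
                    warranty_end: date,
                    remaining_service_years: float,
                    today: Optional[date] = None) -> str:
    """Rough keep-vs-replace check: once the warranty lapses, compare the
    maintenance spend expected over the remaining service life with the
    cost of replacing the asset outright."""
    today = today or date.today()
    if today <= warranty_end:
        return "keep (still under warranty)"
    projected_maintenance = annual_maintenance_cost * remaining_service_years
    if projected_maintenance > replacement_cost:
        return "flag for decommissioning / replacement"
    return "keep (maintenance still cheaper than replacement)"

# Example with illustrative figures: $4,000/year maintenance over 3 remaining
# years versus a $10,000 replacement, on an asset whose warranty has expired.
print(keep_or_replace(4000, 10000, date(2023, 1, 1), 3))
```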
Managing complex, highly dynamic data center environments requires a distributed enterprise architecture that can scale across multiple physical servers for a real-time, interactive user experience. The DCIM solution should offer features to ensure:
- Rapid application response times under massive load to enable deployment of new applications and equipment with high levels of service while reducing IT costs;
- Efficient scalability to enable services, such as backup, restore, resiliency, redundancy and load balancing;
- Connectors to support integration with existing management frameworks for virtualization, the configuration management database (CMDB), the service desk and the inventory management applications used for goods receiving;
- A “single pane of glass” view of the entire organization that enables a consistent approach to managing assets, capacity and energy use.
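The connector and single-pane-of-glass points can be pictured as a thin integration layer. The interface below is purely illustrative, assuming each external system (CMDB, service desk, goods-receiving inventory) can return asset records keyed by a shared asset ID; real products expose their own APIs.

```python
from abc import ABC, abstractmethod
from typing import Dict, Iterable, List

class Connector(ABC):
    """Hypothetical integration point: each external system (CMDB, service desk,
    goods-receiving/inventory application) implements the same interface so the
    DCIM layer can merge their records into one view."""

    @abstractmethod
    def fetch_assets(self) -> Iterable[Dict]:
        """Return asset records as plain dicts sharing an 'asset_id' key."""

def single_pane_view(connectors: List[Connector]) -> Dict[str, Dict]:
    """Merge records from every connected system into one inventory keyed by
    asset ID, so no department or location remains a silo."""
    unified: Dict[str, Dict] = {}
    for connector in connectors:
        for record in connector.fetch_assets():
            unified.setdefault(record["asset_id"], {}).update(record)
    return unified
```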
This type of management is particularly challenging with facilities that are geographically dispersed. Each location typically has its own system for tracking assets and infrastructure, and if these systems don’t interoperate, each department or location can become a silo, making it difficult to extract and combine data. A unified view across all facilities is needed to prevent misunderstandings, errors and repetition of work.
Additionally, the DCIM solution must provide the following:
- The ability to handle the sheer volume of equipment and constant change; with large numbers of assets that are constantly changing and on the move, all of this data must be tracked accurately, so the DCIM solution has to be flexible and uniquely nimble.
- Real-time data for informed decision-making, greater energy efficiency and disaster prevention.
- A holistic, real-time view of what is going on in the entire data center to allow operators to make the best business decisions quickly. In addition, it allows for adjustments to avoid situations such as an overload or overheating incident. Real-time data also simplifies the deployment of new assets or the rearrangement of equipment for greater efficiency, as well as stranded capacity discovery.
- Discovery of the power draw of each piece of equipment for load balancing (a simple polling sketch follows this list).
- Long-term trending data for capacity planning; trend data supports planning for future needs as well as analysis of which equipment may be underutilized, so consolidation or retirement of those assets can be considered. Capital expenditures can be delayed or even canceled if existing space, power and cooling capacity is used to its fullest potential.
- The ability to communicate with all types of equipment, regardless of age or manufacturer; another challenge in the hyperscale environment is the number and variety of equipment types from different suppliers and vintages. The DCIM solution should be able to communicate with and gather data from this wide variety of sources, either natively or through the use of connectors.
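Tying several of these points together, a minimal polling pass might look like the sketch below. The protocol readers are stand-ins for whatever SNMP, Modbus, IPMI or vendor-specific connectors are actually in use, and the thresholds and figures are illustrative only.

```python
from statistics import mean
from typing import Callable, Dict, List

# Each device exposes its readings through a protocol-specific reader
# (SNMP, Modbus, IPMI, vendor API, ...). Readers here are placeholders.
PowerReader = Callable[[], float]   # returns instantaneous draw in watts

def poll_power(readers: Dict[str, PowerReader],
               rack_capacity_watts: float,
               history: Dict[str, List[float]]) -> List[str]:
    """One polling pass: gather the draw of every asset, keep each sample for
    long-term trending, and flag the rack if it is approaching overload."""
    alerts: List[str] = []
    total = 0.0
    for asset_id, read in readers.items():
        watts = read()
        history.setdefault(asset_id, []).append(watts)  # trend data for capacity planning
        total += watts
    if total > 0.9 * rack_capacity_watts:  # illustrative 90% threshold
        alerts.append(f"rack at {total:.0f} W of {rack_capacity_watts:.0f} W capacity")
    return alerts

def average_draw(history: Dict[str, List[float]]) -> Dict[str, float]:
    """Per-asset average draw: a crude input for load balancing and for spotting
    underutilized equipment that may be a consolidation candidate."""
    return {asset_id: mean(samples) for asset_id, samples in history.items() if samples}

# Example with stub readers standing in for real protocol connectors:
history: Dict[str, List[float]] = {}
readers = {"srv-01": lambda: 420.0, "srv-02": lambda: 310.0}
print(poll_power(readers, rack_capacity_watts=800.0, history=history))
print(average_draw(history))
```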
A data-driven world requires data centers to operate at a rate and scale never before imagined. Data center operators must meet these demands to remain competitive and to increase the speed at which they deploy new applications, all while remaining cost-effective. Hyperscale data centers are fast becoming the norm, and the solutions that manage and optimize these facilities must meet these specific demands to ensure rapid application response times at any scale.