


Data Center Transformation Advice

Fast-changing demands on data centers drive the need for comprehensive data center infrastructure management

Once the province of IT facilities planners, the management and automation of data centers has rapidly grown in scope and importance.

As software-driven data centers have matured and advanced to support unpredictable workloads like hybrid cloud, big data, and mobile applications, the ability to manage and operate that infrastructure efficiently has grown increasingly difficult.

At the same time, as enterprises seek to rationalize their applications and data, centralization and consolidation of data centers has made their management even more critical -- at ever larger scale and density.

So how do enterprise IT operators and planners keep their data centers from spinning out of control despite these new requirements? How can they leverage the best of converged systems and gain increased automation, as well as rapid analysis for improving efficiency?

BriefingsDirect recently posed such questions to two experts from HP Technology Services to explore how new integrated management capabilities are providing the means for better and automated data center infrastructure management (DCIM).

To learn more on how disparate data center resources can be integrated into broader enterprise management capabilities and processes, now join Aaron Carman, HP Worldwide Critical Facilities Strategy Leader, and Steve Wibrew, HP Worldwide IT Management Consulting Strategy and Portfolio Lead. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Learn more about DCIM.]

Here are some excerpts:

Gardner: What’s forcing these changes in data center management and planning and operations? What are these big new requirements? Why is it becoming so difficult?

Carman: In the past, folks were dealing with traditional types of services that were on a traditional type of IT infrastructure. Standard, monolithic-type data centers were designed one-off. In the past few years, with the emergence of cloud and hybrid service delivery, as well as some of the different solutions around convergence like converged infrastructures, the environment has become much more dynamic and complex.

Hybrid services

So, many organizations are trying to grapple with, and deal with, not only the traditional silos that are in place between facilities, IT, and the business, but also deal with how they are going to host and manage hybrid service delivery and what impact that’s going to have on their environment.


It’s not only about what the impact is going to be on rolling out new infrastructure solutions like converged infrastructures from multiple vendors, but how to increasingly provide more flexibility and services to their end users as digital services.

It's become much more complex and a little harder to manage, because the number of separate tools used to manage these environments has continued to increase.

Gardner: Steve, I suppose too that with ITIL v3 and more focus on a service-delivery model, even the very goal of IT has changed.

Wibrew: That's very true. We’re seeing a trend in the change and role of IT to the business. Previously IT was a cost center, an overhead to the business, to deliver the required services. Nowadays, IT is very much the business of an organization, and without IT, most organizations simply cease to function. So IT, its availability and performance, is a critical aspect of the success of the business.

Gardner: What about this additional factor of big data and analysis as applied to IT and IT infrastructure? We’re getting reams and reams of data that needs to be used and managed. Is that part of what you’re dealing with as well?


Wibrew: That’s certainly a very important part of the converged-management solution. There’s been a tremendous explosion in the amount of data, the amount of management information, that's available. If you narrow that down to the management information associated with operating management and supporting data centers from the facility to the applications, to the platforms right up to the services to the business, clearly that's a huge amount of information that’s collected or maintained on a 24×7 basis.

Making good and intelligent decisions on that is quite a challenge for many organizations. Quite often, people still work in isolated, siloed teams without good interaction between them. It's a challenge to draw that information together so businesses can make intelligent choices based on analytics of that end-to-end information.

Gardner: Aaron, I’ve heard that word "silo" now a few times, siloed teams, siloed infrastructure, and also siloed management of infrastructure. Are we now talking about perhaps a management of management capabilities? Is that part of your story here now?

Added burden

Carman: It is. For the most part, most organizations, when faced with trying to manage these different areas -- facilities, IT, and service delivery -- have come up with their own set of run books, processes, tools, and methodologies for operating their data center.

When you put that onto an organization, it's just an added burden for them to try to get vendors to work with one another and integrate software tools and solutions. What the folks that provide these solutions have started to realize is that there needs to be an interoperability between these tools. There has never really been a single tool that could do that, except for what has just emerged in the past few years, which is DCIM.

HP really believes that DCIM is a foundational, operational tool that will, when properly integrated into an environment, become the backbone for operational data to traverse from many of the different tools that are used to operate the data center, from IT service management (ITSM), to IT infrastructure management, and the critical facilities management tools.

Gardner: I suppose yet another trend that we’re all grappling with these days is the notion of things moving to as-a-service, on-demand, or even as a cloud technology. Is that the case, too, with DCIM, that people are looking to do this as a service? Are we starting to do this across the hybrid model as well?

Today, clients have a huge amount of choice in terms of how they provision and obtain their IT.

Carman: Yes. These solution providers are looking toward how they can penetrate the market and provide services to all different sizes of organizations. Many of them are looking to a software-as-a-service (SaaS) model to provide DCIM. There has to be a very careful analysis of what type of a licensing model you're going to actually use within your environment to ensure that the type of functionality you're trying to achieve is interoperable with existing management tools. [Learn more about DCIM.]

Wibrew: Today, clients have a huge amount of choice in terms of how they provision and obtain their IT. Obviously, there are the traditional legacy environments and the converged systems and clients operate in their own cloud solutions.

Or maybe they're even going out to external cloud providers, and these dynamics really do increase the complexity of where they get services from. This needs to be baked into that converged solution, around the interoperability and interfacing between multiple systems, so IT is truly a business supporting the organization and providing end-to-end services.

Organizations struggling

Carman: Most organizations are really struggling to introduce DCIM into their environment, since at this point, it's really viewed as more of a facilities-type tool. The approach from different DCIM providers varies greatly on the functions and features they provide in their tool. Many organizations are struggling just to understand which DCIM product is best for them and how to incorporate it into a long-term strategy for operations management.

So the services that we brought to market address that specifically, not only from which DCIM tool will be best for their environment, but how it fits strategically into the direction they want to take from hosting their digital services in the future.

Gardner: Steve, I think we should also be careful not to limit the purview of DCIM. This is not just IT; it includes facilities, the hybrid service-delivery model, and management capabilities. Maybe you could help us put the proper box around DCIM. How far should it go, and why? Or should we narrow it so that it doesn't become diluted or confused?

Wibrew: Yeah, that’s a very good question, an important one to address. What we’ve seen is what the analysts have predicted. Now is the time, and we’re going to see huge growth in DCIM solutions over the next few years.

DCIM alone is not the end-to-end solution.

DCIM has really been the domain of the facilities team, and there’s traditionally been quite a lack of understanding of what DCIM is all about within the IT infrastructure management team. If you talk to lot of IT specialists, the awareness of DCIM is still quite limited at the moment. So they certainly need to find out more about it and understand the value that DCIM can bring to IT infrastructure management.

I understand that features and functions do vary, and the extent of what DCIM delivers will vary from one product to another. It’s very good certainly around the facilities space in terms of power, cooling, and knowing what’s out on the data center floor. It’s very good at knowing what’s in the rack and how much power and space has been used within the rack.

It’s very good at cable management, the networks, and for storage and the power cabling. The trend is that DCIM will evolve and grow more into the IT management space as well. So it’s becoming very aware of things like server infrastructure and even down to the virtual infrastructure, as well, getting into those domains.

DCIM will typically have workflow capabilities for change and activity management. But DCIM alone is not the end-to-end solution, and we realized the importance of integrating it with the full ITSM and platform-management solutions. A major focus over the past few months has been to make sure that DCIM solutions integrate very well with the wider IT service-management solutions, to provide that integrated, end-to-end, holistic management solution across the entire data-center ecosystem.

Great variation

Carman: With DCIM being a newer solution within the industry, I want to be very careful about calling folks DCIM specialists. We feel that we have a very great knowledge of the solutions out there. They vary so greatly.

It takes a collaborative team of folks within HP, as well as with the client, to truly understand what they’re trying to achieve. You could even pull it down to what types of use cases they’re trying to achieve for the organization, which tool works best and in interoperability and coordination with the other tools and processes they have.

We have a methodology framework called the Converged Management Framework that focuses on four distinct areas for an optimized solution and strategy: starting with business goals, understanding what the true key performance indicators are, and what dashboards are required.

It looks at what the metrics are going to be for measuring success and couples that with understanding organizationally who is responsible for what types of services we provide as an ultimate service to our end user. Most of the time, we’re focusing on the facilities in IT organization. [Learn more about DCIM.]

Also, those need to be aligned to the process and workflows for provisioning services to the end users, supported directly by a system’s reference architecture, which is primarily made up of operational management tools and software. All those need to be supported by one another and purposefully designed, so that you can meet and achieve the goals of the business.

IT infrastructure, right up to services of a business, end to end, is very large and very, very complex.

When you don't do that, the time it takes to deliver services to your end user lengthens and costs money. When you have separate tools that are not referencing single points of data, you spend a lot of time rationalizing and checking whether the data in front of you is accurate. All this boils down not only to cost but to resilient operations: knowing that when you look at a particular device or set of devices, you truly understand what it's providing, end to end, to your users.

Wibrew: If you think about the scope of managing everything from the facilities and the IT infrastructure right up to the services of a business, end to end, it is very large and very, very complex. We have to break it down into smaller, more manageable chunks and focus on the key priorities.

Most-important priorities

So we look across the client's organization and work with them to identify their most important priorities in terms of their converged-management solution and their journey.

It's heavily structured around ITSM and ITIL processes, and we've identified some great candidates within ITIL for integration between facilities and IT. It's really a case of working out the prioritized journey for that particular client. Probably the most important integration would be a single view of the truth for operational data -- unified asset information.

CMDBs within a configuration management system might be the first and most important integration between the two, because they're the foundation for other follow-on services. Until you know what you've got, it's very difficult to plan what you'll need in the future in terms of infrastructure.

Another important integration that is now possible with these converged solutions is the integration of power management in terms of energy consumption between the facilities and the IT infrastructure.

These integrated solutions can be more granular, far more dynamic around energy consumption.

If you think about managing the power efficiency of the data center with a metric like PUE (power usage effectiveness), generally speaking, in the past that was the domain of the facilities team. The IT infrastructure would simply be hosted in the facility.

The IT teams didn't really care about how much power was used. But these integrated solutions can be far more granular and dynamic around energy consumption, with much more information being collected, not just at the facility level but within the racks, in the power-distribution units (PDUs), in the blade chassis, right down to individual servers.

We can now know what the energy consumption is. We can now incentivize the IT teams to take responsibility for energy management and energy consumption. This is a great way of reducing a client's carbon footprint and energy consumption within the data center through these integrated solutions.
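To make the facility-versus-IT split concrete, here is a minimal sketch of the kind of arithmetic behind a PUE dashboard. The rack names and kW readings are invented for illustration; real DCIM tools would pull these from metered PDUs and the utility feed.

```python
# Hypothetical sketch: computing PUE (power usage effectiveness) from
# facility-level and per-rack power readings. All values are invented.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power (1.0 is ideal)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Per-rack IT load as reported by (hypothetical) PDU meters, in kW.
pdu_readings_kw = {"rack-A1": 4.2, "rack-A2": 3.8, "rack-B1": 5.5}

it_load_kw = sum(pdu_readings_kw.values())  # total metered IT load
facility_load_kw = 21.6                     # utility feed, incl. cooling and losses

print(f"IT load: {it_load_kw:.1f} kW")                   # 13.5 kW
print(f"PUE: {pue(facility_load_kw, it_load_kw):.2f}")   # 1.60
```

With rack-level readings like these, the IT team can be charged back for their share of consumption, which is what makes the incentive model described above workable.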

Gardner: Aaron, I suppose another important point to be clear on is that, like many services within HP Technology Services, this is not just designed for HP products. This is an ecumenical approach to whatever is installed in terms of product facility management capability. I wonder if you could explain a bit more HP’s philosophy when it comes to supporting the entire portfolio. [Learn more about DCIM.]

Carman: The professional services HP is offering in this space are agnostic to the final solution. We understand that a customer has been running their environment for years and has made investments in a lot of different operational tools over that time.

That’s a part of our analysis and methodology, to come in and understand the environment and what the client is trying to achieve. Then we put together a strategy, a roadmap of different products, that will help them achieve their goals that are interoperable.

Next level

We continue to transform them to the next level of capabilities they're looking to achieve, especially around how they provision services, and help them ultimately become, in most cases, a cloud-service provider to their end users, with heavy levels of automation built in, so that they can deliver digital services in a much shorter period of time.

Gardner: I realize this is fairly new. It was just on Jan. 23 that HP announced some new services that include converged-management consulting, and that management framework was updated with new technical requirements. You have four new services organized with the management workshop, roadmap, design implementations, and so forth. [Learn more about DCIM.]

So this is fairly new, but Steve Wibrew, is there any instance where you've worked with an organization and some of the really powerful benefits of doing this properly have shown through? Do you have any anecdotes about an organization that's done this, and maybe some interesting ways it's benefited them, perhaps even unintended consequences?

Data-center transformation

Wibrew: The starting point is to understand what’s there in the first place. I’ve been engaged with many clients where if you ask them about inventory, what’s in the data center, you get totally different answers from different groups of people within the organization. The IT team wants to put more stuff into the data center. The facilities team says, “No more space. We’re full. We can’t do that.”

I've found that when you pull this data together from multiple sources and get a consistent view of the truth, you can start to plan far more accurately and efficiently. Perhaps the lack of space in the data center is because there's infrastructure sitting there, powered on, and not being utilized by anybody.

Often that equipment is simply redundant. I've had many situations where, by pulling together a consistent inventory, we could get rid of a lot of redundant equipment, freeing space for major initiatives and expansion projects. So there are some examples of the benefits of consolidated inventory and information.
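The inventory reconciliation described above boils down to comparing the asset lists each team holds. The following is a toy sketch of that comparison; the asset tags and the idea of just two views are invented for illustration, and a real CMDB integration would match on richer records than bare identifiers.

```python
# Hypothetical sketch: reconciling asset inventories held by separate
# facilities and IT teams to get a single view of the truth.
# All asset tags here are invented.

facilities_view = {"SRV-001", "SRV-002", "SRV-003", "SRV-004"}  # racked in the data center
it_view = {"SRV-001", "SRV-002", "SRV-005"}                     # actively managed by IT

# Racked and powered, but unknown to IT: decommissioning candidates.
unmanaged = facilities_view - it_view
# Managed by IT, but not found on the data center floor: stale records.
stale = it_view - facilities_view

print(sorted(unmanaged))  # ['SRV-003', 'SRV-004']
print(sorted(stale))      # ['SRV-005']
```

The "unmanaged" bucket is exactly the powered-on-but-unused equipment whose removal frees rack space for expansion projects.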

DCIM is the only tool poised to become that backbone between the facilities and IT infrastructures.

Gardner: As we look a few years out, what about big-data requirements, hybrid-cloud requirements, infrastructure KPIs for service delivery, and energy and carbon pressures? What's the outlook for doing this? Should we expect ongoing demand, and also an ongoing and improving return on the investments made in these consulting services and DCIM?

Carman: Based upon a lot of the challenges that we outlined earlier in the program, we feel that in order to operate efficiently, this type of a future state operational-tools architecture is going to have to be in place, and DCIM is the only tool poised to become that backbone between the facilities and IT infrastructures.

So more and more, with a lot of the challenges of a shrinking compute footprint and requirements different from those of the past, we're now dealing with a storage or data explosion, where the data center fills up with storage.

As these new demands from the business come down and force organizations onto new types of technology infrastructure platforms they haven't dealt with in the past, it requires them to be much more flexible when they have, in most cases, very inflexible facilities. That's the strength of DCIM and what it can provide in just that one instance.

But more and more, the business expects digital services to be almost instant. They want to capitalize on the market at that moment. They don't want to wait weeks or months for enterprise IT to provide a service so they can take advantage of a new offering. So it's forcing folks to operate differently, and that's where converged management is poised to help these customers.

Looking to the future

Gardner: Steve, when you look into your crystal ball and think about how things will be in three to five years, what is it about DCIM and some of these services that you think will be most impactful?

Wibrew: I think the trend we're going to see is a far greater adoption of DCIM. It's only deployed in a small number of data centers at the moment. That's going to increase quite dramatically, and this could be a much tighter alignment between how the facilities are run and how the IT infrastructure is operated and supported. It could be far more integrated than it is today.

The roles of IT are going to change, and a lot of the work now is still around design, planning, scripting, and orchestrating. In the future, we're going to see people, almost like a conductor in an orchestra, overseeing the operations within the data center through leading highly automated and optimized processes, which are actually delivered by automated solutions.

Gardner: I benefited greatly in learning more about DCIM on the HP website. There were videos, white papers, and blog posts, so there's quite a bit of information for those interested in learning more about DCIM. The HP Technology Services website was a great resource for me. [Learn more about DCIM.]


More Stories By Dana Gardner

At Interarbor Solutions, we create the analysis and in-depth podcasts on enterprise software and cloud trends that help fuel the social media revolution. As a veteran IT analyst, Dana Gardner moderates discussions and interviews that get to the meat of the hottest technology topics. We define and forecast the business productivity effects of enterprise infrastructure, SOA, and cloud advances. Our social media vehicles become conversational platforms, powerfully distributed via the BriefingsDirect Network of online media partners like ZDNet. As founder and principal analyst at Interarbor Solutions, Dana Gardner created BriefingsDirect to give online readers and listeners in-depth and direct access to the brightest thought leaders on IT. Our twice-monthly BriefingsDirect Analyst Insights Edition podcasts examine the latest IT news with a panel of analysts and guests. Our sponsored discussions provide a unique, deep-dive focus on specific industry problems and the latest solutions. This podcast equivalent of an analyst briefing session -- made available as a podcast, transcript, and blog to any interested viewer and search-engine seeker -- breaks the mold on closed knowledge. These informational podcasts jump-start conversational evangelism, drive traffic to lead-generation campaigns, and produce strong SEO returns. Interarbor Solutions provides fresh and creative thinking on IT, SOA, cloud, and social media strategies based on the power of thoughtful content, made freely and easily available to proactive seekers of insights and information. As a result, marketers and branding professionals can communicate inexpensively with self-qualifying readers and listeners in discreet market segments. BriefingsDirect podcasts hosted by Dana Gardner: full turnkey planning, moderating, producing, hosting, and distribution via blogs and IT media partners of essential IT knowledge and understanding.
