HAL in the Datacenter

Full Datacenter Automation – minus the AI (for now)

In Arthur C. Clarke’s 2001: A Space Odyssey, HAL 9000 was the AI. Everyone knows that. But more relevant to today’s automation efforts in the datacenter, HAL also controlled all of the systems on the spaceship. That, in the long run, is where we’re headed with operations automation and DevOps. We want not just management tools that are automated, but orchestration tools that are too, and the more automation the better. In the end, it will still take a human to replace physical hardware, but everything software-related, from initial installation to the app’s end of life, would ideally be automated with as little human intervention as possible.

This still makes some operations folks antsy. Frankly, it shouldn’t. While the ability to quickly deploy systems, upgrade systems, and fix some common problems – all automated – will cut the hours invested in deploying and configuring tools, it will not cut total hours. The reason is simple: look at what else we’re doing right now. Cloud is being integrated into the datacenter, meaning we have twice the networks and security systems to maintain; if the cloud is internal, we have a whole additional application layer to maintain – easier today with VMware than with OpenStack, but neither is free in terms of man-hours – and on top of that we’re working on continuous integration. That’s all before you do anything specific to your business or market. But enough, I digress.

The thing about server provisioning, no matter how thorough it is, is that application provisioning is a separate step. If you’ve been reading along with my thoughts on this topic here and at DevOps.com, then you know this already.

By extension, the thing about Full Layered Application Provisioning – FLAP, as presented at DevOps.com – is that it too leaves us short. You have a server, fully configured. You have an application (or part of an application in clustered-services scenarios, or multiple applications), and it is ready to rock the world. Totally configured: everything on that box from the RAID card to the app GUI is installed and configured… but the infrastructure hasn’t been touched.

This is a problem most of the marketplace recognizes. If you look closely at application provisioning tools like Puppet and Chef, you can see they are integrating network infrastructure configuration into application provisioning through partnerships.
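
Puppet and Chef surface this through their own modules and partner integrations, but purely as a sketch of the underlying task, here is what pushing a piece of network configuration looks like in Python with the netmiko library. The device type, address, credentials, and VLAN values are all placeholders – substitute whatever your switches and naming conventions require.

```python
from netmiko import ConnectHandler

# Placeholder switch details - substitute your own device and credentials.
switch = {
    "device_type": "cisco_ios",
    "host": "10.0.0.1",
    "username": "admin",
    "password": "secret",
}

# Push a VLAN for the new application tier onto the switch -
# the kind of step a provisioning partnership automates for you.
with ConnectHandler(**switch) as conn:
    output = conn.send_config_set([
        "vlan 120",
        "name app-tier-120",
    ])
    print(output)
```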

This is a good thing, but it is not at all clear that application provisioning is the right place in the operations automation stack for this type of configuration. While you could make a case that the application owner knows what they need in terms of security, network, and remote disk, you could also make the case that because these are limited resources placed on the corporate network for shared use, something higher in the stack than the application provisioning tool should be handling these configurations.

Interestingly, in many of these cases the real work is integrating the automation of the tool in question with your overall processes. One of the last projects I worked on at F5 Networks was to call their BIG-IQ product’s APIs and tell it to do what I needed, when I needed it, as part of a larger project. This is pretty standard for the orchestration piece of the automation puzzle, and the existence of these types of APIs explains the move by application provisioning vendors to pull this control into their systems.
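
In practice the pattern is usually authenticated REST calls driven by your orchestration flow. Here is a minimal Python sketch of that shape; the host, resource path, and payload are illustrative stand-ins, not the product’s documented API, so treat this as the pattern rather than a recipe.

```python
import requests

BASE = "https://bigiq.example.com"   # illustrative management host

# Authenticate once and reuse the session for subsequent calls.
session = requests.Session()
session.auth = ("admin", "secret")   # placeholder credentials
session.verify = False               # lab only - verify certificates in production

# Illustrative call: tell the management tier to deploy a configuration
# when the larger orchestration flow decides it is time.
resp = session.post(
    f"{BASE}/mgmt/example/deploy",   # illustrative path, not a documented endpoint
    json={"app": "storefront", "action": "deploy"},
)
resp.raise_for_status()
print(resp.json())
```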

Let’s stop for a moment and talk about what we need in place to build a HAL-like control layer. There is a combination of pieces that can be divided numerous ways (and let me tell you, while writing this I worked through graphics and whiteboard drawings reflecting most of those ways). Assume in the following diagrams that we are not simply talking about deployment; we are also talking about upgrades and redeployment to recover after hardware errors or software instability. That simplifies the drawings enough that I can fit them into a blog, and it is useful for our discussion.

Assume further that this diagram also has a “Public Cloud” section consisting of just the top part of the private cloud stack – with no infrastructure on site and in the realm of operations’ responsibility, it begins at “Instance spin up”, but otherwise the required steps are the same.

In an attempt to keep this image consumable, you will notice that I ignored the differences in configuration between VM and container provisioning. There are differences, but there are more similarities from a spin-up perspective – server provisioning products like Cobbler and Stacki treat both as servers, for example. Truth be told, from an operations perspective containers lie somewhere between cloud (pick a pre-built image and spin it up) and VMs (install an image and the apps that run on it). Containers should have their own stack in the diagram, but you can see it was getting rather tight, and since they share traits with the other two, I decided to lump them in with one of them.

Those familiar with both cloud and VMs will take issue with my use of “OS Provisioning” for both – they use entirely different mechanisms to achieve that step – but both do need configuration done on the OS, so I chose to include the step. A cloud image needs its IP, connections, and storage all set up, and some pre-built cloud images actually take a lot of post-spin-up configuration, depending upon the purpose of the image and what technologies it incorporates. So while on the VM side provisioning includes OS install and configuration, on the cloud side it involves image spin-up and configuration.
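
To make the cloud half of that concrete, here is a minimal sketch using AWS’s boto3, where “OS provisioning” becomes spinning up an image and handing it its post-spin-up configuration as user data for cloud-init to run on first boot. The AMI ID, instance type, and script contents are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Post-spin-up configuration travels with the instance as user data,
# which cloud-init executes on first boot.
user_data = """#!/bin/bash
hostnamectl set-hostname app-01
mkfs -t ext4 /dev/xvdb && mount /dev/xvdb /data
"""

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image ID
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,                # boto3 handles the base64 encoding
)
print(resp["Instances"][0]["InstanceId"])
```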

And even this image doesn’t give us the full picture of datacenter automation. If we shrink this image down to a box, we can then use the following to depict the overall architecture of a total automation solution:

In this diagram, “Server Provisioning” is the entire first diagram of this blog, and the other boxes are external items that need configuration – NAS or SAN disk creation (or cloud storage allocation), application security policy and network security configuration, and the overall network (subnet config, VLAN config, inter-VLAN routing, etc.). These things could be left in the realm of manual configuration, because they generally don’t change as much as the servers utilizing them, but they can be automated today… The question is whether it’s worth it in your environment, and I don’t have that answer – of course, you do.

We’re moving more and more in this direction, where you as an administrator, ops, or DevOps person will say, "New project. Give me X amount of disk, Y ports on a VLAN, apply these security policies, and allocate Z servers to it – two as web servers, the rest as application servers with this engine." And that will be it; the environment will spin up. Long term, the environment will spin up in spite of errors, but short term, the error correction facility will be that subset known in some other great sci-fi books as meatware.
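
As a sketch of what that request might look like as data, here is a hypothetical project spec parsed in Python – the field names and the dispatch functions are stand-ins for whatever your orchestration layer actually exposes.

```python
import yaml

SPEC = """
project: storefront
disk_gb: 500
vlan_ports: 8
security_policies: [web-standard, pci]
servers:
  web: 2
  app: 4
engine: tomcat
"""

spec = yaml.safe_load(SPEC)

# Hypothetical dispatch functions - each would wrap a real subsystem API.
def allocate_disk(gb):
    print(f"allocating {gb} GB of shared disk")

def configure_vlan(ports):
    print(f"reserving {ports} ports on a new VLAN")

def apply_policies(policies):
    print(f"applying security policies: {policies}")

def provision(role, count, engine=None):
    suffix = f" running {engine}" if engine else ""
    print(f"spinning up {count} {role} server(s){suffix}")

allocate_disk(spec["disk_gb"])
configure_vlan(spec["vlan_ports"])
apply_policies(spec["security_policies"])
provision("web", spec["servers"]["web"])
provision("app", spec["servers"]["app"], spec["engine"])
```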

What can you do to prepare for this future? The best first step is to get server provisioning down – first with hardware and VMs, because they’re basically the same and someone will always spin up the hardware – then get it down with cloud and Docker. Finally, become an expert in one of the application provisioning tools. In essence, the contents of that first diagram are very real today, while the bits added in the second are evolving rapidly as you read this, so work on what’s real today to increase understanding and speed adoption. It helps that doing so will (after implementation) free up some time.
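
If containers are the piece you are newest to, the Docker SDK for Python makes the "containers are just servers to spin up" point tangible. The image and port mapping below are just examples.

```python
import docker

# Talk to the local Docker daemon.
client = docker.from_env()

# Spin up a container much as a provisioning tool spins up a server:
# pick an image, map its service port, and let it run detached.
container = client.containers.run(
    "nginx:latest",
    detach=True,
    ports={"80/tcp": 8080},   # host port 8080 -> container port 80
    name="web-01",
)
print(container.id, container.status)
```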

Of course I have my preferences for what you should learn (I DO work for a hardware/server provisioning vendor, after all), but I would refer you to my DevOps.com articles linked above for a more balanced look at what might suit your needs if you haven’t already started down this path.

The other thing you can do is start looking at logging and monitoring facilities. They will be an integral part of any solution you consider – you cannot resolve problems on systems that just sprouted up on demand unless you can review the logs and see what went wrong. In an increasingly complex environment, that is more true than ever. I’ve seen minor hardware issues bury an entire cluster, and without log analysis, that would have been hard to track down.
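
Even a small amount of log analysis pays off here. Below is a minimal sketch that scans a syslog-style file and counts error lines per host – the file path and message format are assumptions about your environment, and a real deployment would use a proper log aggregation tool.

```python
import re
from collections import Counter

LOG_PATH = "/var/log/syslog"   # assumed location; adjust for your systems

# Assumed syslog-ish format: "Mon DD HH:MM:SS host process: message"
line_re = re.compile(r"^\w{3}\s+\d+\s+[\d:]+\s+(\S+)")

errors = Counter()
with open(LOG_PATH) as log:
    for line in log:
        if "error" in line.lower():
            match = line_re.match(line)
            if match:
                errors[match.group(1)] += 1

# A single host dominating the error count is often the buried hardware fault.
for host, count in errors.most_common(10):
    print(f"{host}: {count} errors")
```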

It’s getting to be a fun time in the datacenter. Lots of change, thankfully much of it for the better!

More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.
