
Hardware By Hand

The world of datacenter automation is a complex one. We all knew that going into it, but then we faced the reality of varying hardware, operating systems, networks, security tools, programming languages, app servers… the list goes on. While we have conquered the general provisioning tasks, we now have to figure out what to do with the complex, fiddly bits.

Recently on Twitter, we were discussing how hardware incompatibilities are a thorn in the side of any automation project, including DevOps. Let’s say you’ve set up a state-of-the-art provisioning system that will spin up VMs, containers, or a physical server and provision application A on it. So far, so good. We want easy, stable, repeatable processes to roll out software and services.

But then you have a piece of hardware (be it a server that will support VMware, a compute node added to OpenStack, or a stand-alone app server) that is out of date. Out of date can mean a lot of things; we’ll focus on two killers: the need for a BIOS update and the need for a RAID card firmware update. There are others – firmware on networking and SSD cards (particularly specialized ones), hardware replacement such as swapping out RAID cards, and so on – but we’ll stick with these two to explore the problems. It’s good to keep in mind, though, that they sit in the context of a broader topic.
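
To make the detection half of the job concrete, here is a minimal sketch of flagging an out-of-date BIOS during provisioning. It assumes dmidecode is installed and the script runs as root; the model-to-version table and the version strings are hypothetical stand-ins for whatever your own hardware inventory says is current.

    #!/usr/bin/env python3
    """Flag servers whose BIOS is behind a target version (sketch)."""
    import subprocess

    # Hypothetical model -> minimum acceptable BIOS version table;
    # in real life this comes from your hardware inventory.
    TARGET_BIOS = {
        "PowerEdge R730": "2.4.3",
        "ProLiant DL380 Gen9": "P89 v2.60",
    }

    def dmi(field):
        """Read one DMI field via dmidecode (requires root)."""
        return subprocess.check_output(
            ["dmidecode", "-s", field], text=True).strip()

    model = dmi("system-product-name")
    bios = dmi("bios-version")
    target = TARGET_BIOS.get(model)

    if target and bios != target:
        print(f"{model}: BIOS {bios} != target {target} - needs update")
    else:
        print(f"{model}: BIOS {bios} OK (or model not tracked)")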

The problem with this corner of the automation domain is that everybody does it a little differently, and most update tools are still designed to have a person sitting there to perform the update (there are entire pages dedicated to the different methods of SSD firmware upgrade alone). That runs contrary to full datacenter automation.
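
As an illustration of scripting around that "person sitting there" assumption, the sketch below drives an interactive vendor updater with pexpect. The binary name and its prompts are invented for illustration; the point is that even human-oriented updaters can often be automated this way.

    #!/usr/bin/env python3
    """Drive an interactive firmware updater without a human (sketch)."""
    import pexpect  # pip install pexpect

    # 'vendor_fwupdate' and its prompts are hypothetical stand-ins
    # for whatever interactive tool your hardware vendor ships.
    child = pexpect.spawn("./vendor_fwupdate --image fw.bin", timeout=600)
    child.expect(r"Continue with update\? \[y/N\]")
    child.sendline("y")
    child.expect("Update complete")
    child.expect(pexpect.EOF)
    print("update driven to completion; schedule the reboot from here")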

The level of support provided by server provisioning vendors ranges from none (you are expected to keep doing this configuration by hand) to nascent. While support for specific products or product families has appeared in this server provisioning tool or that, none offers a comprehensive solution to hardware upgrades and configuration. That makes sense, both because there is no standard hardware interface and because the interfaces that do exist change steadily. SSD firmware updating wasn’t a concern until the last few years, for example.

So what do you do? Two things. First, add support for your preferred hardware to your checklist when evaluating server provisioning tools. You may not find a comprehensive solution, but covering 70 or 80% of affected servers with automation saves a massive amount of time. Second, demand that your chosen vendor do more in this space. While it is unfair to take server provisioning vendors to task for the hardware vendors’ inability to create a usable standard, the server provisioning vendors know the market they are in and are well aware that they need to be working on it. The alternative to these two options is going 100% public cloud or hosted servers; then server provisioning – at least the hardware part of it – is no longer necessary, because your cloud provider deals with the hardware. Internal cloud and virtualization, though, still leave the physical servers in your domain, and thus still an issue to wrestle with.
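
The 70-or-80% point can be made concrete with a dispatch table: servers with a scripted update path go through automation, and the rest land in a manual work queue. Everything named in this sketch (models, handlers, hostnames) is hypothetical.

    #!/usr/bin/env python3
    """Route servers to automation where it exists, to humans where
    it doesn't (sketch)."""
    from collections import namedtuple

    Server = namedtuple("Server", ["name", "model"])
    manual_queue = []

    def update_dell_bios(server):   # placeholder for a real Dell flow
        print(f"{server.name}: automated BIOS update")

    AUTOMATED = {"PowerEdge R730": update_dell_bios}  # hypothetical

    for server in [Server("db01", "PowerEdge R730"),
                   Server("web07", "Whitebox X99")]:
        handler = AUTOMATED.get(server.model)
        if handler:
            handler(server)              # the 70-80% we can script
        else:
            manual_queue.append(server)  # the fiddly remainder

    print("needs hands-on work:", [s.name for s in manual_queue])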

At this point in time, the “hardware layer” of server provisioning is the least well served. That will no doubt get better over time, but it matters now because this layer underpins all the other server provisioning tools, which assume the hardware is ready to rock. Some tools can configure this or that part – the Stacki project, of which I am a member, can handle about 70% of RAID cards (based on an analyst friend’s estimate of Avago’s market share at roughly 80%, adjusted for differences in individual card configuration options). I offer this tidbit because the tool has some of the broadest (if not the broadest) coverage in the market, and yet it still isn’t universal.

Longer term, the goal should be a standardized API that makes hardware more easily programmable. The underlying implementations can still be proprietary, but a standard interface that can update or configure all products in a given space is what we should be shooting for.
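
To show what that could feel like in practice, here is a sketch of a client against an imagined standard management endpoint: one set of calls regardless of whose BIOS or RAID card sits underneath. The URL and JSON shape are invented for illustration; no such universal standard exists yet.

    #!/usr/bin/env python3
    """One client, any vendor: an imagined standard hardware API."""
    import json
    import urllib.request

    BASE = "https://bmc.example.com/api/v1"  # hypothetical endpoint

    def get(path):
        with urllib.request.urlopen(BASE + path) as resp:
            return json.load(resp)

    # Identical calls no matter which vendor implements the backend.
    bios = get("/firmware/bios")
    raid = get("/firmware/raid/0")
    print("BIOS:", bios["version"], "RAID:", raid["version"])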

Sound fantastical? Think I’m dreaming? Well, network hardware vendors are slowly moving toward programmability and away from secret knowledge. Five years ago that would have been seen as fantastical too. Back then, some vendors such as F5 Networks had APIs, but those APIs mirrored their command lines, preserving the product-specific knowledge requirements. F5’s current REST API is better organized and less atomic. So I contend that if networking companies can move in that direction, so can RAID and BIOS vendors.

Until then, tools that do a “good enough” job plus the occasional manual intervention will be the best we can manage – and that is still far better than doing it all by hand.

More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.
