
(Lack of) Patch Management Highlighted in US Congress


According to the former Equifax CEO’s testimony to Congress, one of the primary causes of this now-infamous data breach was the company’s failure to patch a critical vulnerability in the open source Apache Struts web application framework. Equifax also waited a week to scan its network for apps that remained vulnerable.[1] Would you like to appear at the next Congressional hearing on patch management?

Patch management is the process of identifying, acquiring, installing, and verifying patches for products and systems. Patches not only correct security and functionality problems in software and firmware, but also introduce new, and sometimes mandatory, capabilities into the organization’s IT environment. Proper patch management is so effective that the CERT® Coordination Center (CERT®/CC) claims that 95 percent of all network intrusions could be avoided by using it to keep systems up to date.
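
Those four phases (identify, acquire, install, verify) form a loop that lends itself to automation. Below is a minimal Python sketch of that loop; the Patch record and the stub functions are illustrative placeholders, not any vendor’s actual API:

```python
# Minimal sketch of the identify -> acquire -> install -> verify loop.
# All names here are illustrative placeholders, not a real vendor API.
from dataclasses import dataclass

@dataclass
class Patch:
    package: str   # affected software package
    current: str   # version currently installed
    fixed: str     # version that closes the vulnerability

def identify() -> list[Patch]:
    # In practice: compare installed versions against vendor advisories.
    return [Patch("struts2-core", "2.3.31", "2.3.32")]

def acquire(patch: Patch) -> str:
    # In practice: download the vendor package and verify its checksum.
    return f"/var/cache/patches/{patch.package}-{patch.fixed}.pkg"

def install(artifact: str) -> None:
    print(f"installing {artifact}")  # stand-in for the package-manager call

def verify(patch: Patch) -> bool:
    # In practice: re-scan and confirm installed version >= fixed version.
    return True

for p in identify():
    install(acquire(p))
    if not verify(p):
        print(f"ALERT: {p.package} is still vulnerable")
```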

This nightmare of a true story and the compelling endorsement from CERT®/CC, however, mask the ugly operational complexities of implementing patch management. Key enterprise challenges include (a short sketch of the heterogeneity problem follows this list):
  • Timing, prioritization, and testing of patches often present conflicting requirements. Competitive prioritization of IT resources, business imperatives, and budget limitations often leave patching tasks on the back burner
  • Technical mechanisms and requirements for applying patches may also conflict and may include:
    • Software that updates itself with little or no enterprise input
    • Use of a centralized management tool
    • Third-party patch management applications
    • Negative or unknown interactions with network access control, health check functions, and other similar technologies
    • User-initiated manual software updates
    • User-initiated patches or version upgrades
  • A typical heterogeneous enterprise environment that includes:
    • Unmanaged or user-managed hosts
    • Non-standard IT components that require vendor patching or cannot be patched
    • Enterprise-owned assets that typically operate on non-enterprise networks
    • Smartphones, tablets, and other mobile devices
    • Patching of rehydrating virtual machines
    • Firmware updates
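
To make the heterogeneity problem concrete, the sketch below shows how even the basic question “what patches are pending?” needs a different answer per OS family. The package-manager commands are standard invocations; the transport (local subprocess here, SSH or an agent in practice) is a simplifying assumption:

```python
# Sketch: querying pending updates on a heterogeneous fleet.
# Real deployments would run these over SSH or via an agent per host.
import subprocess

# Each OS family exposes pending patches through a different command.
CHECK_COMMANDS = {
    "debian": ["apt", "list", "--upgradable"],
    "rhel":   ["yum", "check-update", "-q"],
    "suse":   ["zypper", "list-updates"],
}

def pending_updates(os_family: str) -> str:
    cmd = CHECK_COMMANDS.get(os_family)
    if cmd is None:
        # The "cannot be patched / unmanaged" case from the list above.
        return "UNMANAGED: no known check command"
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
        return result.stdout
    except FileNotFoundError:
        return "package manager not present on this host"

print(pending_updates("debian"))
```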

Piling up on these purely operational tasks are the change management steps associated with (a documentation sketch follows the list):
  • Maintaining current knowledge of available patches;
  • Deciding what patches are appropriate for particular systems;
  • Ensuring proper installation of patches;
  • Testing systems after installation; and
  • Documenting all procedures and any specific configurations.
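
The documentation step, in particular, is straightforward to automate. Below is a minimal sketch, assuming a simple append-only JSON-lines audit trail; the field names and file path are illustrative, not any standard’s requirement:

```python
# Sketch: recording each change-management action as an append-only
# JSON-lines audit trail. Field names and path are illustrative.
import datetime
import json
import pathlib

AUDIT_LOG = pathlib.Path("patch_audit.jsonl")  # hypothetical location

def record(system: str, patch_id: str, action: str, outcome: str) -> None:
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,
        "patch": patch_id,
        "action": action,    # e.g. "approved", "tested", "installed"
        "outcome": outcome,
    }
    with AUDIT_LOG.open("a") as fh:
        fh.write(json.dumps(entry) + "\n")  # one JSON object per line

# CVE-2017-5638 is the Apache Struts flaw from the Equifax story above.
record("web-01", "CVE-2017-5638", "tested", "pass")
record("web-01", "CVE-2017-5638", "installed", "verified")
```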

This challenge is significantly exacerbated in an IT environment that blends legacy, outsourced, and cloud service provider resources. Environment heterogeneity and the sheer volume of patches released are why any patching strategy that relies primarily on manual implementation is untenable.


According to the SANS Institute, meeting the patch management challenge requires the creation of a patch management methodology and the automation of that methodology.[2] The methodology itself should include (an inventory sketch follows the list):
  • A detailed inventory of all hardware, operating systems, and applications that exist in the network and the creation of the process to keep the inventory up-to-date.
  • A process to identify vulnerabilities in hardware, operating systems, and applications.
  • Risk assessment and buy-in from management and business owners.
  • A detailed procedure for testing patches before deployment.
  • A detailed process for deploying patches and service packs, as well as a process for verification of deployment.
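
As promised above, here is a minimal sketch of the first SANS step: collecting a host’s inventory record so it can be refreshed on a schedule. Collection is local and the application list is stubbed out, since installed-software discovery is OS-specific:

```python
# Sketch: one host's inventory record (hardware, OS, applications).
# A real deployment would aggregate these records from every host.
import datetime
import json
import platform
import socket

def collect_inventory() -> dict:
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "os_release": platform.release(),
        "machine": platform.machine(),
        "collected": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        # Installed-application discovery is OS-specific (dpkg -l, rpm -qa,
        # winget list, ...) and is left as a per-platform hook here.
        "applications": [],
    }

print(json.dumps(collect_inventory(), indent=2))
```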

As for the automation component, it should deliver an automated, comprehensive server lifecycle approach that can provision and configure software, apply patches, and implement configurations that improve security and compliance across physical, virtual, and cloud servers.

It should also encompass a policy-based approach, with support for all major operating systems on physical servers as well as leading virtualization and cloud platforms. The ability to automate continuous compliance checks and to remediate any security or regulatory shortcoming is also paramount. Appropriately implemented, such a solution lets IT staff manage patching through a web interface, which increases the server-to-admin ratio, enhances operational productivity, accelerates audit timelines, and reduces incident response latency.
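
The sketch below shows the general shape of such a policy-based, check-then-remediate loop. The policy format, the simulated fleet data, and the remediate() stub are all illustrative assumptions, not any product’s schema:

```python
# Sketch: policy-based continuous compliance check with remediation.
# Policy format, fleet data, and remediate() are illustrative only.

# Desired state, expressed as policy: minimum acceptable version per package.
POLICY = {"openssl": "3.0.13", "struts2-core": "2.5.33"}

# Simulated fleet state, as a scanner or agent might report it.
SERVERS = {
    "web-01": {"openssl": "3.0.13", "struts2-core": "2.3.31"},
    "db-01":  {"openssl": "1.1.1"},
}

def version_tuple(v: str) -> tuple:
    return tuple(int(x) for x in v.split("."))

def compliance_check(installed: dict) -> list[str]:
    """Return the packages on this host that fall below policy."""
    return [pkg for pkg, minimum in POLICY.items()
            if pkg in installed
            and version_tuple(installed[pkg]) < version_tuple(minimum)]

def remediate(host: str, pkg: str) -> None:
    # Stand-in for scheduling a real patch job.
    print(f"{host}: scheduling patch job for {pkg} -> {POLICY[pkg]}")

for host, installed in SERVERS.items():
    for pkg in compliance_check(installed):
        remediate(host, pkg)
```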

A leading solution in this space is BladeLogic Server Automation by BMC. It was specifically designed to address the dual enterprise requirements of (1) ensuring compliance with rules and regulations and (2) patching software to reduce security vulnerabilities. In the market for over 10 years, it is a comprehensive server lifecycle automation solution that helps organizations provision and configure software and update patches and configurations to improve security and compliance across physical, virtual, and cloud servers. Advanced capabilities include script automation, compliance tracking, and the ability to stage and test patches before committing them; the staging feature is used to copy patch bundles to the targeted servers before maintenance windows open. The full-function suite integrates with change management systems to facilitate change record creation. Vulnerability management and remediation are automated by importing vulnerability scan data from vendors such as Qualys, Tenable, and Rapid7, and mapping the vulnerabilities back to the underlying patches in BladeLogic.
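
The sketch below shows the general shape of that last workflow: importing scanner findings and mapping each CVE back to the patch that resolves it. The CSV layout and the CVE-to-patch table are hypothetical stand-ins, not Qualys, Tenable, Rapid7, or BladeLogic’s actual formats (the two Struts CVEs and their fix versions, however, are real):

```python
# Sketch: mapping vulnerability-scan findings back to remediating patches.
# The export layout and mapping table are hypothetical stand-ins.
import csv
import io

# A tiny stand-in for a scanner export: host plus detected CVE.
SCAN_EXPORT = """host,cve
web-01,CVE-2017-5638
web-02,CVE-2017-9805
"""

# Mapping from CVE to the Struts release that remediates it.
CVE_TO_PATCH = {
    "CVE-2017-5638": "Apache Struts 2.3.32 / 2.5.10.1",
    "CVE-2017-9805": "Apache Struts 2.5.13",
}

for row in csv.DictReader(io.StringIO(SCAN_EXPORT)):
    patch = CVE_TO_PATCH.get(row["cve"], "no mapping - manual review")
    print(f"{row['host']}: {row['cve']} -> remediated by {patch}")
```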

Secure IT operations start with the identification and prioritization of critical vulnerabilities, paired with the capability to deliver multi-tier remediation. These reinforcing goals are why an advanced patch automation solution is a “must have” for the modern enterprise.



This post is brought to you by BMC and IDG. The views and opinions expressed herein are those of the author and do not necessarily represent the views and opinions of BMC.





