Better SecOps with Incident Management

The threat landscape is expanding at a relentless pace. New vulnerabilities are disclosed every day, and the number of servers, applications, and endpoints for ITOps to manage is continually growing. These threats are also growing more potent and frequent, as a recent spate of global ransomware attacks has seen perpetrators extort thousands of dollars. Experts believe these attacks are often a ruse that masks attempts to destroy data.

As organizations adopt bimodal ITOps methodologies in order to be more agile, avoiding incidents and increasing security can pose quite a challenge. New challenges include leveraging containers and public cloud resources, managing security incidents across these separate data domains, and working with entirely new sets of pseudo-admin users who have access to key resources. To enable full-stack visibility and incident resolution for the ever-expanding demands on ITOps, a multifaceted approach to SecOps is required. In fact, I tend to think of SecOps and incident management as a necessary combination for building an environment that is truly secure, actionable, and visible.

Phase 1: Stop the Threat

First and foremost, reducing the complexity of your SecOps stack will help you maintain actionability while enforcing your SecOps policy. To put it simply: thwart the attack and notify your ITOps team that it needs to remediate. Simplicity is key to reducing the noise of your security alerts and incidents so you can focus on the signals that truly matter. SecOps practice is to react as quickly as possible, as if against a stopwatch, ensuring threats are stopped before they damage production SLAs and critical data. The clearest examples of this severity are when networks and systems are exposed to zero-day threats or ransomware. In these cases, the key is to build a strategy around stopping and preventing exposure to massive threats while issuing alerts to your incident management system. In the case of crypto-ransomware, such as CryptoLocker and CryptoWall, the goal is to leverage tools that prevent the ransomware from completing its handshake with its command-and-control infrastructure (Stage 2 of the Sophos ransomware infographic), thereby averting the crypto infection.

We can then ensure that firewalls, endpoints, third-party security monitoring tools, and other relevant data sources are piped into a central incident management solution. This way, SecOps and ITOps can be immediately notified and equipped with the data and workflows required for effective investigation and remediation of high-priority issues. Effective security tooling remains crucial to managing your security incidents successfully.
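As a rough illustration of piping an event into a central incident management solution, here is a minimal Python sketch that forwards a normalized security event to PagerDuty's Events API v2. The routing key, source names, and detail fields are placeholders you would replace with values from your own integration.

```python
import json
import urllib.request

# Placeholder: the integration (routing) key from your PagerDuty service.
ROUTING_KEY = "YOUR_ROUTING_KEY"
EVENTS_API = "https://events.pagerduty.com/v2/enqueue"

def send_security_event(summary, source, severity="critical", details=None):
    """Forward a normalized security event so SecOps/ITOps are paged immediately."""
    event = {
        "routing_key": ROUTING_KEY,
        "event_action": "trigger",
        "payload": {
            "summary": summary,       # e.g. "Ransomware signature blocked on fw-edge-01"
            "source": source,         # the firewall, endpoint, or monitoring tool
            "severity": severity,     # critical | error | warning | info
            "custom_details": details or {},
        },
    }
    req = urllib.request.Request(
        EVENTS_API,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Hypothetical usage: an endpoint-protection tool reports a blocked ransomware attempt.
# send_security_event(
#     "CryptoLocker handshake blocked",
#     source="endpoint-042",
#     details={"rule": "block-c2-handshake", "user": "jdoe"},
# )
```

The same pattern applies whatever your data source is: normalize the event, attach the context responders will need, and hand it to the incident management system rather than leaving it in a log file.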

Phase 2: Incident Management and Remediation

The ability not just to detect and notify, but also to enrich, escalate, and facilitate remediation and future prevention of issues is equally important in best-practice, end-to-end security incident lifecycle management. Again, to accomplish this full-stack visibility, you'll want to integrate and aggregate all of your security systems into a central incident management solution. For example, configure your firewalls and network devices to feed information into your monitoring platform via SNMP traps and queries, and configure your syslog servers to forward all security incidents to the same place.

When configuring your firewall and network syslogging, you can save a significant amount of time and reduce alert fatigue by setting thresholds that separate warning and critical alerts from info and debug alerts. Thresholding varies by vendor; with SNMP, however, filtering on the OID to disregard info- and debug-level traps while permitting warning- and critical-level status messages ensures that only high-priority alerts reach your incident management system.
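The exact OIDs and severity levels depend on your vendor's MIBs, so treat the following as a hedged sketch rather than a drop-in filter. It assumes a hypothetical mapping from trap OIDs to severities and only forwards traps at warning level or above.

```python
# Hypothetical OID-to-severity map; real values depend on your firewall/switch MIBs.
TRAP_SEVERITY = {
    "1.3.6.1.4.1.99999.1.1": "critical",   # e.g. vendor "intrusion detected" trap
    "1.3.6.1.4.1.99999.1.2": "warning",    # e.g. vendor "policy violation" trap
    "1.3.6.1.4.1.99999.1.3": "info",       # e.g. vendor "config changed" trap
    "1.3.6.1.4.1.99999.1.4": "debug",
}

# Only these severities should reach the incident management system.
FORWARDED = {"warning", "critical"}

def should_forward(trap_oid: str) -> bool:
    """Drop info/debug traps to reduce noise; pass warning/critical through."""
    return TRAP_SEVERITY.get(trap_oid, "info") in FORWARDED

assert should_forward("1.3.6.1.4.1.99999.1.1")       # critical -> forwarded
assert not should_forward("1.3.6.1.4.1.99999.1.3")   # info -> suppressed
```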

With syslogging, you can set more granular logging conditions, but the key here is to keep the noise down and notify only on specific conditions. Once you've aggregated these events into your monitoring system, you can establish a framework that enriches the alerts with actionable information and routes them to your team to remediate threats.
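As a hedged sketch of that idea: every standard syslog message carries a priority value (facility × 8 + severity), so a lightweight relay can drop notice/info/debug messages and enrich the rest with context such as owner and runbook before anything is routed to the on-call team. The asset lookup and message format below are simplified placeholders.

```python
import re

# Hypothetical enrichment source; in practice this could be your CMDB or asset inventory.
ASSET_CONTEXT = {
    "fw-edge-01": {"owner": "netops-oncall", "runbook": "https://wiki.example.com/fw-edge"},
}

SYSLOG_RE = re.compile(r"^<(?P<pri>\d{1,3})>(?P<msg>.*)$")

def process_syslog_line(line: str):
    """Return an enriched alert dict for warning-or-worse messages, else None."""
    match = SYSLOG_RE.match(line)
    if not match:
        return None
    severity = int(match.group("pri")) % 8   # 0=emerg ... 7=debug
    if severity > 4:                          # 5=notice, 6=info, 7=debug -> noise, drop it
        return None
    parts = match.group("msg").split()
    # Simplified: real RFC 3164/5424 lines put a timestamp before the hostname.
    host = parts[0] if parts else "unknown"
    return {
        "severity": severity,
        "message": match.group("msg"),
        **ASSET_CONTEXT.get(host, {"owner": "secops", "runbook": None}),
    }

# "<132>" = facility 16 (local0), severity 4 (warning) -> kept and enriched.
print(process_syslog_line("<132>fw-edge-01 DROP TCP 203.0.113.7:445"))
```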

Syslogging is valuable for a few reasons. Not only does it capture detailed information on the security and network data flowing into your monitoring systems, it can also facilitate intrusion detection and prevention as well as threat intelligence. Instead of piping your syslog directly into a monitoring system, you can also send it to a third-party intrusion analysis system such as AlienVault or LogRhythm to increase your intrusion visibility and enrich your logging data into actionable alerts. You can then send those alerts to your incident management system (such as PagerDuty) to group related symptoms, understand root cause, escalate to the right expert, remediate with the right context, and build analytics and postmortems that improve future security incident response.
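One hedged sketch of the "group related symptoms" step: derive a stable deduplication key from the attributes that identify the underlying problem (here, a hypothetical host-plus-signature pair), so repeated symptoms collapse into a single incident rather than a page storm. PagerDuty's Events API accepts such a value as its dedup_key field.

```python
import hashlib

def dedup_key(alert: dict) -> str:
    """Group related symptoms: same host + same signature -> same incident."""
    basis = f"{alert.get('host', 'unknown')}|{alert.get('signature', 'unknown')}"
    return hashlib.sha256(basis.encode("utf-8")).hexdigest()[:32]

# Two alerts for the same infection attempt on the same host share one key,
# so the incident management system appends them to one incident.
a = {"host": "fw-edge-01", "signature": "cryptolocker-c2", "port": 443}
b = {"host": "fw-edge-01", "signature": "cryptolocker-c2", "port": 8443}
assert dedup_key(a) == dedup_key(b)
```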

  • Bottom Line: Leverage security tools to actually stop the threat
  • Baseline Monitoring: Establish a baseline monitoring and alerting policy
  • Enrichment: Leverage third party tools to enrich your data and threat intelligence
  • Incident Management: Gain full stack visibility and ensure issues are prioritized, routed and escalated. Improve time to resolution with workflows and analytics

Finally, the same framework can be implemented for organizations with hybrid cloud or public cloud resources, although you will need different third-party tools to analyze and enrich your visibility and alerting. For example, Azure Alerts on Microsoft Azure or Amazon CloudWatch on AWS let you configure similar thresholding and noise reduction for your public cloud server monitoring and alerting. The good news is that there are also third-party tools, such as Evident.io and Threat Stack, that conveniently perform security-focused analyses across your cloud infrastructure for anyone with an agile, public, hybrid, or bimodal ITOps strategy.
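For instance, on AWS a security-relevant threshold can be expressed as a CloudWatch alarm whose action notifies an SNS topic wired into your incident management integration. Here is a minimal sketch using boto3, with the metric name, namespace, threshold, and topic ARN as placeholder assumptions rather than a prescribed setup.

```python
import boto3  # assumes boto3 is installed and AWS credentials are configured

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Placeholder metric/threshold: alarm when unauthorized API calls spike, and notify
# an SNS topic that is integrated with the incident management system.
cloudwatch.put_metric_alarm(
    AlarmName="secops-unauthorized-api-calls",
    Namespace="CloudTrailMetrics",        # assumes a metric filter publishes here
    MetricName="UnauthorizedAPICalls",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:secops-pagerduty"],
)
```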

Whatever suite of tools and systems you prefer when designing full-stack incident management processes to fit your SecOps team, the fundamentals of simplicity, visibility, noise reduction, and actionability remain paramount. ITOps and SecOps teams are in very similar positions: the demands of the business often conflict with their ability to ensure secure and efficient access across an ever-growing list of devices, services, and other endpoints.

To learn more about best practices for security incident response, check out PagerDuty's open-source documentation, which we use internally. You'll get an actionable checklist and insights on how to cut off attack vectors, assemble your response team, deal with compromised data, and much more. We hope these resources give you a head start in building a solid framework for optimizing SecOps with effective incident management.
