
The Intersection of Security and SDN

If you cross log analysis with infrastructure integration you get some interesting security capabilities

A lot of security-minded folks immediately pack up their bags and go home when you start talking about automating anything in the security infrastructure. Automating changes to data center firewalls, for example, seems to elicit a reaction not unlike the one you'd get by suggesting you put an unpatched Windows machine directly on the public Internet.

At RSA yesterday I happened to see a variety of booths with a focus on... logs. That isn't surprising, as log analysis is used across the data center and across domains for a variety of reasons. It's one of the ways databases are replicated, it's part of compiling access audit reports, and it's absolutely one of the ways in which intrusion attempts can be detected.

And that's cool. Log analysis for intrusion detection is a good thing. But what if it could be better?

What if we started considering operationalizing the process of acting on events raised by log analysis?

One of the promises of SDN is agility through programmability. The idea is that because the data path is "programmable," it can be modified at any time by the control plane using an API. In this way, SDN-enabled architectures can respond in real time to conditions on the network impacting applications. Usually the focus is on performance, but there's no reason it couldn't be applied to security as well.
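
To make that concrete, here's a minimal sketch of what a programmatic data path change could look like. The controller URL, endpoint, and payload shape are all hypothetical; real controllers (OpenDaylight, Floodlight and the like) each expose their own REST APIs.

```python
# A minimal sketch of pushing a flow rule to an SDN controller over a REST
# API. The endpoint and payload shape are hypothetical, not any specific
# controller's actual API.
import requests

CONTROLLER = "https://sdn-controller.example.com/api/flows"  # hypothetical

def redirect_flow(src_ip, dst_ip, next_hop):
    """Ask the controller to steer matching traffic through next_hop."""
    rule = {
        "match": {"src_ip": src_ip, "dst_ip": dst_ip},
        "action": {"type": "redirect", "next_hop": next_hop},
        "priority": 100,
    }
    resp = requests.post(CONTROLLER, json=rule, timeout=5)
    resp.raise_for_status()  # surface controller-side rejections
```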

If you're using a log analysis tool capable of performing said analysis in near real-time, and the analysis turns up suspicious activity, there's no reason it couldn't inform a controller of some kind on the network, which in turn could easily decide to enable infrastructure capabilities across the network. Perhaps to start capturing the flow, or to inject a more advanced inspection service (malware detection, perhaps) into the service chain for the application.
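
A rough sketch of that analyze-then-act loop, assuming a hypothetical controller event endpoint; the detection pattern, log path, and payload are invented for illustration:

```python
# Tail a log, flag suspicious entries, and notify a controller. The regex,
# log path, and controller endpoint are illustrative placeholders, not a
# specific product's API.
import re
import time
import requests

CONTROLLER = "https://sdn-controller.example.com/api/events"  # hypothetical
SUSPICIOUS = re.compile(r"(\.\./\.\.|union\s+select|' or 1=1)", re.IGNORECASE)

def follow(path):
    """Yield lines appended to a log file, tail -f style."""
    with open(path) as f:
        f.seek(0, 2)  # start at the end; we only care about new entries
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(0.5)

for entry in follow("/var/log/app/access.log"):
    if SUSPICIOUS.search(entry):
        src_ip = entry.split()[0]  # assumes the client IP is the first field
        # Ask the controller to start capturing this flow and steer it
        # through a deeper inspection service (e.g., malware detection).
        requests.post(CONTROLLER, json={
            "src_ip": src_ip,
            "actions": ["capture-flow", "chain-malware-inspection"],
        }, timeout=5)
```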

In the service provider world, it's well understood that the requirement in traditional architectures to force flows through all services is inefficient. It increases the cost of the service and requires scaling every single service along with subscriber growth. Service providers are turning to service chaining and traffic steering as a means to more efficiently use only those services that are applicable, rather than the entire chain.

While enterprise organizations for the most part aren't going to adopt service provider architectures, they can learn from them the value inherent in more dynamic network and service topologies. Does every request and response need to go through every security service? Or do only some truly need deep inspection?
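
As a toy sketch of what per-flow selection could look like (the service names and flow attributes are invented for the example):

```python
# Build a per-flow service chain from flow attributes instead of forcing
# every flow through every service. Names and attributes are made up.
def chain_for(flow):
    chain = ["firewall"]                    # baseline service for all flows
    if flow.get("protocol") == "http":
        chain.append("waf")                 # web traffic also gets the WAF
    if flow.get("suspicious"):
        chain += ["dpi", "malware-scan"]    # deep inspection only when flagged
    return chain

# A clean, non-HTTP flow traverses only the firewall:
print(chain_for({"protocol": "dns"}))                       # ['firewall']
# A flagged web flow picks up the heavier services:
print(chain_for({"protocol": "http", "suspicious": True}))
```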

It's about intelligence and integration. Real-time analysis of what is traditionally data at rest (logs) can net actionable data if infrastructure is API-enabled. It takes the notion of scalability domains to a more dynamic level: not only scaling services individually to reduce costs, but improving performance and efficiency by consuming resources only when necessary, instead of all the time. The key is being able to determine when it's necessary and when it isn't.


In a service provider world, that determination is based on subscriber and traffic type. In the enterprise it's more about behavioral analysis: what someone is trying to do, and with what application or data.

But in the end, both environments need to be dynamic, with policy enforcement and service invocation based on the unique combination of devices, networks, and applications, and enabled by the increasing prevalence of API-enabled infrastructure.
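
A toy illustration of that combination-driven policy, with categories and policy names invented for the example:

```python
# Context-driven policy: the enforcement decision keys on the combination
# of device, network, and application. All values here are hypothetical.
POLICIES = {
    ("managed", "corporate", "crm"): "standard-inspection",
    ("byod",    "corporate", "crm"): "deep-inspection",
    ("byod",    "public",    "crm"): "deny",
}

def policy_for(device, network, app):
    # Unknown combinations fall back to the most conservative treatment.
    return POLICIES.get((device, network, app), "deep-inspection")

print(policy_for("byod", "public", "crm"))  # 'deny'
```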

SDN is going to propel networks forward not just as a cost-savings vehicle for operations, but as part of the technology that ultimately unlocks the software-defined data center. And that includes security.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
