By Greg Akers
July 27, 2014 02:00 PM EDT
Ransomware is the latest example of the increasingly sophisticated and damaging inventions of hackers. Individuals and organizations of all sizes are finding that their data has been locked down or encrypted until a ransom is paid. One program, CryptoLocker, infected more than 300,000 computers before the FBI and international law enforcement agencies disabled it. A few days later, CryptoWall showed up to take its place. Companies paid $1.3 billion in insurance last year to help offset the costs of combating data attacks like these.
Other examples include highly customized malware, advanced persistent threats and large-scale Distributed Denial of Service (DDoS) attacks. Security professionals must remain ever vigilant against both known threats and new ones on the rise. However, with proper visibility into the extended network and robust intelligence, an attack can often be detected and stopped before it causes significant damage. By using the network as a source of intelligence, cyber defenders can achieve greater visibility into adversary actions and quickly shut them down.
Since an attack can be broken down into stages, it is helpful to think of the response in stages as well: before, during and after. This is standard operating procedure for anyone in the security profession. Let's examine each stage:
Before the attack, cyber defenders are constantly on the lookout for areas of vulnerability. Historically, security has been all about defense. Today, teams are developing more intelligent methods of halting intruders. With total visibility into their environments - including, but not limited to, physical and virtual hosts, operating systems, applications, services, protocols, users, content and network behavior - defenders can take action before an attack has even begun.
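To make the "before" stage concrete, here is a minimal sketch of baseline-driven visibility: observed hosts and services are compared against an approved inventory so that unknown hosts and unexpected services can be investigated before any attack begins. The baseline, hosts and services are hypothetical examples, not output from any particular tool.

```python
# Minimal sketch: flag deviations from an approved asset/service baseline.
# All hosts and services below are hypothetical illustrations.
baseline = {
    "10.0.0.5": {"ssh", "https"},
    "10.0.0.9": {"https"},
}

observed = {
    "10.0.0.5": {"ssh", "https"},
    "10.0.0.9": {"https", "telnet"},   # unexpected service on a known host
    "10.0.0.77": {"smb"},              # host missing from the baseline
}

for host, services in observed.items():
    if host not in baseline:
        print(f"unknown host {host}: running {sorted(services)}")
    else:
        for svc in sorted(services - baseline[host]):
            print(f"unexpected service on {host}: {svc}")
```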
During the attack, impact can be minimized if security staff understands what is happening and how to stop it as quickly as possible. They need to be able to address threats continuously, not just at a single point in time. Tools such as content inspection, behavioral anomaly detection, and contextual awareness of users, devices, locations and applications are critical to understanding an attack as it is occurring. Security teams need to discover where, how and to which applications and resources users are connected.
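As one concrete example of these "during" tools, the sketch below implements a simple form of behavioral anomaly detection: each host's outbound byte counts are compared against that host's own rolling baseline, and sharp deviations are flagged. The window size, threshold and traffic figures are illustrative assumptions rather than recommended settings.

```python
# Minimal behavioral anomaly detection sketch: flag hosts whose latest
# outbound byte count deviates sharply from their own rolling baseline.
from collections import defaultdict, deque
from statistics import mean, stdev

WINDOW = 60        # keep the last 60 samples per host
THRESHOLD = 3.0    # flag samples more than 3 standard deviations above baseline

history = defaultdict(lambda: deque(maxlen=WINDOW))

def observe(host: str, outbound_bytes: int) -> bool:
    """Record a sample; return True if it looks anomalous."""
    samples = history[host]
    anomalous = False
    if len(samples) >= 10:   # wait for a minimal baseline before judging
        baseline, spread = mean(samples), stdev(samples)
        if spread > 0 and (outbound_bytes - baseline) / spread > THRESHOLD:
            anomalous = True
    samples.append(outbound_bytes)
    return anomalous

# Hypothetical data: steady traffic, then a spike resembling bulk exfiltration.
for minute in range(30):
    observe("10.0.0.5", 50_000 + (minute % 5) * 1_000)
print(observe("10.0.0.5", 5_000_000))   # True: flag for investigation
```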
After the attack, cyber defenders must understand the nature of the attack and how to minimize any damage that may have occurred. Advanced forensics and assessment tools help security teams learn from attacks. Where did the attacker come from? How did they find a vulnerability in the network? Could anything have been done to prevent the breach? More important, retrospective security allows for an infrastructure that can continuously gather and analyze data to create security intelligence. Compromises that would have gone undetected for weeks or months can instead be identified, scoped, contained and remediated in real time or close to it.
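In code, retrospective security can be as simple as replaying newly published indicators of compromise against observations that were recorded before those indicators existed. The sketch below assumes a hypothetical log of file hashes seen on endpoints and an equally hypothetical intelligence feed; real deployments would sweep far richer telemetry.

```python
# Minimal retrospective sweep: match new indicators of compromise (IOCs)
# against previously recorded observations. Hashes below are placeholders.
observed = [
    {"host": "ws-114", "sha256": "9f2ac1d0", "first_seen": "2014-06-02"},
    {"host": "ws-207", "sha256": "ab41e77b", "first_seen": "2014-07-19"},
]

new_iocs = {"ab41e77b"}   # hashes just marked malicious by an intel feed

for record in observed:
    if record["sha256"] in new_iocs:
        print(f"retroactive hit on {record['host']} "
              f"(first seen {record['first_seen']}): scope and contain")
```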
The two most important aspects of a defensive strategy, then, are understanding and intelligence. Cybersecurity teams are constantly trying to learn more about who their enemies are, why they are attacking and how. This is where the extended network provides unexpected value: delivering a depth of intelligence that cannot be attained anywhere else in the computing environment. Much like in counterterrorism, intelligence is key to stopping attacks before they happen.
Cybersecurity, as is sometimes the case in real-world warfare, is often asymmetric: relatively small adversaries with limited means can inflict disproportionate damage on much larger targets. In these unbalanced situations, intelligence is one of the most important assets for addressing threats. But intelligence alone is of little benefit without an approach that optimizes its organizational and operational use.
Security teams can correlate identity and context by using network analysis techniques - such as flow telemetry (NetFlow, for example), which records IP network traffic as it enters or exits an interface - and then layer threat intelligence and analytics capabilities on top.
This allows security teams to combine what they learn from multiple sources of information to help identify and stop threats. Sources include what they know from the Web, what they see happening in their own network, and a growing amount of collaborative intelligence gleaned from exchanges with public and private entities.
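A minimal sketch of that correlation step follows: NetFlow-style flow records are joined against an IP-based threat intelligence feed, so that traffic to known-bad destinations surfaces immediately. The field names, addresses and feed contents are illustrative assumptions, not any vendor's schema.

```python
# Minimal correlation sketch: join flow records with an IP threat-intel feed.
from ipaddress import ip_address, ip_network

# Hypothetical intelligence feed: known-bad addresses and networks.
bad_ips = {ip_address("198.51.100.23")}
bad_nets = [ip_network("203.0.113.0/24")]

# Hypothetical NetFlow-style records summarizing traffic per interface.
flows = [
    {"src": "10.0.0.5", "dst": "198.51.100.23", "dst_port": 443, "bytes": 88_212},
    {"src": "10.0.0.9", "dst": "93.184.216.34", "dst_port": 80, "bytes": 1_024},
]

def is_bad(addr: str) -> bool:
    ip = ip_address(addr)
    return ip in bad_ips or any(ip in net for net in bad_nets)

for flow in flows:
    if is_bad(flow["dst"]):
        # A real alert would also carry user, device and application context.
        print(f"alert: {flow['src']} -> {flow['dst']}:{flow['dst_port']} "
              f"({flow['bytes']} bytes) matches threat intelligence")
```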
CryptoWall will eventually be defeated, but other ransomware programs and as-yet-unknown attacks will rise to threaten critical data. Effective cybersecurity requires an understanding of what assets need to be protected and an alignment of organizational priorities and capabilities. Essentially, a framework of this type enables security staff to think like malicious actors and therefore do a better job of securing their environments. The security team's own threat intelligence practice, uniting commercial threat information with native analysis of user behavior, will detect, defend against and remediate security events more rapidly and effectively than once thought possible.
"There's a growing demand from users for things to be faster. When you think about all the transactions or interactions users will have with your product and everything that is between those transactions and interactions - what drives us at Catchpoint Systems is the idea to measure that and to analyze it," explained Leo Vasiliou, Director of Web Performance Engineering at Catchpoint Systems, in this SYS-CON.tv interview at 18th Cloud Expo, held June 7-9, 2016, at the Javits Center in New York Ci...
Jul. 27, 2016 06:45 PM EDT Reads: 2,048
I wanted to gather all of my Internet of Things (IOT) blogs into a single blog (that I could later use with my University of San Francisco (USF) Big Data “MBA” course). However as I started to pull these blogs together, I realized that my IOT discussion lacked a vision; it lacked an end point towards which an organization could drive their IOT envisioning, proof of value, app dev, data engineering and data science efforts. And I think that the IOT end point is really quite simple…
Jul. 27, 2016 06:45 PM EDT Reads: 1,136
As companies gain momentum, the need to maintain high quality products can outstrip their development team’s bandwidth for QA. Building out a large QA team (whether in-house or outsourced) can slow down development and significantly increases costs. This eBook takes QA profiles from 5 companies who successfully scaled up production without building a large QA team and includes:
What to consider when choosing CI/CD tools
How culture and communication can make or break implementation
Jul. 27, 2016 06:00 PM EDT Reads: 1,668
Actian Corporation has announced the latest version of the Actian Vector in Hadoop (VectorH) database, generally available at the end of July. VectorH is based on the same query engine that powers Actian Vector, which recently doubled the TPC-H benchmark record for non-clustered systems at the 3000GB scale factor (see tpc.org/3323).
The ability to easily ingest information from different data sources and rapidly develop queries to make better business decisions is becoming increasingly importan...
Jul. 27, 2016 05:45 PM EDT Reads: 854
A critical component of any IoT project is what to do with all the data being generated. This data needs to be captured, processed, structured, and stored in a way to facilitate different kinds of queries. Traditional data warehouse and analytical systems are mature technologies that can be used to handle certain kinds of queries, but they are not always well suited to many problems, particularly when there is a need for real-time insights.
Jul. 27, 2016 04:30 PM EDT Reads: 1,850
Big Data, cloud, analytics, contextual information, wearable tech, sensors, mobility, and WebRTC: together, these advances have created a perfect storm of technologies that are disrupting and transforming classic communications models and ecosystems.
In his session at @ThingsExpo, Erik Perotti, Senior Manager of New Ventures on Plantronics’ Innovation team, provided an overview of this technological shift, including associated business and consumer communications impacts, and opportunities it ...
Jul. 27, 2016 04:30 PM EDT Reads: 184
Redis is not only the fastest database, but it is the most popular among the new wave of databases running in containers. Redis speeds up just about every data interaction between your users or operational systems.
In his session at 19th Cloud Expo, Dave Nielsen, Developer Advocate, Redis Labs, will share the functions and data structures used to solve everyday use cases that are driving Redis' popularity.
Jul. 27, 2016 04:30 PM EDT Reads: 1,614
To leverage Continuous Delivery, enterprises must consider impacts that span functional silos, as well as applications that touch older, slower moving components. Managing the many dependencies can cause slowdowns. See how to achieve continuous delivery in the enterprise.
Jul. 27, 2016 04:18 PM EDT Reads: 198
You think you know what’s in your data. But do you? Most organizations are now aware of the business intelligence represented by their data. Data science stands to take this to a level you never thought of – literally. The techniques of data science, when used with the capabilities of Big Data technologies, can make connections you had not yet imagined, helping you discover new insights and ask new questions of your data.
In his session at @ThingsExpo, Sarbjit Sarkaria, data science team lead ...
Jul. 27, 2016 04:15 PM EDT Reads: 1,122
Extracting business value from Internet of Things (IoT) data doesn’t happen overnight. There are several requirements that must be satisfied, including IoT device enablement, data analysis, real-time detection of complex events and automated orchestration of actions. Unfortunately, too many companies fall short in achieving their business goals by implementing incomplete solutions or not focusing on tangible use cases.
In his general session at @ThingsExpo, Dave McCarthy, Director of Products...
Jul. 27, 2016 04:00 PM EDT Reads: 1,735
Is your aging software platform suffering from technical debt while the market changes and demands new solutions at a faster clip?
It’s a bold move, but you might consider walking away from your core platform and starting fresh. ReadyTalk did exactly that.
In his General Session at 19th Cloud Expo, Michael Chambliss, Head of Engineering at ReadyTalk, will discuss why and how ReadyTalk diverted from healthy revenue and over a decade of audio conferencing product development to start an innovati...
Jul. 27, 2016 04:00 PM EDT Reads: 1,052
"Software-defined storage is a big problem in this industry because so many people have different definitions as they see fit to use it," stated Peter McCallum, VP of Datacenter Solutions at FalconStor Software, in this SYS-CON.tv interview at 18th Cloud Expo, held June 7-9, 2016, at the Javits Center in New York City, NY.
Jul. 27, 2016 04:00 PM EDT Reads: 1,517
WebRTC is bringing significant change to the communications landscape that will bridge the worlds of web and telephony, making the Internet the new standard for communications. Cloud9 took the road less traveled and used WebRTC to create a downloadable enterprise-grade communications platform that is changing the communication dynamic in the financial sector.
In his session at @ThingsExpo, Leo Papadopoulos, CTO of Cloud9, discussed the importance of WebRTC and how it enables companies to focus...
Jul. 27, 2016 03:30 PM EDT Reads: 971
StackIQ has announced the release of Stacki 3.2. Stacki is an easy-to-use Linux server provisioning tool. Stacki 3.2 delivers new capabilities that simplify the automation and integration of site-specific requirements. StackIQ is the commercial entity behind this open source bare metal provisioning tool.
Since the release of Stacki in June of 2015, the Stacki core team has been focused on making the Community Edition meet the needs of members of the community, adding features and value, while ...
Jul. 27, 2016 01:45 PM EDT Reads: 438
Deploying applications in hybrid cloud environments is hard work. Your team spends most of the time maintaining your infrastructure, configuring dev/test and production environments, and deploying applications across environments – which can be both time consuming and error prone. But what if you could automate provisioning and deployment to deliver error free environments faster? What could you do with your free time?
Jul. 27, 2016 01:30 PM EDT Reads: 273