


The Industry Needs to Stop Reacting to Outages

Virtualization, cloud and mobile have all demanded and enabled IT to extend the reach of mission-critical applications

It feels as if a week can't go by without news of another breach or outage. For years, IT departments stood by in case the unimaginable happened, and were judged by how quickly they could contain a bad situation. These days, however, fixing a problem isn't good enough - even if it's taken care of within a few minutes. Questions start to arise the minute something bad happens, and to really show strength IT departments have to stop the problem before it happens. Magic? No, just being proactive: it is more imperative than ever that IT closely monitor the health of its infrastructure to keep the business running.

IT departments experience performance and availability issues on a daily basis, and these often go undiscovered until end users complain to customer service representatives or help desks. As IT environments become more complex, it is increasingly important to identify where problems originate in order to head off downtime or performance-impacting events before they occur. How can IT predict the unimaginable?

First, stop focusing on troubleshooting. Companies like NASDAQ, Facebook, LinkedIn and Yahoo! experienced crippling outages in 2013 that impacted customers and hurt their bottom lines. What did they have in common? None of them detected the problem until it was too late. These companies surely had the resources to catch issues before customers were affected, yet their failure to implement a solution that addresses problems before they happen cost them time, money and a hit to their reputations.

Technology issues are the last things one would expect to negatively impact a company's brand, but in reality nothing is more crucial to running a business than its datacenter infrastructure. Infrastructures today are expected to perform better, faster and more consistently than ever before. Couple this with adapting to an exponentially increasing rate of change, and you have a recipe for disaster.

IT organizations are now looking to consolidate data centers to reduce costs and improve efficiencies. Many are turning to virtualization technology to help them get more value out of their existing assets while improving the environmental impact. But virtualization adds an additional layer of complexity, making it difficult to see through the layers of abstraction into the underlying infrastructure. Most companies see a high-level view, but many times they are missing a huge piece of the puzzle that underpins virtualization and application support in the enterprise.

Why is it so important to see that piece of the puzzle? With a full view of their infrastructure, organizations are much more likely to catch performance issues earlier and resolve them more quickly. The differentiating factor is that when you continuously view your entire infrastructure, you can spot trends and recognize patterns. When something is off, it stands out quickly because you know what normal looks like. Much as the security industry evolved around threat detection, the IT industry needs to evolve around infrastructure management.
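Continuous visibility makes "something is off" quantifiable. As a minimal sketch of the idea (illustrative only - the article names no specific tooling or metric), a rolling-baseline check over a latency series flags any reading that deviates from its recent pattern by more than a few standard deviations:

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Flag indices whose value deviates from a rolling baseline
    of the previous `window` samples by more than `threshold`
    standard deviations (a simple z-score check)."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady latency around 10 ms, then a spike the baseline catches.
latency_ms = [10.0, 10.2, 9.8, 10.1, 9.9] * 5 + [30.0]
print(detect_anomalies(latency_ms))  # → [25]
```

Knowing "what to look for" here is nothing more than the metric's own recent history; a real deployment would apply this kind of check continuously across many metrics and alert before end users notice.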

Three major technology developments - virtualization, cloud and mobile - have all demanded and enabled IT to extend the reach of mission-critical applications, but have limited the enterprise's ability to manage the underlying systems infrastructure. Because of these developments, the IT operations team is constantly chasing problems that are increasingly difficult to find and resolve. Virtualization in particular demands a balancing act: ensure the required performance is available while driving the highest level of utilization. Otherwise you've overprovisioned and are wasting cycles, money and resources.
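That balancing act can be expressed as a simple headroom policy. The sketch below is a hypothetical example - the thresholds and host names are assumptions, not anything the article prescribes - classifying hosts as overprovisioned (wasted capacity), balanced, or at risk of performance degradation:

```python
def classify_host(cpu_util, floor=0.30, ceiling=0.80):
    """Classify a host by average CPU utilization.
    Below `floor`: capacity is being wasted (overprovisioned).
    Above `ceiling`: insufficient headroom, performance is at risk.
    In between: utilization and performance are balanced."""
    if cpu_util < floor:
        return "overprovisioned"
    if cpu_util > ceiling:
        return "at-risk"
    return "balanced"

# Hypothetical hosts with their average utilization over a week.
hosts = {"esx-01": 0.12, "esx-02": 0.55, "esx-03": 0.91}
for name, util in hosts.items():
    print(f"{name}: {classify_host(util)}")
```

Even a crude policy like this turns the utilization-versus-performance trade-off into something that can be reported on continuously rather than discovered during an outage.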

Society today expects business applications to be available 24/7, without delay, and the old way of thinking - buy more boxes or throw hardware at the problem - only makes matters worse. What most IT organizations do not realize is that the solution is right there, within their existing infrastructures. It is imperative they realize the importance of regularly monitoring and proactively searching for symptoms that could lead to a new breach or outage.

By using technologies that shine a light into the darkest part of the datacenter and arming users with definitive insight into the performance, health and utilization of the infrastructure, organizations can shift their focus to finding trouble before it starts. Instead of being reactive, we can switch to being a proactive industry that is able to diagnose and resolve issues before they start negatively impacting a business. The result? Greatly improved performance of existing infrastructures that enable IT to align actual workload with requirements and drive the highest levels of performance and availability at the optimal cost.

More Stories By John Gentry

John Gentry has been with Virtual Instruments since early 2009, leading the team responsible for bringing Virtual Instruments' message and vision to the market. He brings 18 years of experience in Marketing, Product Marketing, Sales and Sales Engineering in the Open Systems and Storage ecosystem. He was a double major in Economics and Sociology at the University of California, Santa Cruz. John first entered the technology field as a marketing intern for Borland International, where he went on to be the first Product Marketing Manager for their Java Compiler. Since then he has held various positions at mid-sized systems integrators and established storage networking companies.

