
The Industry Needs to Stop Reacting to Outages

Virtualization, cloud and mobile have all demanded and enabled IT to extend the reach of mission-critical applications

It feels as if we can't go a week anymore without hearing about a new breach or outage. For years, IT departments stood by should the unimaginable happen and were judged by how quickly they could contain a bad situation. These days, however, it's not good enough to fix a problem, even if it's taken care of within a few minutes. Questions start to arise the minute something bad happens, and to really show strength IT departments have to stop the problem before it happens. Magic? No, it's just being proactive, and it is more imperative than ever that IT departments closely monitor the health of their infrastructure to keep the business running.

IT departments experience performance and availability issues on a daily basis, and these issues often go undiscovered until end users complain to customer service representatives or help desks. As IT environments become more complex, it has become increasingly important to identify where problems originate in order to head off downtime and performance-impacting events before they occur. How can IT predict the unimaginable?

First, stop focusing on troubleshooting. Companies like NASDAQ, Facebook, LinkedIn and Yahoo! experienced crippling outages in 2013 that impacted customers and hurt their bottom lines. What did they have in common? They weren't able to detect the problem until it was too late. These companies surely had the resources to catch issues before their customers were affected, yet their failure to implement a solution that could fix a problem before it happened cost them time, money, and a hit to their reputation.

Technology issues are the last thing one would expect to negatively impact a company's brand, but in reality nothing is more crucial to running a business than its data center infrastructure. Infrastructures today are expected to perform better, faster and more consistently than ever before. Couple this with adapting to an exponentially increasing rate of change, and you have a recipe for disaster.

IT organizations are now looking to consolidate data centers to reduce costs and improve efficiency. Many are turning to virtualization technology to help them get more value out of their existing assets while reducing their environmental impact. But virtualization adds another layer of complexity, making it difficult to see through the layers of abstraction into the underlying infrastructure. Most companies see a high-level view, but many are missing a huge piece of the puzzle: the infrastructure that underpins virtualization and application support in the enterprise.

Why is it so important to see that piece of the puzzle? With a full view of their infrastructure, organizations are much more likely to catch performance issues earlier and resolve them more quickly. The differentiating factor is that when you continuously view your entire infrastructure, you can see trends and match patterns. When something is off, it stands out quickly because you know what to look for, as the sketch below illustrates. Just as the security industry evolved around threat detection, the IT industry needs to evolve around infrastructure management.
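To make the idea concrete, here is a minimal sketch of baseline-driven detection, purely illustrative and not any particular vendor's method. The metric, window size and threshold are all assumptions for the example: a reading that strays far from its own recent history is exactly the kind of thing that "stands out quickly" when you know the pattern.

```python
from statistics import mean, stdev

def find_anomalies(samples, window=20, threshold=3.0):
    """Flag readings that deviate sharply from a rolling baseline.

    samples:   ordered metric readings, e.g. I/O latency in ms (assumed).
    window:    how many prior readings form the baseline (assumed size).
    threshold: deviations beyond this many standard deviations are "off".
    """
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # With a learned baseline, an outlier stands out immediately.
        if sigma > 0 and abs(samples[i] - mu) > threshold * sigma:
            anomalies.append((i, samples[i]))
    return anomalies

# Example: steady latency around 5 ms, then a sudden spike.
readings = [5.0, 5.2, 4.9, 5.1, 5.0] * 5 + [14.8]
print(find_anomalies(readings))  # -> [(25, 14.8)]
```

Real monitoring tools use far more sophisticated models, but the principle is the same: continuous visibility turns "something feels slow" into a measurable deviation you can act on before users notice.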

Three major technology developments - virtualization, cloud and mobile - have all demanded and enabled IT to extend the reach of mission-critical applications, but they have also limited the enterprise's ability to manage the underlying systems infrastructure. Because of these developments, the IT operations team is constantly chasing problems that are increasingly difficult to find and resolve. Virtualization in particular demands a balancing act: ensure the required performance is available while driving the highest level of utilization. Otherwise you've overprovisioned and are wasting cycles, money and resources.
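As a rough illustration of that balancing act, consider classifying hosts by average utilization. The band boundaries below are hypothetical, chosen for the example rather than drawn from any standard: too little utilization means wasted spend, too much means no headroom when demand spikes.

```python
def classify_hosts(utilization, low=0.20, high=0.80):
    """Bucket hosts into over-, well-, and under-provisioned groups.

    utilization: dict of host name -> average CPU utilization (0.0-1.0).
    low/high:    assumed bands; below `low`, paid-for capacity sits idle,
                 above `high`, there is little headroom for spikes.
    """
    overprovisioned, balanced, at_risk = [], [], []
    for host, u in utilization.items():
        if u < low:
            overprovisioned.append(host)   # wasting cycles and money
        elif u > high:
            at_risk.append(host)           # performance may suffer
        else:
            balanced.append(host)
    return overprovisioned, balanced, at_risk

over, ok, risky = classify_hosts({"esx-01": 0.12, "esx-02": 0.55, "esx-03": 0.91})
print(over, ok, risky)  # -> ['esx-01'] ['esx-02'] ['esx-03']
```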

Society today expects business applications to be available 24/7, without delay, and the old way of thinking - buy more boxes or throw hardware at the problem - only makes matters worse. What most IT organizations do not realize is that the solution is right there, within their existing infrastructure. It is imperative that they recognize the importance of regularly monitoring for, and proactively searching out, the symptoms that could lead to the next breach or outage.

By using technologies that shine a light into the darkest parts of the data center and arm users with definitive insight into the performance, health and utilization of the infrastructure, organizations can shift their focus to finding trouble before it starts. Instead of being reactive, we can become a proactive industry that diagnoses and resolves issues before they start negatively impacting a business. The result? Greatly improved performance of existing infrastructure that enables IT to align actual workloads with requirements and drive the highest levels of performance and availability at the optimal cost.

More Stories By John Gentry

John Gentry has been with Virtual Instruments since early 2009, leading the team responsible for bringing Virtual Instruments' message and vision to market. He brings 18 years of experience in marketing, product marketing, sales and sales engineering in the open systems and storage ecosystem. He double-majored in Economics and Sociology at the University of California, Santa Cruz. John first entered the technology field as a marketing intern for Borland International, where he went on to become the first Product Marketing Manager for its Java compiler. Since then he has held various positions at mid-sized systems integrators and established storage networking companies.

