Importance of ‘Proof of Concept’ in Right-Sizing the Infrastructure

The importance of undertaking a proof of concept (PoC) to examine the viability of an approach

It is now common practice for companies to invest considerable time engaging consultants and designers, and to spend large sums on capacity planning, in order to size infrastructure for their specific needs. Skilled people and capacity planning tools are certainly helpful in identifying the resources required to size infrastructure correctly. However, it is also necessary to run a proof of concept (PoC), especially before making critical decisions.

Concerns are always raised about achieving satisfactory performance. Moreover, mergers and acquisitions bring their share of complexity to existing environments, resulting in compatibility challenges between technologies and applications. A PoC is relevant whether you are setting up new infrastructure for a business-critical application from scratch or addressing a specific IT requirement, such as data center consolidation, virtualizing a system, or moving to cloud-based solutions. Proofs of concept help companies define acceptance criteria and right-size infrastructure to their specific needs. They support business objectives by controlling budget overruns, and they help IT management plan costs and procure resources to ensure successful completion of a project. Because the design phase is responsible for many critical decisions, many causes of cost overrun are rooted there; the most significant design-phase causes are blindly following theoretical evidence or completely trusting metrics obtained from unreliable capacity planning tools.

The purpose of a PoC is to demonstrate the benefits using real-world end-user scenarios and to calculate the TCO for each case. Against the key system performance metrics of processor, memory, disk, and network, workloads are usually classified into three types: (1) typical user, (2) power user, and (3) advanced power user. It is always good practice to calculate load and system usage based on the power-user profile. If funds permit, it is better still to use an upper bound by taking advanced power-user usage into account.
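As a rough illustration of sizing against these workload classes (the per-user figures and metric names below are hypothetical placeholders, not values from the article or any measured PoC):

```python
# Hypothetical per-user resource profiles for the three workload classes.
# All figures are illustrative placeholders, not measured values.
PROFILES = {
    "typical_user":        {"cpu_ghz": 0.2, "ram_gb": 0.5, "disk_iops": 10, "net_mbps": 1},
    "power_user":          {"cpu_ghz": 0.5, "ram_gb": 1.0, "disk_iops": 30, "net_mbps": 3},
    "advanced_power_user": {"cpu_ghz": 1.0, "ram_gb": 2.0, "disk_iops": 60, "net_mbps": 6},
}

def size_for(profile: str, users: int) -> dict:
    """Scale a per-user profile to the expected concurrent user count."""
    per_user = PROFILES[profile]
    return {metric: value * users for metric, value in per_user.items()}

# Size for 500 concurrent users against the power-user profile,
# as the article recommends; use "advanced_power_user" for an upper bound.
demand = size_for("power_user", 500)
print(demand)
```

Swapping `"power_user"` for `"advanced_power_user"` yields the upper-bound estimate the article suggests when budget allows.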

A PoC helps in measuring average and peak loads and sizing accordingly. It also enables consultants to factor in anticipated future growth and leave sufficient headroom across all of the key system performance metrics discussed above.
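One common way to turn PoC measurements into a sizing figure (the growth and headroom percentages here are assumed for illustration, not prescribed by the article) is to start from the observed peak load, compound expected growth over the planning horizon, and add a headroom margin:

```python
def required_capacity(peak_load: float,
                      annual_growth: float = 0.20,
                      years: int = 3,
                      headroom: float = 0.25) -> float:
    """Size from the PoC-observed peak load, compounding expected
    annual growth over the planning horizon and adding safety headroom.
    Default factors (20% growth, 25% headroom) are illustrative."""
    grown = peak_load * (1 + annual_growth) ** years
    return grown * (1 + headroom)

# PoC measured a peak of 400 IOPS; plan for 3 years of 20% growth
# with 25% headroom on top.
print(round(required_capacity(400)))  # 864
```

The same calculation applies per metric (CPU, memory, disk, network), using the peak value the PoC observed for each.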

Gartner predicts that the portion of organizations using cloud services will reach 80% by the end of 2015. With cloud disaster recovery (DR) services becoming popular, companies want quick recovery of vital applications in case of failure by taking advantage of cloud-based DR solutions. Hence, it is becoming imperative for organizations to set their own PoC strategy, choose their own PoC clouds, navigate technical hurdles and compatibility challenges, and measure success.

In conclusion, to execute a project successfully, an organization must give maximum importance to the proof of concept, which defines the project's success criteria. A proof-of-concept template can be applied across projects to help businesses bridge the gap between the visionary and delivery stages of production efforts.

Figure: Resources equal money

More Stories By Sathyanarayanan Muthukrishnan

Sathyanarayanan Muthukrishnan has worked on and managed a variety of IT projects globally (Canada, Denmark, United Kingdom, India) and has interfaced with business leaders on the implementation of systems and enhancements.

  • IT Operations Management.
  • Strategic IT road map planning & execution.
  • Data Center Management.
  • Architecture, Analysis and Planning.
  • Budgeting, Product comparisons: Cost - benefit analysis (Hardware, Software & Applications).
  • Disaster Recovery Planning & Testing.
  • Microsoft Windows & Unix Server farms management.
  • Databases (SQL, Oracle)
  • SAN/NAS storage management - capacity planning.
  • Virtualization & Cloud computing (Certified: Citrix, VMware, Hyper-V)
  • Networking & IT Security.
  • Process refinement, Issues trend Analysis & solutions, ITIL (Change & Problem management)
  • Best Practices Implementations & Stabilization initiatives.
