
Importance of ‘Proof-of-Concept’ – in Right Sizing the Infrastructure

Importance of undertaking proof of concept (PoC) to examine the viability of an approach

It is now common practice for companies to invest significant time engaging consultants and designers, and to spend large sums on capacity planning, to size infrastructure for their specific needs. There is no denying that skilled people and capacity planning tools help identify the resources required to size infrastructure correctly. However, it is also necessary to run a proof of concept (PoC), especially before making critical decisions.

Concerns are always raised about achieving satisfactory performance. Moreover, mergers and acquisitions have added complexity to existing environments, creating technology-versus-application compatibility challenges. A PoC applies equally to setting up new infrastructure for a business-critical application from scratch and to specific IT initiatives such as data center consolidation, virtualizing a system, or moving to cloud-based solutions. A PoC helps a company define acceptance criteria and right-size the infrastructure for its specific needs. It supports business objectives by controlling budget overruns, and it helps IT management plan costs and procure resources so the project completes successfully. Because so many critical decisions are made in the design phase, many causes of cost overrun trace back to it; the most significant of these are blindly following theoretical evidence or completely trusting metrics obtained from unreliable capacity planning tools.

The purpose of a PoC is to demonstrate the benefits using real-world end-user scenarios and by calculating the TCO for each case. Against the key system performance metrics (processor, memory, disk, and network), workloads are usually classified into three types: (1) typical user, (2) power user, and (3) advanced power user. It is good practice to calculate load and system usage based on the "power user" profile. If funds permit, it is even better to use the upper bound by taking "advanced power user" usage into account.
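The classification above can be turned into a rough sizing calculation. The sketch below assumes illustrative per-user resource profiles for the three workload classes (the numbers are placeholders, not benchmarks); a real PoC would replace them with measured figures:

```python
# Hypothetical per-user resource profiles for the three workload classes.
# These numbers are illustrative assumptions, not measured benchmarks.
PROFILES = {
    "typical":        {"cpu_cores": 0.1, "memory_gb": 0.5, "disk_iops": 10,  "network_mbps": 1},
    "power":          {"cpu_cores": 0.3, "memory_gb": 1.5, "disk_iops": 40,  "network_mbps": 4},
    "advanced_power": {"cpu_cores": 0.6, "memory_gb": 3.0, "disk_iops": 100, "network_mbps": 10},
}

def aggregate_demand(user_count: int, workload_class: str = "power") -> dict:
    """Estimate total demand for each key metric, sizing every user
    at the chosen class ('power' by default, per the guidance above)."""
    profile = PROFILES[workload_class]
    return {metric: per_user * user_count for metric, per_user in profile.items()}

# Size a 500-user deployment against the power-user profile.
demand = aggregate_demand(500, "power")
print(demand)
```

Sizing every user at the power-user (or advanced-power-user) profile deliberately overestimates the typical users, which is exactly the conservative bias the article recommends.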

A PoC helps determine average and peak loads and size the infrastructure accordingly. It also enables consultants to anticipate future growth and leave sufficient headroom for all of the key system performance metrics discussed above.
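One common way to go from a PoC measurement to a capacity number is to scale the observed average to peak, project growth over the planning horizon, and then keep spare headroom. The sketch below shows this; the peak-to-average ratio, growth rate, horizon, and headroom values are illustrative assumptions that a real exercise would take from PoC data and business forecasts:

```python
def sized_capacity(avg_load: float, peak_to_avg: float = 2.0,
                   annual_growth: float = 0.20, horizon_years: int = 3,
                   headroom: float = 0.25) -> float:
    """Size capacity from an average load measured in a PoC:
    scale to peak, compound growth over the horizon, add headroom.
    All default ratios are illustrative assumptions."""
    peak = avg_load * peak_to_avg                       # average -> peak
    projected = peak * (1 + annual_growth) ** horizon_years  # future growth
    return projected * (1 + headroom)                   # spare headroom

# A PoC that measured an average of 100 IOPS would be sized for roughly:
print(round(sized_capacity(100)))
```

The same formula applies to any of the four base metrics; only the measured average and the assumed ratios change.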

Gartner predicts that the share of organizations using cloud services will reach 80% by the end of 2015. With cloud disaster recovery (DR) services becoming popular, companies want quick recovery of vital applications after a failure by taking advantage of cloud-based DR solutions. It is therefore becoming imperative for organizations to set their own PoC strategy, choose their own PoC clouds, navigate technical hurdles and compatibility challenges, and measure success.

In conclusion, to execute a project successfully, an organization should give maximum importance to the proof of concept, which defines the project's success criteria. A proof-of-concept template can be applied to various projects, helping businesses bridge the gap between the visionary and delivery stages of production efforts.

Figure: Resources equal money

More Stories By Sathyanarayanan Muthukrishnan

Sathyanarayanan Muthukrishnan has worked on and managed a variety of IT projects globally (Canada, Denmark, the United Kingdom, India) and interfaces with business leaders on the implementation of systems and enhancements. His areas of expertise include:

  • IT Operations Management.
  • Strategic IT road map planning & execution.
  • Data Center Management.
  • Architecture, Analysis and Planning.
  • Budgeting, Product comparisons: Cost - benefit analysis (Hardware, Software & Applications).
  • Disaster Recovery Planning & Testing.
  • Microsoft Windows & Unix Server farms management.
  • Databases (SQL, Oracle)
  • SAN/NAS storage management - capacity planning.
  • Virtualization & Cloud computing (Certified: Citrix, VMware, Hyper-V).
  • Networking & IT Security.
  • Process refinement, Issues trend Analysis & solutions, ITIL (Change & Problem management)
  • Best Practices Implementations & Stabilization initiatives.
