
Consolidating Big Data

How to make your data center more cost-effective while improving performance

Cloud computing has opened the doors to a vast array of online services. With the emergence of new cloud technologies, both public and private companies are seeing increases in performance gains, elasticity and convenience. However, maintaining a competitive advantage has become increasingly difficult. Service providers are taking a closer look at their data storage infrastructure for ways to improve performance and cut costs.

If the status quo remains, maintaining low-cost cloud services will become increasingly difficult. Service providers will incur higher costs, while consumers become burdened with storage capacity restrictions. Such obstacles are influencing service providers to find new ways to scale cost-effectively and increase performance in the data center.

Cost-Benefit Analysis
In response to increased online account activity, service providers are consolidating their data centers into a centralized environment. By doing so, they are able to cut costs while increasing efficiency, allowing data to be accessible from any location. Centralizing equipment also enables providers to deliver enhanced Internet connectivity, performance and reliability.

However, these benefits come with disadvantages. For instance, scalability becomes more expensive and difficult to achieve. Improving efficiency within a centralized data center requires purchasing additional high-performance, specialized equipment, which drives up costs and energy consumption, both of which are hard to control at scale. In an economy where cost-cutting is becoming a necessity for large and small enterprises alike, these added expenses are unacceptable.

Characteristics of the Cloud
Solving performance problems, like data bottlenecks, is a growing concern for cloud providers, who must oversee significantly more users, and their accompanying performance demands, than enterprises do. Although the average user of an enterprise system requires elevated performance, these systems generally manage fewer users, who can access their data directly through the network. Moreover, enterprise users are accessing, saving and sending comparatively small files that demand less storage capacity and performance.

Outside the internal enterprise network, however, it's a different story. Cloud systems are accessed simultaneously by a multitude of users across the Internet, which itself becomes a performance bottleneck. The average cloud user also stores larger files than the average enterprise user, placing greater strain on data center resources. The cloud provider's storage system not only has to scale to each user, but must also sustain performance across all users.

Best Practices
In response to growing storage demands, cloud providers are faced with profound business implications. Service providers need to scale quickly in order to meet the booming demand for more data storage. The following best practices can help optimize data center ROI in a period of significant IT cutbacks:

  • Opt for commodity components when possible: Low-energy hardware makes good business sense. Commodity hardware is not only cost-effective, but also energy-efficient, which significantly reduces both setup and operating costs in one move.
  • Seek out a distributed storage system: Distributed storage presents the best way to build at scale even though the data center trend has been moving toward centralization. Increased performance at the software level counterbalances the performance advantage of a centralized data storage approach.
  • Avoid bottlenecks: A single point of entry can easily lead to a performance bottleneck. Adding caches to relieve the bottleneck, as most data center infrastructures currently do, quickly adds cost and complexity to a system. By contrast, a horizontally scalable system that distributes data among all nodes avoids the single point of entry while delivering a high level of redundancy.
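To make the last point concrete, the following is a minimal sketch of one common way to distribute data among all nodes without a single point of entry: consistent hashing. The article does not name a specific algorithm, so this is an illustrative assumption, and the node names are hypothetical. Any node can compute an object's location locally, and adding a node relocates only a fraction of the keys.

```python
# Minimal consistent-hashing sketch (illustrative; not any vendor's actual design).
# Each physical node is placed on a hash ring via many virtual points, so keys
# spread evenly across all nodes and no single entry point handles every request.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=100):
        # Several virtual points per node smooth out the key distribution.
        self.ring = sorted(
            (self._hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise to the first virtual point at or after the key's hash.
        idx = bisect.bisect(self.keys, self._hash(key)) % len(self.keys)
        return self.ring[idx][1]

# Any client or node can resolve placement independently.
ring = HashRing(["node-a", "node-b", "node-c"])
placement = {k: ring.node_for(k) for k in ("photo.jpg", "backup.tar", "doc.pdf")}
```

Because placement is a pure function of the key and the node list, there is no central directory to consult on every request, which is exactly the property that removes the single point of entry.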

Moving Forward
Currently, Big Data storage consists mainly of high-performance, vertically scaled storage systems. Because these infrastructures can scale only to around a single petabyte and are costly, they are not a sustainable solution. Moving to a horizontally scaled data storage model that distributes data evenly onto energy-efficient hardware can reduce costs and increase performance in the cloud. With these insights, cloud service providers can take the necessary steps to improve the efficiency, scalability and performance of their data storage centers.

More Stories By Stefan Bernbo

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, he has designed and built numerous enterprise scale data storage solutions designed to be cost effective for storing huge data sets. From 2004 to 2010 Stefan worked within this field for Storegate, the wide-reaching Internet based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan has worked with system and software architecture on several projects with Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.
