Data Center in a Box

CloudSwitch Offers a Perfect Solution to the “Data Center in a Box” Problem

Years ago I had the privilege of helping to grow BladeLogic from an early-stage startup into a profitable organization of over 300 people. In the early days, one of my first challenges was figuring out how to demonstrate our product to prospective customers effectively. I needed to show our ability to manage a large IT infrastructure, but I had to do so without actually dragging a data center to each of our sales calls. (My first attempt involved renting a fleet of trucks, but visitor parking turned out to be a real challenge.) As I look back on that situation now, I realize that CloudSwitch offers a perfect solution to this “data center in a box” problem. In this article I’ll walk through the use case and describe a new CloudSwitch feature, Sample VMs, which makes this possible.

The first step toward a virtual data center is, of course, virtualization. In late 2001 VMware released the third major version of their Workstation product. Given my demonstration requirement, I bought a copy of Workstation, found the biggest “mainstream” laptop available at the time, filled it with memory, and deployed as many VMs as it would run without completely falling over. Depending on the end user’s patience, that number was somewhere between four and six. While not exactly a world-class data center, the end result served us well for demonstration purposes. It was, however, limited in capacity, slow, expensive, and difficult to maintain.

In retrospect, what we really needed was a way to:

  1. Quickly start new servers and turn them off when finished;
  2. Use existing, internal virtual servers or public server images; and
  3. Connect to these servers as if they were on the local network.

Fast-forward nearly ten years and the first of these points—utility capacity on demand—is all but ubiquitous, courtesy of providers like Amazon and Terremark. We of course know this as “the cloud,” and companies use it every day for a variety of reasons. The latter two points are more interesting.

Today’s cloud providers have each built their platform on a particular virtualization solution—and in many cases they’ve customized that solution to suit the needs of their product offering. This is perfectly natural; one practical effect, however, is that end users cannot simply take their own virtual machines and expect to run them in a given cloud provider’s environment. The reasons vary—a different virtualization solution, different underlying hardware, different capabilities—but the end result is always the same: cloud providers do not let end users upload custom VMs and run them. This is where CloudSwitch comes in.

One of CloudSwitch’s fundamental benefits is the ability to run customers’ virtual servers in whichever cloud provider is most appropriate, regardless of the underlying implementation details.  After deploying our appliance, users can select virtual servers within their internal VMware environment and migrate them to a public cloud provider such as Amazon or Terremark without being forced to modify those servers in any way.  No additional software or configuration change is required for this to work.  Users literally “point and click” to migrate virtual servers from their data center into a cloud provider.

In many cases, users want to leverage the cloud but don’t want to migrate existing servers. CloudSwitch supports this approach as well. With the recent GA release, CloudSwitch allows customers to select from a set of public “Sample VMs” for access to cloud capacity. Customers can use these Sample VMs for a variety of purposes—evaluation, production, or anything in between. Further, since these machines have already been moved into the cloud, starting them is quick and efficient. Current Sample VMs include a stock CentOS 5.4 base image, SugarCRM, and Bugzilla running on a Windows OS. We’re expanding the list of Sample VMs based on a range of customer use cases, and plan to include many open source and partner products.

The final point—seamless connectivity—speaks to the way cloud providers offer connectivity to their instances. Today, each provider has chosen a particular network architecture for delivering its services. For example, if you start a Linux instance in Amazon’s EC2 service and run “ifconfig eth0”, you will likely see a 10.x.x.x IP address assigned to the interface. This is because Amazon has chosen the 10.0.0.0/8 private address space for connectivity to customer instances. Other cloud providers use different addressing schemes, but in every case those addresses are disconnected from the ones customers use within their own data centers. Further, secure connectivity to these instances is often inconvenient and in many cases impossible. CloudSwitch addresses this problem as well.
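
To make the addressing point concrete, here is a minimal Python sketch (purely illustrative, not part of CloudSwitch) that queries EC2’s instance metadata service for the instance’s private address and checks whether it falls in the 10.0.0.0/8 range; the metadata endpoint is Amazon-specific, and other providers expose this information differently.

    # Illustrative only: run on an EC2 instance to inspect the provider-assigned
    # private address. The 169.254.169.254 metadata endpoint is EC2-specific.
    import ipaddress
    import urllib.request

    METADATA_URL = "http://169.254.169.254/latest/meta-data/local-ipv4"

    with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
        local_ip = resp.read().decode().strip()

    # Amazon assigns instance addresses from the 10.0.0.0/8 private range,
    # which is disconnected from the customer's internal addressing scheme.
    in_amazon_range = ipaddress.ip_address(local_ip) in ipaddress.ip_network("10.0.0.0/8")
    print(f"instance address: {local_ip} (in 10.0.0.0/8: {in_amazon_range})")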

As part of the deployment process, CloudSwitch automatically creates a secure overlay network within the chosen cloud provider’s environment.  This overlay network extends a customer’s internal data center into the cloud so the cloud-based servers are part of the customer’s data center network.  When migrating existing servers into the cloud, end users see no difference; they can SSH or RDP to migrated instances without even realizing that their servers are no longer running within the data center.
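
As a purely hypothetical illustration of what “no difference” means in practice, the short Python sketch below (the address is made up, and this is not CloudSwitch code) connects from a machine on the corporate network to a server’s original data-center address and reads the SSH banner. The same call succeeds whether the server is still running on-premises or has been migrated into the cloud behind the overlay.

    # Hypothetical example: 192.168.10.25 stands in for a server's internal
    # data-center address. With the overlay network in place, the connection
    # behaves the same whether the server is local or running in the cloud.
    import socket

    SERVER_ADDR = "192.168.10.25"  # made-up internal address
    SSH_PORT = 22

    with socket.create_connection((SERVER_ADDR, SSH_PORT), timeout=5) as sock:
        banner = sock.recv(128).decode(errors="replace").strip()

    print(f"{SERVER_ADDR}:{SSH_PORT} -> {banner}")  # e.g. SSH-2.0-OpenSSH_...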

So, CloudSwitch offers a way to leverage the power of the public cloud without forcing end users to change the way their infrastructure is configured. We also offer a set of sample content customers can use if they simply want to establish a footprint in the cloud without migrating existing servers. Finally, end users connect to cloud servers just as if they were running within the data center network. The implication for my “data center in a box” use case is probably obvious: I could have installed the CloudSwitch Appliance on my sales engineers’ laptops, created a set of demo servers in the public cloud, and used these for field sales activity. We would have saved money on the laptops, but more importantly, my team would have been more effective.

Ultimately the cloud is about better service delivery. Better can certainly mean less expensive, but in my case better would have meant expressing the value of our product to prospective customers more effectively. Regardless of the definition, CloudSwitch offers a simple, secure, and effective way to leverage the cloud. Since the early startup days in 2001, my goal hasn’t really changed much; I still want the opportunity to show you how our product can make you more effective. The difference is that I finally have my “data center in a box” to prove it to you (and I don’t have to take up all of your visitor parking spots).

More Stories By Ellen Rubin

Ellen Rubin is the CEO and co-founder of ClearSky Data, an enterprise storage company that recently raised $27 million in a Series B investment round. She is an experienced entrepreneur with a record in leading strategy, market positioning and go-to-market efforts for fast-growing companies. Most recently, she was co-founder of CloudSwitch, a cloud enablement software company, acquired by Verizon in 2011. Prior to founding CloudSwitch, Ellen was the vice president of marketing at Netezza, where as a member of the early management team, she helped grow the company to more than $130 million in revenues and a successful IPO in 2007. Ellen holds an MBA from Harvard Business School and an undergraduate degree magna cum laude from Harvard University.
