Containers Expo Blog: Blog Feed Post

Data Center in a Box

CloudSwitch Offers a Perfect Solution to the “Data Center in a Box” Problem

Years ago I had the privilege of helping to grow BladeLogic from an early-stage startup into a profitable organization of over 300 people.  In the early days, one of my first challenges was figuring out how to demonstrate our product to prospective customers effectively.  I needed to show our ability to manage a large IT infrastructure, but I had to do so without actually dragging a data center to each of our sales calls.  (My first attempt involved renting a fleet of trucks, but visitor parking turned out to be a real challenge.)  Looking back on that situation now, I realize that CloudSwitch offers a perfect solution to this “data center in a box” problem.  In this article I’ll walk through the use case and describe a new CloudSwitch feature, Sample VMs, that makes it possible.

The first step toward a virtual data center is to use virtualization, of course. In late 2001 VMware released the third major version of their Workstation product.  Given my demonstration requirement, I bought a copy of Workstation, found the biggest “mainstream” laptop available at the time, filled it with memory, and deployed as many VMs as it would run without completely falling over.  Depending on the end user’s patience, that number was somewhere between four and six.  While not exactly a world-class data center, the end result served us well for demonstration purposes.  It was, however, limited in capacity, slow, expensive, and difficult to maintain.

In retrospect, what we really needed was a way to:

  1. Quickly start new servers and turn them off when finished;
  2. Use existing, internal virtual servers or public server images; and
  3. Connect to these servers as if they were on the local network.

Fast-forward nearly ten years and the first of these points—utility capacity on demand—is all but ubiquitous, courtesy of providers like Amazon and Terremark.  We know this, of course, as “the cloud,” and companies use it every day for a variety of reasons.  The other two points are more interesting.

Today’s cloud providers have each implemented their platform on a particular virtualization solution—and in many cases they’ve customized that solution to suit the needs of their product offering.  This is perfectly natural; however, one practical effect is that end users cannot simply take their own virtual machines and expect to run them within a given cloud provider’s environment.  The reasons vary—a different virtualization solution, different underlying hardware, different capabilities—but the end result is always the same: cloud providers will not let end users upload custom VMs and run them.  This is where CloudSwitch comes in.

One of CloudSwitch’s fundamental benefits is the ability to run customers’ virtual servers in whichever cloud provider is most appropriate, regardless of the underlying implementation details.  After deploying our appliance, users can select virtual servers within their internal VMware environment and migrate them to a public cloud provider such as Amazon or Terremark without being forced to modify those servers in any way.  No additional software or configuration change is required for this to work.  Users literally “point and click” to migrate virtual servers from their data center into a cloud provider.

In many cases, users want to leverage the cloud but don’t want to migrate existing servers.  CloudSwitch supports this approach as well.  With the recent GA release, CloudSwitch allows customers to select from a set of public “Sample VMs” for access to cloud capacity.  Customers can use these sample VMs for a variety of purposes—evaluation, production, or anything in between.  Further, since these machines have already been moved into the cloud, starting them is quick and efficient.  Current Sample VMs include a stock CentOS 5.4 base image, SugarCRM, and Bugzilla running on a Windows OS.  We’re expanding the list of Sample VMs based on a range of customer use cases, and we plan to include many open source and partner products.

The final point—seamless connectivity—speaks to the way cloud providers offer connectivity to their instances.  Today, each provider has chosen a particular network architecture for delivery of its services.  For example, if you start a Linux instance in Amazon’s EC2 service and run “ifconfig eth0”, you will likely see a 10.x.x.x IP address assigned to the interface.  This is because Amazon has chosen the 10.0.0.0/8 private address space for connectivity to customer instances.  Other cloud providers use different addressing schemes, but in every case those addresses are disconnected from the ones customers use within their own data centers.  Further, secure connectivity to these instances is not convenient and in many cases is not possible.  CloudSwitch addresses this problem as well.
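To make the addressing point concrete, here is a small sketch using Python’s standard `ipaddress` module.  The instance address is a hypothetical example of what “ifconfig eth0” might report on an EC2 instance; the point is simply that it falls inside the 10.0.0.0/8 private range (RFC 1918) rather than in any customer-controlled subnet.

```python
from ipaddress import ip_address, ip_network

# Amazon's choice of private address space for customer instances.
amazon_private = ip_network("10.0.0.0/8")

# Hypothetical address reported by `ifconfig eth0` on an EC2 instance.
instance_ip = ip_address("10.254.3.17")

print(instance_ip in amazon_private)  # True: inside the provider's range
print(instance_ip.is_private)         # True: RFC 1918 private space
```

Because the address is private to the provider’s network, it is not routable from the customer’s data center without some additional connectivity layer—which is exactly the gap described above.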

As part of the deployment process, CloudSwitch automatically creates a secure overlay network within the chosen cloud provider’s environment.  This overlay network extends a customer’s internal data center into the cloud so the cloud-based servers are part of the customer’s data center network.  When migrating existing servers into the cloud, end users see no difference; they can SSH or RDP to migrated instances without even realizing that their servers are no longer running within the data center.
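The effect of the overlay can be illustrated with the same `ipaddress` module.  The subnet and addresses below are hypothetical: the provider-assigned address sits outside the customer’s data-center subnet, while the address the overlay presents sits inside it, so existing tools (SSH, RDP, monitoring) keep working unchanged.

```python
from ipaddress import ip_address, ip_network

# Hypothetical internal data-center subnet.
dc_subnet = ip_network("192.168.10.0/24")

# Provider-assigned address of a cloud instance (not on the DC network).
native_cloud_ip = ip_address("10.204.15.7")

# Address the overlay network presents for the same instance.
overlay_ip = ip_address("192.168.10.42")

print(native_cloud_ip in dc_subnet)  # False: unreachable by DC tooling
print(overlay_ip in dc_subnet)       # True: looks like a local server
```

This is why users can SSH or RDP to a migrated instance as if it never left the data center: from the network’s perspective, it hasn’t.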

So, CloudSwitch offers a way to leverage the power of the public cloud without forcing end users to change the way their infrastructure is configured.  We also offer a set of sample content customers can use if they simply want to establish a footprint in the cloud without migrating existing servers.  Finally, end users connect to cloud servers just as if those servers were running within the data center network.  The implication for my “data center in a box” use case is probably obvious: I could have installed the CloudSwitch Appliance on my sales engineers’ laptops, created a set of demo servers in the public cloud, and used these for field sales activity.  We would have saved money on the laptops, but more importantly, my team would have been more effective.

Ultimately the cloud is about better service delivery.  Better can certainly mean less expensive but in my case better would have meant more effectively expressing the value of our product to prospective customers.  Regardless of the definition, CloudSwitch offers a simple, secure, and effective way to leverage the cloud.  Since the early startup days in 2001 my goal hasn’t really changed much; I still want the opportunity to show you how our product can make you more effective.  The difference is I finally have my “data center in a box” to prove it to you (and I don’t have to take up all of your visitor parking spots).


More Stories By Ellen Rubin

Ellen Rubin is the CEO and co-founder of ClearSky Data, an enterprise storage company that recently raised $27 million in a Series B investment round. She is an experienced entrepreneur with a record in leading strategy, market positioning and go-to-market efforts for fast-growing companies. Most recently, she was co-founder of CloudSwitch, a cloud enablement software company, acquired by Verizon in 2011. Prior to founding CloudSwitch, Ellen was the vice president of marketing at Netezza, where as a member of the early management team, she helped grow the company to more than $130 million in revenues and a successful IPO in 2007. Ellen holds an MBA from Harvard Business School and an undergraduate degree magna cum laude from Harvard University.
