By Dana Gardner
August 24, 2012 06:00 AM EDT
The latest BriefingsDirect end-user case study examines how outerwear and sportswear maker and distributor Columbia Sportswear has used virtualization to significantly improve its business operations.
We’ll see how Columbia Sportswear’s use of deep virtualization helped rationalize its platforms and data center, and how it paid off in the company’s enterprise resource planning (ERP) implementation. We’ll also learn how virtualizing mission-critical applications formed a foundation for improved disaster recovery (DR) best practices.
Stay with us now to learn more about how better systems make for better applications that deliver better business results with Michael Leeper, Senior Manager of IT Engineering at Columbia Sportswear, and Suzan Frye, Manager of Systems Engineering at Columbia Sportswear, in Portland, Oregon. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]
Here are some excerpts:
Gardner: Tell me a little bit about how you got into virtualization. What were some of the requirements that you needed to fulfill at the data center level?
Leeper: Pre-2009, we'd experimented with virtualization. It was one of those things I had my teams working on, mostly so we could tell my boss we were doing it, but there wasn’t a significant focus on it. It was a nice toy to play with in the corner, and it helped us in some small areas, but there were no big wins there.
Columbia Sportswear is the worldwide leader in apparel and accessories. We sell primarily outerwear and sportswear products, and a little bit of footwear, globally. We have about 4,000 employees and some 50-odd physical locations around the world, not counting retail. The products are primarily manufactured in Asia, with sales and distribution happening in both Europe and the United States.
My teams out of the U.S. manage our global footprint, and we are the sole source of IT support globally from here.
In mid-2009, the board of directors at Columbia decided that we, as a company, needed a much stronger DR plan. That included the construction of a new data center for us to house our production environments offsite.
As we were working through the requirements of that project with my teams, it became pretty clear for us that virtualization was the way we were going to make that happen. For various reasons, we set off on this path of virtualization for our primary data center, as we were working through issues surrounding multiple data centers and DR processes.
Our technologies weren't based on the physical world anymore. We were finding more issues in the physical world than in the virtual one. So we started down this path to virtualize our entire production world. By that point, mid-2010 had come around, and we were ready to go. We had built our DR stack and virtualized our primary data center, taking us to an 80 to 90 percent virtual machine (VM) rate.
We were extremely successful in that process. We were able to move our primary data center over a couple of weekends with very little downtime to the end users, and that was all built on VMware technology.
About a week after we had finished that project, I got a call from our CIO, who said he had purchased a new ERP system, and Columbia was going to start down the path of a fully new ERP implementation.
I was asked at that time what platform we should run it on, and we had a clean slate to look everywhere we could for what we felt was the safest and most stable platform to run the crown jewels of the company, which is ERP. For us, that was going to be the SAP stack.
So it wasn't a hard decision to virtualize ERP for us. We were 90 percent virtual anyway. That’s what we were good at, and that’s where our teams were staffed and skilled. What we did was design the platform that we felt would meet our corporate standards and really meet our goals. For us, that was running ERP on VMware.
Gardner: It sounds as if you had a good rationale for moving into a highly virtualized environment, but that then it made it easier for you to do other things.
Leeper: There are a couple of things there. Specifically in the migration to virtualization, we knew we were going to have to go through the effort of moving operating systems from one site to another. We determined that we could do that once on the physical side relatively easily, with probably the same amount of effort as converting physical to virtual once.
The problem was that the next time we wanted to move services back from one facility to another in the physical world, we would have to do that work again. In the virtual space, we never had to do it again.
By having the teams go through the effort of virtualizing a server before moving it to another data center, we only had to do the work once. For my engineers, any time we get them to do the mundane stuff once, it's better than doing it multiple times. So we took care of that effort in the early phase of the project to virtualize our environments.
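Once a workload is virtual, a data-center move stops being a forklift project and becomes an API call against vCenter. As an illustration only, not Columbia's actual tooling, here is a minimal sketch in Python with pyVmomi (VMware's vSphere SDK) of relocating a VM to storage and compute at another site within the same vCenter inventory; all host names, credentials, and inventory names below are hypothetical.

# Hedged sketch: assumes pyVmomi is installed, both sites are managed by
# one vCenter, and the named objects exist. All names are hypothetical.
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skips certificate checks
si = SmartConnect(host='vcenter.example.com', user='svc-migrate',
                  pwd='secret', sslContext=ctx)
content = si.RetrieveContent()

# Index the inventory by name (fine for a sketch; names can collide in real use).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine, vim.Datastore, vim.ResourcePool], True)
by_name = {obj.name: obj for obj in view.view}

spec = vim.vm.RelocateSpec()
spec.datastore = by_name['dr-site-ds01']  # datastore at the other data center
spec.pool = by_name['dr-site-pool']       # resource pool at the other site

task = by_name['erp-app-01'].RelocateVM_Task(spec=spec)  # the move is one call
while task.info.state not in (vim.TaskInfo.State.success,
                              vim.TaskInfo.State.error):
    time.sleep(2)
print('Relocation finished:', task.info.state)
Disconnect(si)

Do the physical-to-virtual conversion once, and every subsequent move looks like this.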
For the ERP platform specifically, this was a net new implementation. We were converting from a JD Edwards environment running on IBM big iron to a brand-new SAP stack. We didn’t have anything to migrate. This was really built from scratch.
So we didn’t have to worry about a lot of the legacy configurations or legacy environments that may have been there for us. We got to build it new. And by that point in our journey, virtualized was the only way for us to do it. That’s what we do, it’s how we do it, and that's what we’re good at.
Across the board
Gardner: I saw statistics indicating that you went from 25 percent to 75 percent virtualization in about eight months, which is really impressive. How did you set that pace, and what was important in keeping it going?
Frye: The only way we could do it was with virtualization and the efficiencies we gained from it. We centrally manage all of IT and engineering globally out of our headquarters in Portland. When we were given the initial project to not only move our data center but also provide DR services, it was a really easy sell to the business.
We could go to the business and explain to them the benefits of virtualization and what it would mean for their application. They wouldn’t have to rebuild and they wouldn’t have to bring in the vendor or any consultants. We can just take their systems, virtualize them, move them to our new data center, and then provide that automatic DR with Site Recovery Manager (SRM).
We had nine months to move our data center, and it was basically all hands on deck: everybody on the server engineering team, plus the storage and networking teams as well. And we had executive support and sponsorship. It was very easy for us to market virtualization to the business and start down that path of socializing the idea. A lot of people, of course, were dragging their feet a little bit. We all know that story.
But once they realized that we could move their application, bring it back up, and then move it between data centers almost seamlessly, it was an instant win for us. We went from that 20 to 30 percent virtualization to about 75 percent in the middle of our DR project, and today we’re actually at around 93 percent.
I think it surprises people that we have a "virtualize first" strategy today. Now it’s assumed that your system will be virtual, with all the benefits that come with it: the flexibility, the portability, the optimization, and the efficiencies.
But like most companies, we had to start with some of our lower tier or lower service-level agreement (SLA) systems, our development systems, and start working with the business on getting them to understand some of the benefits that they could gain by working with virtual systems.
Performance is there
Again, people are always surprised. Do you have SQL virtualized? Do you have SAP virtualized? And the answer is yes, today we do, and the performance is there, the optimization is there, and the flexibility is there.
If you’re just starting out today, my advice would be to start small. Give the business what they want, do it right, and give it the resources it needs. Under-promise, over-deliver, and let the business start seeing the efficiencies they can realize, including some of those hidden efficiencies as well.
We can support DR testing. We can support almost instant data refreshes, cloning, and snapshotting, so their upgrades are more seamless and they have an easier back-out plan.
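That back-out plan can be a single scripted snapshot. A minimal pyVmomi sketch, reusing the connection pattern from the earlier example (the VM name here is hypothetical):

from pyVmomi import vim

# Assumes an existing service-instance connection "si" as in the earlier sketch.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'sap-dev-02')  # hypothetical name

# Take a snapshot before the upgrade; reverting to it is the back-out plan.
task = vm.CreateSnapshot_Task(name='pre-upgrade',
                              description='Back-out point before patching',
                              memory=False,  # disk-only snapshot
                              quiesce=True)  # flush guest I/O via VMware Tools

# If the upgrade goes wrong, roll back:
# vm.snapshot.currentSnapshot.RevertToSnapshot_Task()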
From an engineering and development perspective, we're giving them technologies that they could only dream of four or five years ago. And it’s really benefited the business in that we’re auto-provisioning. We’re provisioning in minutes versus days. We’re granting resources when needed.
It’s a more dynamic process for the business, and we’re really seeing that people are saying, "You’re not just a cost center anymore. You’re enabling us, you’re helping us to do what we need to do and basically doing it on-demand." So our team has really started shining these last few years, especially because of our high virtualization percentage.
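Provisioning in minutes generally comes down to cloning a prepared template. As a hedged sketch of that pattern with pyVmomi, again reusing the connection "si" from above (the template, folder, and pool names are hypothetical):

from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine, vim.Folder, vim.ResourcePool], True)
by_name = {obj.name: obj for obj in view.view}

# Clone a golden-image template into a development pool and power it on.
spec = vim.vm.CloneSpec(
    location=vim.vm.RelocateSpec(pool=by_name['dev-pool']),
    powerOn=True)
task = by_name['app-template'].CloneVM_Task(folder=by_name['dev-folder'],
                                            name='new-app-server-01',
                                            spec=spec)

A request that once meant racking hardware becomes one clone task, which is where the minutes-versus-days difference comes from.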
Leeper: A company that's looking to move into this virtualization space has got to get some wins. You’ve got to tackle some environments or projects that you can be successful at, hopefully by partnering with business users and business owners who are willing to take a bit of a chance.
If you set off trying to attack an entire data-center virtualization project all at once, you’re probably not going to be successful at it. There are a lot of ways that the business, application vendors, and other factors can throw roadblocks in your path.
Once you start chipping away at a couple of them and get beyond the easy stuff, go find one that on paper looks a little difficult, and get that one done. Then you can very quickly point back to success on that piece and start working your way through the rest of them.
Frye: As we were rolling out some of our Tier 1, mission-critical applications, the business decided they wanted to test DR. They were going down the path of doing that the old-fashioned way, by backing up and restoring databases, and taking days and weeks to do it.
We said, "We think we have a better way with SRM and our replication technologies. We have that data here. Why don't you let us clone that data and stand it up for you?" Literally, within 10 seconds, they had a replica of their data.
So we were enabling them to do their DR testing with SRM, on demand, whenever they wanted, as well as giving them the benefit of faster cloning and data refreshes. That was just a day-to-day operational capability that they had no idea we could offer them.
It goes back to working with the business and letting them know what you can do. From a day-to-day, practical perspective, that was one of our biggest wins: going to specific business units and application owners and saying, "We think we have a better way. What do you think about this?" Once they got their hands on it, just looking at their faces was really a good moment for us.
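The 10-second replicas Frye describes came from SRM and array-based replication, which have their own interfaces; as a rough stand-in for the idea, here is a hedged pyVmomi sketch of a linked clone, which likewise avoids copying data by sharing the parent's disks through a snapshot. All names are hypothetical, and the source VM must already have a snapshot.

from pyVmomi import vim

content = si.RetrieveContent()  # "si" as in the earlier sketches
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine, vim.Folder, vim.ResourcePool], True)
by_name = {obj.name: obj for obj in view.view}
src = by_name['erp-db-01']  # hypothetical source VM with an existing snapshot

# A linked clone writes changes to a child disk, so it is ready in seconds
# instead of waiting for a full copy of the data.
spec = vim.vm.CloneSpec(
    snapshot=src.snapshot.currentSnapshot,
    location=vim.vm.RelocateSpec(
        pool=by_name['dr-test-pool'],
        diskMoveType='createNewChildDiskBacking'),
    powerOn=True)
task = src.CloneVM_Task(folder=by_name['dr-test-folder'],
                        name='erp-db-01-drtest', spec=spec)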
Gardner: Where do you go next with your virtualization payoff?
Leeper: We consider ourselves to have a private cloud on-site. My team will probably laugh at me for using that term, but we do believe we have a very flexible and dynamic on-premises environment to deploy from, based on business requests, and we're pretty proud of that. It works pretty well for us.
Where we go next is all over the place. One of the things we're pretty happy about is the fact that we can think about things a little differently now than probably a lot of our peers, because of how migratory our workloads can be, given the virtualization.
We started looking into things like hybrid cloud approaches and the idea of maybe moving some of our workloads out of our premises, our own data facilities, to a cloud provider somewhere else.
For us, that's not necessarily the classic public-cloud discussion around scalability and those sorts of things. For us, it's about temporary space at times. If we're, say, moving an office where we have physical equipment on-premises, we want to be able to provide zero downtime.
It would be nice to be able to shut down their physical equipment, move their data and workloads up to a temporary spot for four or five weeks, and then bring them back at some point, with users never seeing an outage while they're working from home or on the road.
There are some interesting scenarios around significant DR for us in locations where we don't have real-time DR set up. For instance, a year or so ago, when Japan was unfortunately dealing with the earthquake and the tsunami's impact on power, we were looking into some issues there.
We were looking at how we could move our data out of the country for a period of time, while the infrastructure, specifically power, was stabilizing, and then bring it back when things settled down again.
Unfortunately, we weren't quite virtual at the edge there yet, but today we think that's something we could do. Thinking about how and where we move data so that it's in the right place at the right time is where we think the next big win is for us.
Then we get into the application profiles that users are asking for and their ability to spin up environments very quickly just to test something. It gets IT out of being the roadblock to innovation. A lot of times, the business or part of our innovation teams come up with an idea for a concept, an application, or whatever it is. They don't have to wait for IT to fulfill their needs. The environments are right there for them.
So I challenge the teams routinely to think a little bit differently about how we've done things in the past, because our architecture is dramatically different than it was even two years ago.
- Ocean Observatories Initiative: Cloud and Big Data come together to give scientists unprecedented access to essential climate insights
- Case Study: Strategic Approach to Disaster Recovery and Data Lifecycle Management Pays Off for Australia's SAI Global
- Virtualization Simplifies Disaster Recovery for Insurance Broker Myron Steves While Delivering Efficiency and Agility Gains Too
- SAP Runs VMware to Provision Virtual Machines to Support Complex Training Courses
- Case Study: How SEGA Europe Uses VMware to Standardize Cloud Environment for Globally Distributed Game Development
- Germany's Largest Travel Agency Starts a Virtual Journey to Get Branch Office IT Under Control