By Keith Mayer
August 29, 2012 07:00 AM EDT
In my Private Cloud talks with IT Pros, one recurring hot topic has been how best to virtualize Microsoft Exchange workloads, regardless of the underlying hypervisor. At Microsoft TechEd 2012, Jeff Mealiffe, the Senior Program Manager on the Exchange team responsible for Exchange virtualization guidance, delivered a great session on "Best Practices for Virtualizing Microsoft Exchange Server 2010". It's an excellent resource to study when planning virtualized Exchange deployments! Below, I've included the following items to help you get started with virtualizing Exchange:
- The recorded session video
- A link to the downloadable slide deck
- An indexed recap of my session notes
- Links to additional tools and resources that I've personally found helpful
I'm definitely looking forward to building my next virtualized deployment of Exchange 2010 on Windows Server 2012 RTM when it releases on Sept 4th! The increased VM resource densities in Hyper-V v3 of up to 64 virtual processors and 1TB RAM per virtual machine will be a big boost to virtualizing mission-critical, heavy-duty workloads like Exchange mailbox server roles and multi-role servers.
Download a copy of the session slide deck.
Session Notes with Video Index
- Supported Exchange Virtualization Scenarios [ 4:00 ]
- Exchange 2010 SP1 or later
- Hyper-V or any hypervisor in the Server Virtualization Validation Program (SVVP) - link provided below.
- Items Not Supported when Virtualizing Exchange [ 7:00 ]
- Hypervisor snapshots
- Differencing / Delta disks
- CPU oversubscription in a ratio > 2:1
- Applications running on the parent / root partition
- VSS backups of VMs from root
- NAS storage of virtual disk files
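The 2:1 oversubscription limit above lends itself to a quick sanity check. Here's a minimal sketch (in Python, using hypothetical host and vCPU counts) that flags a host whose allocated virtual processors exceed the supported 2:1 vCPU-to-physical-core ratio:

```python
def oversubscription_ratio(total_vcpus: int, physical_cores: int) -> float:
    """Ratio of virtual processors allocated to physical cores on a host."""
    return total_vcpus / physical_cores

# Hypothetical host: 16 physical cores, VMs totaling 40 vCPUs
ratio = oversubscription_ratio(40, 16)
print(f"{ratio:.2f}:1")  # 2.50:1
print("supported" if ratio <= 2.0 else "exceeds supported 2:1 limit")
```

Anything over 2.0 is out of support for Exchange VMs, per the session guidance.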
- JetStress Support in Virtualized Environments [ 12:10 ]
- Supported in VMs on Microsoft Windows Server 2008 R2 or later
- Supported in VMs on Microsoft Hyper-V Server 2008 R2 or later
- Supported in VMs on VMware ESX 4.1 or later
- More Info - http://bit.ly/HP8G0f
- Big Problems to Avoid for Production Exchange VMs [ 17:53 ]
- Dynamic Memory / Memory Overcommit [ 18:00 ]
- VM Snapshots [ 31:57 ]
- CPU Oversubscription [ 35:05 ]
- Overview of Best Practices [ 38:05 ]
- Hypervisor adds CPU overhead - 10-12% in our Exchange 2010 tests [ 39:22 ]
- Size for physical and provide those resources to each VM [ 40:31 ]
- Exchange is architected for scale-out scenarios, avoid "all eggs in one basket" [ 40:48 ]
- Resource Sizing [ 43:16 ]
- Start with physical sizing process - use calculator (listed below)
- Account for virtualization overhead (10-12%)
- Determine VM placement to account for HA
- Size root servers, storage and network infrastructure
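The sizing steps above boil down to simple arithmetic: start from the physical sizing result, then inflate it by the measured hypervisor overhead. A minimal sketch (Python; the function name and example figures are illustrative, not from the session):

```python
def size_with_overhead(physical_cpu_req: float, overhead: float = 0.12) -> float:
    """Inflate a physical-sizing CPU requirement by hypervisor overhead.

    The session cites 10-12% overhead in the Exchange 2010 tests, so
    0.12 is used here as the conservative default.
    """
    return physical_cpu_req * (1.0 + overhead)

# Hypothetical mailbox VM sized at 8 cores of physical capacity:
needed = size_with_overhead(8)  # ~8.96 -> round up to at least 9 cores
print(needed)
```

The same adjustment applies when sizing root servers, storage, and network throughput for the consolidated VMs.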
- Guest VM sizing [ 47:18 ]
- Size Mailbox role first - other role sizes factored from Mailbox server requirements
- Considerations for use of multi-role servers - Mailbox, Hub and CAS roles on a single VM
- Unified Messaging Sizing [ 49:08 ]
- Min 4 Virtual Processors (VP)
- VM with 4VP & 16GB memory can handle 40 concurrent calls with Voice Mail Preview (65 calls without)
- Storage Decisions [ 52:47 ]
- Keep Exchange data on physical storage separate from the Guest OS virtual disk
- Must be fixed virtual disk, SCSI pass-through (RDM) or iSCSI (terminated at host or guest)
- SCSI pass-through (RDM) recommended to host queues, DBs and logfile streams unless using Hyper-V Live Migration where CSV is recommended
- Must be block-level storage - NAS volumes not supported
- Virtual Processors [ 56:04 ]
- Prefer smaller number of multi-core VMs vs many single-core VMs
- Don't assume that a hyperthreaded (SMT) CPU is a full CPU core
- Private Cloud [ 57:08 ]
- Good model for providing virtual infrastructure resources to Exchange, but be careful with "dynamic" cloud capabilities
- Be prepared to apply different resource management policies to Exchange VMs
- Host-based Failover Clustering [ 59:41 ]
- Not an "Exchange Aware" HA Solution - Does not provide HA in the event of storage failure / data corruption
- If using, combine with DAG when possible to provide maximum HA - Admin can re-balance DAG after failover to redistribute
- VM Live Migration and Exchange [ 01:04:50 ]
- DAG does not need to be dynamically re-balanced
- Use CSV rather than pass-through LUNS for all Mailbox VM storage
- Consider relaxing cluster heartbeat timeouts (5 seconds = default, 30 seconds = max recommended)
- Size network appropriately for Live Migration
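The heartbeat numbers above come from multiplying the delay between heartbeats by the missed-heartbeat threshold. A small sketch of that arithmetic (Python; the property names in the comments refer to the Windows failover clustering settings, cited here as an assumption about where these values live):

```python
def heartbeat_timeout_seconds(delay_ms: int, threshold: int) -> float:
    """Failover-cluster heartbeat timeout: delay between heartbeats
    multiplied by the number of missed heartbeats tolerated."""
    return delay_ms * threshold / 1000.0

# Defaults (SameSubnetDelay=1000 ms, SameSubnetThreshold=5) give the
# 5-second timeout noted above:
print(heartbeat_timeout_seconds(1000, 5))   # 5.0
# Relaxed for Live Migration, staying at the 30-second recommended max:
print(heartbeat_timeout_seconds(2000, 15))  # 30.0
```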
- VM Placement [ 01:08:26 ]
- Don't co-locate DAG database copies on same physical hosts
- Distribute VMs running same roles to different physical hosts
- If not using multi-role VMs, consider isolating Mailbox and Hub role VMs on separate physical hosts
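The placement rules above can be validated programmatically once you know which VM runs on which host and which VMs hold copies of each DAG database. A minimal sketch (Python; the VM, host, and database names are hypothetical):

```python
def placement_conflicts(vm_to_host: dict[str, str],
                        dag_copies: dict[str, list[str]]) -> list[str]:
    """Return databases whose DAG copies land on the same physical host,
    violating the co-location guidance above."""
    conflicts = []
    for db, vms in dag_copies.items():
        hosts = [vm_to_host[vm] for vm in vms]
        if len(hosts) != len(set(hosts)):  # duplicate host = conflict
            conflicts.append(db)
    return conflicts

# Hypothetical layout: both copies of DB1 sit on VMs hosted by HOST-A
vm_to_host = {"MBX1": "HOST-A", "MBX2": "HOST-A", "MBX3": "HOST-B"}
dag_copies = {"DB1": ["MBX1", "MBX2"], "DB2": ["MBX1", "MBX3"]}
print(placement_conflicts(vm_to_host, dag_copies))  # ['DB1']
```

A host failure in a conflicting layout takes down every copy of that database at once, which is exactly the "all eggs in one basket" scenario the session warns against.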
Additional Tools and Resources
- Exchange virtualization supportability guidance - http://technet.microsoft.com/en-us/library/jj126252.aspx
- Understanding Exchange Performance - http://technet.microsoft.com/en-us/library/dd351192
- Exchange 2010 Mailbox Server Role Requirements Calculator - http://blogs.technet.com/b/exchange/archive/2010/01/22/updates-to-the-exchange-2010-mailbox-server-role-requirements-calculator.aspx
- Exchange JetStress and Load Generator Tools - http://technet.microsoft.com/en-us/library/dd335108
- Server Virtualization Validation Program - http://www.windowsservercatalog.com/svvp/
- Exchange 2010 Tested OEM Solutions (on Hyper-V)
- HP Configurations
- DELL Configurations
- Unisys Configurations
- Unisys ES7000 Servers for 15,000 users: http://bit.ly/kOBSuo
- EMC Configurations
- EMC Unified Storage and Cisco Unified Computing System for 32,000 users - http://bit.ly/9DBfoB
Build Your Lab! Download Windows Server 2012
Don't Have a Lab? Build Your Lab in the Cloud with Windows Azure Virtual Machines
Want to Get Certified? Join our Windows Server 2012 "Early Experts" Study Group
Do IT Pros ROCK? Please VOTE for us!