How to Achieve Optimal Availability for Microsoft Exchange

How many times do you check your email each hour? Recent studies have shown that the average worker checks email once every 15 minutes, with some users checking email as often as 40 times per hour. In addition, growing use of iPhones, BlackBerrys and similar email-enabled mobile devices means that employees have become attached to their email at all times, with some checking their device as soon as each email arrives. Now that email has evolved into a must-have business communications tool, employees have come to expect access to their email 24x7, with very little tolerance for downtime.

Meeting the “always on” expectations of employees creates challenges for the IT administrator. Service-level agreements (SLAs) are increasingly stringent and demanding as users require non-stop access to email and the other collaborative features of Microsoft Exchange. Exchange availability is paramount, as is protecting the integrity of your Exchange data. To maintain Exchange availability, every component of the Exchange infrastructure needs to be considered: you can protect your mailbox server to the highest degree, but if your DNS server fails, the Exchange server may not be accessible.

To help your company protect its Exchange environment, Marathon has developed a series of steps for achieving optimal Exchange availability. The tips are designed to help identify what availability levels should be designated in order to achieve Exchange SLA commitments with fewer resources and lower costs.

Define Availability Objectives
Creating availability objectives is an important first step in formulating Exchange protection strategies. This is typically done by establishing, for your Exchange environment, Recovery Time Objectives (RTO), the maximum acceptable time before an application is running again after a failure, and Recovery Point Objectives (RPO), the point in time to which data must be recoverable when a failure occurs.
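
To make these objectives concrete, it helps to remember that the arithmetic behind a downtime budget is simple. Below is a minimal Python sketch that converts an availability percentage into allowed downtime per year and checks a single outage against an RTO; the figures are illustrative assumptions, not targets from this post.

```python
# Minimal sketch: translate an availability target into a downtime budget
# and check one outage against an RTO. All figures are illustrative.

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_budget_minutes(availability_pct: float) -> float:
    """Allowed downtime per year for a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

def meets_rto(outage_minutes: float, rto_minutes: float) -> bool:
    """True if a single outage was resolved within the Recovery Time Objective."""
    return outage_minutes <= rto_minutes

print(f"99.9%  -> {downtime_budget_minutes(99.9):.1f} min/year")   # ~525.6
print(f"99.99% -> {downtime_budget_minutes(99.99):.1f} min/year")  # ~52.6
print(meets_rto(outage_minutes=45, rto_minutes=60))                # True
```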

RTO and RPO baselines establish the SLAs you commit to for the overall company, business units, or specific internal groups. You may even have different Exchange SLAs for different users within your company. For example, you may have an executive group that requires 24x7 email access, while the rest of the company can withstand Exchange downtime of up to one hour. In addition, consideration should be given to what level of protection is needed for the other components of your Exchange infrastructure, such as Active Directory and DNS servers.
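
One way to keep track of per-group commitments is simply to record each group's RTO and RPO side by side. A minimal sketch, with hypothetical group names and figures:

```python
# Illustrative sketch: different RTO/RPO targets per user group.
# The group names and numbers are hypothetical examples only.
from dataclasses import dataclass

@dataclass(frozen=True)
class ExchangeSla:
    group: str
    rto_minutes: int  # how quickly service must be restored
    rpo_minutes: int  # how much recent mail loss is tolerable

slas = [
    ExchangeSla("executives",    rto_minutes=0,  rpo_minutes=0),   # 24x7
    ExchangeSla("sales",         rto_minutes=5,  rpo_minutes=0),
    ExchangeSla("general staff", rto_minutes=60, rpo_minutes=15),
]

for sla in slas:
    print(f"{sla.group}: restore within {sla.rto_minutes} min, "
          f"lose at most {sla.rpo_minutes} min of mail")
```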

Understanding the Levels of Availability
There are multiple levels of availability to consider for different applications and their supporting infrastructure, starting with basic failover and recovery, moving up to high availability, and all the way to continuous availability for extremely transaction-sensitive applications. A simple sketch mapping these levels to recovery time objectives follows the list below.

1. The Recovery level is for applications for which a recovery time (RTO) of a day or more is often acceptable. Some downtime is tolerable, and even significant downtime won’t have a detrimental effect on the business. Assurances that recovery will happen are not a requirement.

2. The High Availability level is the home of the majority of applications that run the business, such as email, CRM, financial systems, and databases. These are systems with high downtime costs, and therefore short RTO requirements. These applications require assurances that they will not be down for extended periods should failures occur.

3. The highest level of availability is Continuous Availability in which even brief moments of downtime or a single lost transaction can be extremely detrimental and/or costly to the client or business.
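
As promised, here is one simple way to express these levels in code. The classification below keys off RTO alone, and the cut-off values are assumptions for illustration; only the level names and their rough meaning come from the descriptions above.

```python
# Sketch: map an application's RTO onto the three availability levels
# described above. Cut-off values are illustrative assumptions.

def availability_level(rto_minutes: float) -> str:
    if rto_minutes == 0:
        return "Continuous Availability"  # even brief downtime is too costly
    if rto_minutes < 24 * 60:
        return "High Availability"        # high downtime cost, short RTO
    return "Recovery"                     # a day or more is acceptable

print(availability_level(0))      # Continuous Availability
print(availability_level(30))     # High Availability
print(availability_level(2880))   # Recovery
```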

As you establish availability objectives for different groups of Exchange users, you need to consider the protection requirements for your entire Exchange infrastructure, beyond just the mailbox server. You will need to protect all of the components of the Exchange environment, in addition to the different workloads deployed on the mailbox server. Also, don’t forget that the way your company uses Exchange today might change in the future: you may use Exchange today for general correspondence, but within the next year you may plan to use email to process orders. This underscores the need for multiple levels of availability that can be assigned to the components of the Exchange infrastructure and to Exchange user groups. Additionally, you’ll need the flexibility to change those levels as your business changes.

Assigning Levels of Availability to Exchange Environments
A meaningful exercise to undertake is to apply various levels of protection to your Exchange infrastructure based on your SLA commitments. First look at the users and their requirements for Exchange access. Do you have a single SLA in place for all users, or do you have multiple user groups with different SLAs? If you have a single SLA in place company-wide, you can deploy those users in workloads based on email usage and assign them a single level of protection. However, if you have different SLAs for different business groups, you can divide those users into multiple workgroups on the mailbox server based on their SLA requirements.
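
In code terms, this partitioning is just a grouping step. A minimal sketch, with hypothetical mailboxes and SLA levels:

```python
# Sketch: divide mailboxes into workloads on the mailbox server by SLA
# level, so each workload can receive its own level of protection.
# The mailboxes and levels are hypothetical.
from collections import defaultdict

users = [
    ("ceo@example.com",    "continuous"),
    ("vp@example.com",     "continuous"),
    ("rep1@example.com",   "high"),
    ("staff1@example.com", "recovery"),
]

workloads = defaultdict(list)
for mailbox, sla_level in users:
    workloads[sla_level].append(mailbox)

for level, mailboxes in sorted(workloads.items()):
    print(f"workload '{level}': {mailboxes}")
```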

For example, if you have an executive group that needs 24x7 uptime, you should consolidate those executives in a dedicated Exchange workload and assign a level of protection that provides continuous availability. Salespeople often fall into this category as well, requiring non-stop access to email and Exchange collaboration features. Other employees may have less stringent SLAs in place and would require a lower level of protection.

It is also important to keep the supporting components of the Exchange environment, including the DHCP server, DNS server, and Active Directory server, up and running. If one or more of these components goes down and the IT administrator has to intervene manually, the resulting downtime could be excessive and exceed your Exchange SLAs. Automatic recovery from failures keeps the Exchange environment operating to meet your SLA commitments. Assigning the supporting systems, including the DNS, DHCP, and Active Directory servers, a level of protection equivalent to what your Exchange SLAs require is as important as protecting the actual Exchange servers; any single point of failure can bring down a well-protected Exchange server.
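
Even a crude reachability probe can surface a single point of failure before users do. A minimal sketch follows; the hostnames are placeholders, and a TCP connect test is no substitute for real monitoring or for the automatic recovery discussed above.

```python
# Sketch: basic reachability checks for services an Exchange server
# depends on. Hostnames are placeholders; production monitoring would
# go well beyond a TCP connect test.
import socket

CHECKS = [
    ("dns.example.local",  53,  "DNS"),
    ("dc1.example.local",  389, "Active Directory (LDAP)"),
    ("mail.example.local", 25,  "Exchange SMTP"),
]

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port, name in CHECKS:
    status = "up" if reachable(host, port) else "DOWN"
    print(f"{name} ({host}:{port}): {status}")
```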

For remote employees and “road warriors”, your company may also have a BlackBerry Enterprise Server (BES) and/or Client Access Server (CAS) implementation, to serve as a secondary or backup method for remote email access. The BES and CAS implementations should be protected to the level you require based on your remote email access strategy and user SLAs.

Establishing RTO and RPO for SLA commitments, determining the right level of availability protection to meet those commitments, and protecting all components necessary to support an Exchange environment will help create a robust and reliable messaging system.

For an even more detailed look at Marathon’s approach to Exchange high availability, download our “Optimizing Exchange High Availability - A New Approach” white paper or our complete Exchange 2007 High Availability Toolkit.

More Stories By Jerry Melnick

Jerry Melnick ([email protected]) is responsible for defining corporate strategy and operations at SIOS Technology Corp. (www.us.sios.com), maker of SIOS SAN and #SANLess cluster software (www.clustersyourway.com). He has more than 25 years of experience in the enterprise and high availability software industries. He holds a Bachelor of Science degree from Beloit College, with graduate work in Computer Engineering and Computer Science at Boston University.
