
How to Achieve Optimal Availability for Microsoft Exchange

How many times do you check your email each hour? Recent studies have shown that the average worker checks email once every 15 minutes, with some users checking email as often as 40 times per hour. In addition, growing use of iPhones, BlackBerrys and similar email-enabled mobile devices means that employees have become attached to their email at all times, with some checking their device as soon as each email arrives. Now that email has evolved into a must-have business communications tool, employees have come to expect access to their email 24x7, with very little tolerance for downtime.

Meeting the “always on” expectations of employees creates challenges for the IT administrator. Service-level agreements (SLAs) are increasingly stringent and demanding as users require non-stop access to email and other collaborative features of Microsoft Exchange. Exchange availability is paramount, as is protecting the integrity of your Exchange data. To maintain Exchange availability, every component of the Exchange infrastructure needs to be considered. You can protect your mailbox server to the highest degree, but if your DNS server fails, the Exchange server may not be accessible.

To help your company protect its Exchange environment, Marathon has developed a series of steps for achieving optimal Exchange availability. The tips are designed to help you identify the availability levels needed to meet your Exchange SLA commitments with fewer resources and at lower cost.

Define Availability Objectives
Creating availability objectives is an important first step in formulating Exchange protection strategies. This is typically done by establishing, for your Exchange environment, a Recovery Time Objective (RTO), the time it takes to get an application running again after a failure, and a Recovery Point Objective (RPO), the point in time to which data can be recovered.

RTO and RPO baselines establish the SLAs you commit to for the overall company, business units, or specific internal groups. You may even have different Exchange SLAs for different users within your company. For example, you may have an executive group that requires 24x7 email access, while the rest of the company can withstand Exchange downtime of up to one hour. In addition, consideration should be given to what level of protection is needed for the other components of your Exchange infrastructure, such as Active Directory and DNS servers.
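An SLA availability percentage translates directly into an annual downtime budget, which is a useful sanity check when setting RTO baselines for each user group. A minimal sketch of that arithmetic (the function name and the sample figures are illustrative, not from the post):

```python
HOURS_PER_YEAR = 365 * 24  # 8760 hours in a non-leap year

def downtime_budget_hours(availability_pct: float) -> float:
    """Maximum annual downtime, in hours, permitted by an availability SLA."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

# "Three nines" allows roughly 8.76 hours of downtime per year;
# "five nines" allows only a few minutes.
print(round(downtime_budget_hours(99.9), 2))
print(round(downtime_budget_hours(99.999), 3))
```

An executive group promised 99.999% uptime has a downtime budget measured in minutes per year, which immediately signals the need for a higher protection level than a group with a one-hour-outage tolerance.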

Understanding the Levels of Availability
There are multiple levels of availability to consider for different applications and their support infrastructures, starting with basic failover and recovery, moving up to high availability, and all the way to continuous availability for extremely transaction-sensitive applications.

1. The Recovery level is for applications for which a recovery time (RTO) of a day or more is often acceptable. Some downtime is tolerable, and even significant downtime won’t have a detrimental effect on the business. Guaranteed recovery times are not a requirement.

2. The High Availability level is the home of the majority of applications that run the business, such as email, CRM, financial systems, and databases. These are systems with high downtime costs, and therefore short RTO requirements. These applications require assurances that they will not be down for extended periods should failures occur.

3. The highest level of availability is Continuous Availability, in which even brief moments of downtime or a single lost transaction can be extremely detrimental and/or costly to the client or business.
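The three levels above can be captured as a simple lookup that maps each tier to an RTO ceiling. This is a hypothetical sketch for planning purposes; the enum names and minute values are illustrative, not figures from the post, and real targets come from your own SLAs:

```python
from enum import Enum

class AvailabilityLevel(Enum):
    RECOVERY = "recovery"                    # a day or more of downtime may be acceptable
    HIGH_AVAILABILITY = "high_availability"  # short outages only; most business apps
    CONTINUOUS = "continuous"                # no downtime, no lost transactions

# Illustrative RTO ceilings in minutes -- placeholders, not prescriptions.
MAX_RTO_MINUTES = {
    AvailabilityLevel.RECOVERY: 24 * 60,
    AvailabilityLevel.HIGH_AVAILABILITY: 60,
    AvailabilityLevel.CONTINUOUS: 0,
}

def meets_sla(level: AvailabilityLevel, required_rto_minutes: int) -> bool:
    """True if the given protection level can satisfy the required RTO."""
    return MAX_RTO_MINUTES[level] <= required_rto_minutes
```

A table like this makes it easy to check, for each component of the Exchange infrastructure, whether its assigned protection level can actually satisfy the RTO its users were promised.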

As you establish availability objectives for different groups of Exchange users, you need to consider the protection requirements for your entire Exchange infrastructure, beyond just the mailbox server. You will need to protect all of the components of the Exchange environment, in addition to the different workloads deployed on the mailbox server. Also, don’t forget that the way your company uses Exchange today might change in the future. You may use Exchange today for general correspondence, but within the next year you may plan to use email to process orders. This adds to the need for multiple levels of availability to assign to the components of the Exchange infrastructure and to Exchange user groups. Additionally, you’ll need flexibility to change those levels as your business changes.

Assigning Levels of Availability to Exchange Environments
A meaningful exercise to undertake is to apply various levels of protection to your Exchange infrastructure based on your SLA commitments. First look at the users and their requirements for Exchange access. Do you have a single SLA in place for all users, or do you have multiple user groups with different SLAs? If you have a single SLA in place company-wide, you can deploy those users in workloads based on email usage and assign them a single level of protection. However, if you have different SLAs for different business groups, you can divide those into multiple workgroups on the mailbox server based on their SLA requirements.

For example, if you have an executive group that needs 24x7 uptime, then you should consolidate those executives in a dedicated Exchange workload and assign a level of protection that provides continuous availability. Salespeople can often fall into this category as well, requiring non-stop access to email and Exchange collaboration features. Other employees may have less stringent SLAs in place and would require a lower level of protection.
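The grouping exercise above amounts to partitioning users by SLA tier before assigning each partition to a workload. A trivial sketch, where the user names and tier labels are invented for illustration:

```python
from collections import defaultdict

# Hypothetical directory export: (user, sla_tier) pairs -- invented examples.
users = [
    ("exec_1", "continuous"),
    ("sales_rep_1", "continuous"),
    ("engineer_1", "high_availability"),
    ("contractor_1", "recovery"),
]

def group_by_sla(user_records):
    """Partition users into candidate Exchange workloads by SLA tier."""
    workloads = defaultdict(list)
    for name, tier in user_records:
        workloads[tier].append(name)
    return dict(workloads)

print(group_by_sla(users))
```

Each resulting group then maps onto a dedicated workload with a single, appropriate protection level, rather than over-protecting everyone to the strictest SLA.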

It is also important to keep the supporting components of Exchange, including the DHCP server, DNS server and Active Directory server, up and running. If one or more of these components goes down and the IT administrator must intervene manually, the resulting downtime could be excessive and exceed your SLAs. Automatic recovery from failures enables you to keep the Exchange environment operating to meet your SLA commitments. Assigning the supporting systems, including the DNS, DHCP, and Active Directory servers, a level of protection equivalent to that needed to meet your Exchange SLAs is as important as protecting the Exchange servers themselves. Any single point of failure could bring down a well-protected Exchange server.

For remote employees and “road warriors,” your company may also have a BlackBerry Enterprise Server (BES) and/or Client Access Server (CAS) implementation to serve as a secondary or backup method for remote email access. The BES and CAS implementations should be protected to the level you require based on your remote email access strategy and user SLAs.

Establishing RTO and RPO for SLA commitments, determining the right level of availability protection to meet these commitments, and protecting all components necessary to support an Exchange environment will help create a robust and reliable messaging system.

For an even more detailed look at Marathon’s approach to Exchange high availability, download our “Optimizing Exchange High Availability - A New Approach” white paper or our complete Exchange 2007 High Availability Toolkit.


More Stories By Jerry Melnick

Jerry Melnick ([email protected]) is responsible for defining corporate strategy and operations at SIOS Technology Corp. (www.us.sios.com), maker of SIOS SAN and #SANLess cluster software (www.clustersyourway.com). He has more than 25 years of experience in the enterprise and high availability software industries. He holds a Bachelor of Science degree from Beloit College with graduate work in Computer Engineering and Computer Science at Boston University.
