Troubleshooting Outlook Connectivity in Exchange 2013 and 2016
Some of the most common and longest-running support calls we see are related to Outlook connectivity to Exchange. These issues take several forms, but the most common are clients being prompted for credentials, clients being disconnected, and clients being unable to connect. Each of these can be broken down further, such as clients being intermittently prompted or Outlook freezing or becoming unresponsive. There are so many facets to these scenarios that it is difficult to cover every one, and each can take a different troubleshooting path. In this particular post we focus on clients that are disconnected, freezing, or unresponsive, which in certain cases can also include intermittent prompts for credentials.
In every case, it's best to spend some time understanding what the user is actually experiencing. Understanding the full client experience and its effect can help determine the logging and troubleshooting path. When you define the user's issue, you should also run the Microsoft Office Configuration Analyzer Tool (OffCAT) on the client to get a clear view of the configuration and anything in it that could impact the user's experience.
Client Configuration
Here are some examples of things to consider when troubleshooting these issues.
1. Are the clients connecting with Outlook Anywhere or MAPI/HTTP? Use the Outlook "Connection Status" window to get more information on the user's connection: hold down the CTRL key, click the Outlook icon in the system tray, and open "Connection Status". Here you can see the protocol, the connection type, and so on. RPC/HTTP indicates Outlook Anywhere, whereas HTTP indicates MAPI/HTTP.
2. Are they online or cached mode?
3. What devices are the clients traversing to reach the CAS? Run Tracert from the client to the CAS to determine the devices in the path (see the sample commands after this list).
4. What is the user attempting to do when they experience issues? Accessing a calendar, public
folders or just moving around in the Inbox.
5. Are the affected mailboxes on the same database or the same server?
6. Outlook full build number. Always make sure that you are working with an updated client. Information about recent updates can be found in the following references:
http://support.microsoft.com/en-us/kb/2625547/en-us#2010
http://social.technet.microsoft.com/wiki/contents/articles/31133.outlook-and-outlook-for-mac-update-build-numbers.aspx
7. Using the OffCAT output (requested earlier in this blog), examine which add-ins are being used and consider disabling them for testing. Make sure that you restart Outlook and verify that the add-ins remain disabled.
8. Consider moving extremely important or heavily affected users to cached mode, which reduces the impact of connectivity interruptions on their experience.
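As a quick sketch of the path check mentioned in item 3, the following can be run from an affected client (Mail.Contoso.com is a placeholder for your own Exchange namespace):
tracert Mail.Contoso.com
Test-NetConnection -ComputerName Mail.Contoso.com -Port 443
The first command lists the devices between the client and the CAS; the second confirms that the HTTPS port used by Outlook Anywhere and MAPI/HTTP is reachable from that client.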
CAS Configuration
The first step for the Exchange server checks is to review all the settings noted in the TechNet article Exchange 2013 Sizing and Configuration Recommendations. Several items, such as setting power management to "High Performance", verifying the OS isn't turning off power to the NIC, applying .NET updates, and so on, are extremely important. Each setting is within Microsoft best practices guidelines and is considered necessary to enable optimal Exchange performance.
1. Run the Exchange Performance Health Checker script and review the output for any settings that need to be corrected.
2. Exchange 2013 should be running .NET 4.5.2
3. Turn off hyper-threading
4. Update outdated clients https://support.microsoft.com/en-us/kb/2625547#2010
5. Update NIC drivers
6. Eliminate any slow disk issues identified in Perfmon
7. Set Power Management to “High Performance”
8. In the NIC properties, click Configure | Power Management and verify that "Allow the computer to turn off this device to save power" is unchecked.
9. Verify that all the noted hotfixes are installed from Exchange 2013 Sizing and Configuration
Recommendations.
10. CAS Keep-Alive values should be set to 30 minutes and no less than 15 minutes. If there is no KeepAliveTime entry in the registry, the default value is 2 hours. If this value is not set correctly, it can affect both connectivity and performance, as noted in the KB article "Unable to connect using Exchange ActiveSync due to Exchange resource consumption". You must also make sure that the load balancer and any other devices in the path from client to CAS are set correctly; more information on load balancer settings is listed later in this document in the Load Balancer Configuration section. The goal is to give CAS the lowest value so that client sessions are ended by CAS and not by an intermediate device. A sample command for setting this value follows this list.
Path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\
Value name: KeepAliveTime
Value type: REG_DWORD
Value data: 1800000 (30 minutes, in milliseconds) (Decimal)
11. Max core count - In Exchange 2013 and 2016, you can encounter performance problems if you stray too far from the preferred architecture, particularly with regard to core count. This can even include having too many cores: the maximum number of cores in a server should be no more than 24. Hyper-Threading can artificially inflate this value, so it's important to disable it as mentioned in step 3. For more information refer to the following articles:
Troubleshooting High CPU utilization issues in Exchange 2013
http://blogs.technet.com/b/exchange/archive/2015/04/30/troubleshooting-high-cpu-utilization-issues-in-
exchange-2013.aspx
Ask the Perf Guy: How big is too BIG?
http://blogs.technet.com/b/exchange/archive/2015/06/19/ask-the-perf-guy-how-big-is-too-big.aspx
12. Preferred Architecture - http://blogs.technet.com/b/exchange/archive/2014/04/21/the-preferred-architecture.aspx
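A sample command for step 10, setting the KeepAliveTime value from an elevated PowerShell prompt. This is only a sketch: 1800000 milliseconds corresponds to the 30-minute value discussed above, and a restart is typically required for the change to take effect.
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters' -Name KeepAliveTime -PropertyType DWord -Value 1800000 -Force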
Load Balancer Configuration
1. Verify the client TCP idle time-out is a slightly larger value than the Keep-Alive setting on CAS noted in step 10 of the CAS Configuration section earlier.
In this example, we are using the 30-minute Keep Alive on CAS and we have both a firewall and
load balancer in front of the clients. Here is the connection path.
Clients → Firewall → Load Balancer → CAS
In this example, if you have a firewall in the path from client to CAS, we are referencing the firewall "idle" time-out and not the persistence time-out. This value should be greater than the load balancer time-out, and the load balancer time-out should be greater than the CAS value. Note that it is not recommended to go below 15 minutes for the Keep-Alive on CAS or the TCP idle time-out on the load balancer.
Firewall time out = 40 minutes
LB TCP Idle time out = 35 minutes
CAS Keep Alive = 30 minutes
2. If the load balancer supports it, the preferred option is to configure it to use "Least Connections" with "Slow Start" during typical operation. Be aware that least connections can cause another CAS to become overloaded during patching or maintenance. Round robin could be used during troubleshooting, but it would not be the optimal solution because its slow convergence causes recovery delays when clients try to reconnect. The TechNet Exchange 2013 Sizing and Configuration Recommendations describes the differences as:
A hardware or software load balancer should be used to manage all inbound traffic to
Client Access servers. The selection of the target server can be determined with methods
such as “round-robin,” in which each inbound connection goes to the next target server
in a circular list, or with “least connections,” in which the load balancer sends each new
connection to the server that has the fewest established connections at that time. These
methods are detailed further in the following blog Load Balancing in Exchange 2013 and
TechNet Load balancing.
3. For the ActiveSync persistence setting, set the load balancer to use the "Authorization header cookie" rather than source IP, because source IP persistence can send all of the connections to one server and overload that CAS. http://blogs.technet.com/b/mikehall/archive/2012/09/05/why-the-correct-load-balancing-persistence-is-so-important-in-exchange-server-2010.aspx
Virtual Server Configuration
1. Verify that the server is updated to avoid the known issues with packet loss.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&ext
ernalId=2039495
2. VMXNET3 network adapters can reset frequently when RSS is enabled on a Windows virtual machine (a quick check for RSS is shown after this list).
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2055853
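As a quick check related to item 2, the following command, run in the guest OS, shows whether RSS is currently enabled on each network adapter. Whether RSS should be disabled depends on the VMware KB article referenced above and on your driver and tools versions, so treat this only as a way to confirm the current state:
Get-NetAdapterRss | Format-Table Name, InterfaceDescription, Enabled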
Additional Troubleshooting Steps
If you have completed the previous steps and are still experiencing issues, collect the following data.
1. Start ExPerfwiz on the CAS and Mailbox servers and let it run for 4 hours during the busiest part of the day. Download ExPerfwiz from http://experfwiz.codeplex.com/. Example command line:
.\experfwiz.ps1 -server MBXServer -interval 10 -filepath D:\Logs
2. Review the Application logs for any 4999 events that occurred around the time of the outage.
3. In the Application log, review the 2080 event to make sure that all Domain Controllers are
responding with the correct Boolean values. If there are any responses that are not accurate,
the DC’s should be repaired or excluded.
The expected values should be "CDG 1 7 7 1 0 1 1 7 1".
For testing, a DC can be excluded by using the Set-ExchangeServer -StaticExcludedDomainControllers parameter on the Exchange server; however, the underlying GC access problem should still be investigated after your testing is completed. A statically excluded GC takes effect immediately and will show in the next 2080 event with all zero values.
The PDC column should show 0, because the PDC should not be used by Exchange: https://support.microsoft.com/en-us/kb/298879
More information for Set-ExchangeServer is located here: http://technet.microsoft.com/en-us/library/bb123716.aspx
More information about Event ID 2080 from MSExchangeDSAccess: http://support.microsoft.com/kb/316300/EN-US
4. While in the Application log, also check for Event ID 2070 or 2095. A 2070 event occurs when Exchange tries to query a DC, fails, and cannot contact or communicate with the DC. If this event occurs during an outage or occurs frequently, it should be investigated. If you only see the event occasionally over several days, it may simply be a result of the DC being restarted during maintenance, and so on. The same is true for 2095: infrequent occurrences aren't a concern, but continued logging of this event could be a sign of a problem. (An example event log query for these event IDs is shown after this list.)
5. Always ensure MaxConcurrentApi bottlenecks are not present in the environment. To avoid this problem now or in the future, review the following information:
You are intermittently prompted for credentials or experience time-outs when you connect to Authenticated Services: http://support.microsoft.com/kb/975363
Updated - NTLM and MaxConcurrentApi Concerns: http://blogs.technet.com/b/ad/archive/2008/09/23/ntlm-and-maxconcurrentapi-concerns.aspx
The following blog details troubleshooting, diagnosing, and tuning issues related to MaxConcurrentApi: http://blogs.technet.com/b/askpfeplat/archive/2014/01/13/quick-reference-troubleshooting-diagnosing-and-tuning-maxconcurrentapi-issues.aspx
6. LDAP latencies can impact server and client performance. When you work on client connectivity issues, the LDAP counters can help pinpoint delays in communication with the DCs. These counters are MSExchange ADAccess Domain Controllers(*)\LDAP Read Time and LDAP Search Time; it is recommended that the average be within 50 ms and that spikes be no greater than 100 ms. (A sample counter collection is shown after this list.) More information on locating issues can be found in http://blogs.technet.com/b/askpfeplat/archive/2015/05/11/how-to-find-expensive-inefficient-and-long-running-ldap-queries-in-active-directory.aspx
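The following is a minimal sketch of how the event and counter data called out in items 2 through 6 could be gathered with PowerShell on an affected server. The event IDs and counter names are the ones referenced above; the sample counts and intervals are arbitrary choices for illustration:
# Pull recent MSExchange ADAccess and crash-related events (IDs 2080, 2070, 2095, 4999) from the Application log
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; Id = 2080, 2070, 2095, 4999 } -MaxEvents 50 |
    Format-List TimeCreated, Id, ProviderName, Message
# Sample the LDAP latency counters (guidance above: average under 50 ms, spikes no greater than 100 ms)
$counters = '\MSExchange ADAccess Domain Controllers(*)\LDAP Read Time',
            '\MSExchange ADAccess Domain Controllers(*)\LDAP Search Time'
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12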
Coexistence of Exchange 2013 with Exchange 2010 and 2007
In order to coexist with Exchange 2013, certain configuration steps are necessary. This section outlines
typical organization changes that are needed to connect through an Exchange 2013 CAS.
1. Verify the legacy servers are at a supported Service Pack and Rollup Update (RU).
Exchange 2010: SP3 RU6 or later. The required RU version will increase as newer supported RUs are released.
Exchange 2007: SP3 RU10 or later. The required RU version will increase as newer supported RUs are released.
The following link was created when Exchange 2013 was released and lists the minimum version requirements; however, newer updates are still required to avoid known issues.
https://technet.microsoft.com/en-us/library/aa996719(v=exchg.150).aspx
2. If Outlook Anywhere is not enabled on the legacy Exchange servers, we recommend that you enable Outlook Anywhere on every CAS in the organization with NTLM for ClientAuthenticationMethod and NTLM and Basic for IISAuthenticationMethods. The external host name should be the DNS name of the Exchange 2013 CAS external URL. (A verification sketch follows this list.)
Example: Enable-OutlookAnywhere -Server 'ConE10' -ExternalHostname 'Mail.Contoso.com' -ClientAuthenticationMethod 'Ntlm' -SSLOffloading $false -IISAuthenticationMethods Basic, NTLM
3. Configure the Exchange 2010 SCP for AutoDiscover to point to the Exchange 2013 CAS. The AutoDiscover SCP is used for internal clients only. In some cases, customers can simply update DNS to point to Exchange 2013; DNS would also have to point AutoDiscover to Exchange 2013 for all external clients. We do not recommend that you use separate URLs for the legacy mailboxes. All connections should go through an Exchange 2013 CAS.
To set the SCP for AutoDiscover:
Example: Set-ClientAccessServer ConE10 -AutoDiscoverServiceInternalUri 'https://Mail.Contoso.com/autodiscover/autodiscover.xml'
4. Verify all legacy CAS are pointed to 2013 for the SCP AutoDiscover URI.
Example: Get-ClientAccessServer |fl *uri*
AutoDiscoverServiceInternalUri : https://Mail.Contoso.com/autodiscover/autodiscover.xml
5. Be aware that Exchange 2007 mailboxes will access EWS and OAB by using "Legacy.Domain.com", as discussed in http://blogs.technet.com/b/exchange/archive/2014/03/12/client-connectivity-in-an-exchange-2013-coexistence-environment.aspx
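As a quick way to verify the Outlook Anywhere settings described in step 2 (this is only a verification sketch; property names vary slightly between Exchange versions, which is why a wildcard is used for the authentication properties):
Get-OutlookAnywhere | Format-List Identity, ExternalHostname, *AuthenticationMethod*
Confirm that NTLM appears in the IIS authentication methods on every legacy CAS; see also known issue 2 in the next section.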
Known issues in Coexistence
1. Mailboxes located on Exchange 2010 are unable to connect through Exchange 2013:
https://support.microsoft.com/en-us/kb/2990117
2. Users may be prompted for credentials when accessing additional mailboxes, calendars, or Public Folders on an Exchange 2010 server. See step 2 in the previous section; this typically indicates that NTLM is not enabled for Outlook Anywhere on the legacy CAS.
Troubleshooting Logs and Tools
HTTP Proxy RPCHTTP Logs
In Exchange 2013, there are several logs in the logging folder. For Outlook clients, one of the first logs to examine is the HTTP Proxy log on the CAS. The connection walkthrough section shows the process used to connect to Exchange 2013, and this complete process is logged in the HTTP Proxy log. Also, if possible, add a Hosts file entry on the client that points to a single CAS to reduce the number of logs to review.
The logs are located on the CAS here: C:\Program Files\Microsoft\Exchange Server\V15\Logging\HttpProxy\RpcHttp
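A simple way to narrow these logs to a single user is to search them for the user's SMTP address or alias. This is only a sketch (the address is a placeholder), but it is often enough to locate the relevant requests and the servers involved:
Get-ChildItem 'C:\Program Files\Microsoft\Exchange Server\V15\Logging\HttpProxy\RpcHttp' -Filter *.log |
    Select-String -Pattern 'user@contoso.com' |
    Select-Object -First 20 Filename, LineNumber, Line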
HTTP Proxy AutoDiscover Logs
Exchange 2013 also has HTTP Proxy logs for AutoDiscover, similar to the logs described earlier, that can be used to determine whether AutoDiscover is failing.
The logs are located on CAS here: C:\Program Files\Microsoft\Exchange Server\V15\Logging\HttpProxy\AutoDiscover
HTTP Error Logs
HTTP Error logs record failures that occur in HTTP.SYS before the request reaches IIS. However, not all errors for connections to web sites and app pools are seen in the httperr log. For example, if ASP.NET threw the error, it may not be logged in the HTTP Error log. By default, HTTP error logs are located in C:\Windows\System32\LogFiles\HTTPERR. Information on the httperr log and its codes: https://support.microsoft.com/en-us/kb/820729/
IIS Logs
IIS logs can be used to review the connections for RPC/HTTP, MAPI/HTTP, EWS, OAB, and AutoDiscover. The full data for MAPI/HTTP and RPC/HTTP is not always written to the IIS logs, so a successful connection (status 200) may not always be seen. IIS status codes: https://support.microsoft.com/en-us/kb/943891
In Exchange 2013 the IIS logs on the CAS should contain all user connections on port 443. The IIS logs on
the Mailbox server should only be connections from the CAS server on port 444.
Most HTTP connections are first sent anonymously which results in a 401 challenge response. This
response includes the authentication types available in the response header. The client should then try
to connect again by using one of these authentication methods. Therefore, a 401 status found inside an
IIS log does not necessarily indicate an error.
Note that an anonymous request is expected to show a 401 response. You can identify anonymous requests because the domain\username is not listed in the request.
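As a rough sketch of reviewing these logs in bulk, the following tallies status codes per URL from a set of IIS logs by reading the #Fields header line; the log path is a placeholder and the field names (cs-uri-stem, sc-status) are the standard W3C ones:
$logPath = 'C:\inetpub\logs\LogFiles\W3SVC1\u_ex*.log'   # placeholder; point this at your IIS log location
$fields  = $null
Get-Content $logPath | ForEach-Object {
    if ($_ -like '#Fields:*') {
        # Capture the column layout from the W3C header line
        $fields = ($_ -replace '^#Fields: ').Split(' ')
    }
    elseif ($fields -and $_ -notlike '#*') {
        $values = $_.Split(' ')
        [pscustomobject]@{
            UriStem = $values[$fields.IndexOf('cs-uri-stem')]
            Status  = $values[$fields.IndexOf('sc-status')]
        }
    }
} | Group-Object UriStem, Status | Sort-Object Count -Descending | Select-Object -First 20 Count, Name
Repeated 401 responses on requests that do include a domain\username, or clusters of 500-series responses against the Outlook-related virtual directories, are worth a closer look.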
RPC Client Access (RCA) Logs
The RCA logs can be used to find when a user made a connection to their mailbox or to an alternate mailbox, errors that occurred with the connection, and more. The RCA logs are located in the logging directory at %ExchangeInstallPath%\Logging\RpcClientAccess. By default, these logs have a maximum size of 10 MB, roll over when the size limit is reached or at the end of the day (based on GMT), and the server keeps 1 GB in the log directory.
Outlook ETL Logging
ETL logs are located in %temp%\Outlook Logging and are named Outlook-#####.etl. The numbers are randomly generated by the system.
To enable Outlook logging
In the Outlook interface:
Open Outlook.
Click File, Options, Advanced.
Enable “Enable troubleshooting logging (requires restarting Outlook)”
Restart Outlook.
How to enable Outlook logging in the registry:
Browse to HKEY_CURRENT_USER\Software\Microsoft\Office\xx.0\Outlook\Options\Mail
DWORD: EnableLogging
Value: 1
Note: xx.0 is a placeholder for your version of Office. 15.0 = Office 2013, 14.0 = Office 2010
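If you prefer to set this value with PowerShell, a minimal sketch (assuming Office 2013, so the 15.0 key; substitute your version as noted above) looks like this:
New-ItemProperty -Path 'HKCU:\Software\Microsoft\Office\15.0\Outlook\Options\Mail' -Name EnableLogging -PropertyType DWord -Value 1 -Force
Restart Outlook afterward for the change to take effect.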
ExPerfwiz (Perfmon for Exchange)
You can use Perfmon for issues that you suspect are performance related.
http://experfwiz.codeplex.com/
Exchange 2013 also has daily performance logs that capture the majority of what is needed. These logs are located in C:\Program Files\Microsoft\Exchange Server\V15\Logging\Diagnostics\DailyPerformanceLogs
Log Parser Studio
Log Parser Studio (LPS) is a GUI for Log Parser 2.2 that greatly reduces the complexity of parsing logs. It can parse many kinds of logs, including IIS logs, HTTPERR logs, event logs (both live and EVT/EVTX/CSV), all Exchange protocol logs from 2003 through 2013, any text-based logs, CSV logs, and ExTRA traces that have been converted to CSV. LPS can parse many gigabytes of logs concurrently (we have tested with total log sizes of more than 60 GB).
Download: https://gallery.technet.microsoft.com/office/Log-Parser-Studio-cd458765
Blog with Tips/How To’s: http://blogs.technet.com/b/karywa/