Wednesday, December 30, 2009

To virtualize Exchange or not to virtualize Exchange?

A very common question I hear when talking to customers is whether or not they should run Exchange in a virtualized environment. The answer I give them is multifold: yes, there are benefits to running Exchange virtually, but you must carefully think first about the prerequisites and make sure it is the best move for your environment.

Virtualization is not magical. Some people have the misunderstanding that because you run virtualized, you don’t need to size your servers. This line of thinking will quickly get you in trouble. You still need to figure out how much RAM, CPU and storage you need, both for volume and for performance reasons. The rules are simple: scale your server in the same way you would if you were using physical hardware, and then apply it to your virtual server.
If you figure out that your server is going to need 16GB of RAM and 1000 IOPS, then make sure that the virtualized server has the same resources available; otherwise, your Exchange servers will have performance issues.

Performance. When thinking about your hardware’s performance, take into account how virtualized environments are typically deployed. Most companies deploy one or more physical servers, possibly in a clustered configuration, each hosting several virtual servers. This means that each physical server must be able to handle the combined load of every virtual server instance running on it.
With today’s hardware, this is most likely not a problem for CPU or memory (CPU and RAM are not that expensive and can be added later if needed), but storage is another story. Every server needs disks, both to boot from and to save application data to. If you run 5 virtual servers on one physical server, the physical server must have 5 times the disk volume, and, just as importantly, 5 times the disk performance.
For example, let’s consider the following configuration:
Imagine that you have 5 virtualized servers, each with a 50GB disk for the OS and a 100GB disk for data; that is 150GB times 5 servers, or 750GB in total. Volume is no problem, since a single modern disk can easily hold 750GB. But if you had run those 5 servers on physical hardware, you might have put 4 spindles in each and created 2 mirrors of 2 disks each, which would give you fairly good disk performance. With 5 servers in that configuration, you end up with 20 disks.
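To make the arithmetic explicit, here is the example worked out as a quick shell calculation (the numbers are the ones from the scenario above):

```shell
# Volume: each virtual server has a 50 GB OS disk and a 100 GB data disk
per_server_gb=$((50 + 100))
total_gb=$((per_server_gb * 5))

# Spindles: 4 physical disks per server (two 2-disk mirrors), 5 servers
spindles=$((4 * 5))

echo "${total_gb} GB of volume, ${spindles} spindles of IO capacity"
# prints: 750 GB of volume, 20 spindles of IO capacity
```

The point of the comparison: a single 750GB disk satisfies the volume requirement, but the physical design delivered the IO of 20 spindles.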

Now compare that disk performance to the performance of a single disk. Because of disk performance, Exchange designers have long been forced to think in number of spindles rather than volume, but this knowledge isn’t widespread, and there is a chance that the people who maintain the virtualization platform don’t have it. The upshot is that you will most likely end up with your virtualized environment connected to a pretty beefy storage system containing a lot of disks, just to meet the IO performance requirements.

Virtual hosts affect each other. With several virtual machines running on a single piece of hardware, one virtual machine running very high on CPU can drain the physical server of CPU resources, leaving the other virtual machines with little to nothing and causing them to perform slowly. Virtualization platforms have configuration settings to limit this behavior, both to cap how much a virtual machine can consume and to reserve a minimum amount of resources for each one.
This applies not just to CPU, but to every other resource shared on the physical server.

Virtualization adds complexity. By now you’ve gathered that virtualization adds complexity. The most likely scenario is that you end up with a bunch of virtualized servers with disk files located on a SAN. There is also a good chance that you need some more education to maintain not just the ordinary Exchange server environment, but also to manage the virtualization platform as well as the SAN infrastructure.
There is always going to be a time when something happens to the environment for unknown reasons, which inevitably leads to really complex troubleshooting. In smaller companies there is often one person maintaining everything in IT, but in larger companies there are several departments each maintaining one piece of the puzzle, making troubleshooting even more complex! Situations might arise where several people blame each other, saying things like ‘There is nothing wrong with my SAN!’ I’m sure many of you have taken part in such conversations.

Virtualization is not free. The complexity mentioned above is not free, unfortunately. Education costs money and time. If you also add the time spent maintaining multiple systems, and most likely some troubleshooting to get them working together, you can easily see that it will cost more than a standard Windows server with Exchange on it. An ordinary Windows technician should be able to maintain a standard Windows box with perhaps some locally attached disk drives.
On the other hand, what can be free under some circumstances is the software license for the virtualization platform. Keep in mind, however, that this is often a small amount of money compared to all the labor and education costs that will be incurred. Plus, there might be costs associated with putting the environment in a SAN.

Flexibility. Virtualization technology is great for flexibility. It is often very simple to add resources such as disk or memory to a virtualized server, and if done correctly it is easy to add servers as they are needed. With such easy provisioning, virtualization is great for lab environments, where you often need to add or restore servers quickly. In a lab you can also test things without being afraid of breaking your systems, since they are easy to restore.
Then there is the matter of patching a running system. The process for patching a virtualized server is the same as for a physical one. Some people argue against this and say that they can take a snapshot of the server before patching, and if something breaks they will just roll back to the snapshot. This, however, is not feasible with Exchange, since Exchange relies on configuration stored in Active Directory.

Consider this scenario:
There is a patch for Exchange available that you want to install, so you take a snapshot of your virtualized Exchange server and apply the patch. After the reboot or restart of services you notice that the patch doesn’t work in your environment. ‘No problem,’ you think, ‘I will just roll back to my snapshot.’ This is not the best course of action, however. Not only will you “go back in time”, losing all mail received between the moment the snapshot was taken and the present, but the patch may also have written or changed something in Active Directory. After rolling back, the pre-snapshot installation doesn’t like the newer information present in Active Directory, causing Exchange to fail. I don’t think this will be a common problem with Exchange, but it is something to be aware of. My recommendation: if you must roll back, snapshot and roll back AD and Exchange together, not separately.
Exchange supportability. Running virtualized is not limited to Microsoft technology with Hyper-V. Microsoft has a program called the Server Virtualization Validation Program (SVVP), which allows other vendors to put their virtualization technology through tests so that it can be validated and approved by Microsoft. Running Exchange on a hypervisor from an SVVP-validated vendor is supported by Microsoft, so you are not limited to Hyper-V.

Microsoft has published a document with policies and recommendations for running Exchange 2007 and Exchange 2003 virtualized.
Information about Exchange 2010 will be published shortly, but in essence it will be very much the same as for Exchange 2007.
This document should be read carefully by the people doing Exchange design work, otherwise you may end up out of support from Microsoft. The most important point is to make sure that your storage and high availability are designed correctly.

Security is of course something to think about, and virtual servers should be treated the same way as physical ones: they must be protected and patched just like any other server. There is also the question of who can access and manage the virtualized environment. Running Exchange in a virtualized environment adds complexity in terms of security, and that is definitely something to consider carefully.

Ask yourself some questions:
What if someone could get access to the physical servers and simply copy the disk file? That person could then sit quietly and try to extract data from the file. It is therefore important to think about permissions on files, servers and management tools.
What if you move a virtualized Exchange server to a physical server that is not located inside a locked computer room, or to hardware that doesn’t have the needed resources? As you can see, security around virtualization involves many components.

Now that you have all the information, let’s return to the original question about running Exchange in a virtualized environment. Ready for my final answer? Yes, it can be done and there are many benefits to be gained, but you must pay close attention to the design outside of Exchange and keep in mind all the prerequisites, pitfalls, and misconceptions you could face.

Friday, November 27, 2009

Looking for help upgrading to Exchange 2010

Are you looking for information on upgrading from an earlier version of Exchange to Exchange 2010?

One of the first things you should do is visit the Exchange 2010 Deployment Assistant. This wizard gives step-by-step guidance on what you need to do before, during and after the upgrade from Exchange 2003/2007 to Exchange 2010.

Try it out, it will really help you understand what you need to do.

Monday, November 9, 2009

Exchange 2010 released to public

Exchange 2010 has now been released to the general public. Here is the Exchange 2010 download link that I promised in my previous post Exchange 2010 is code complete.

As Exchange 2010 is now officially released to the public, Forefront Security for Exchange 2010 has also been released. Download Forefront for Exchange here

Wednesday, October 14, 2009

Exchange 2010: The New Archiving Feature

There is a lot of buzz surrounding the new archiving feature in Exchange Server 2010. But where there is buzz there are always the unavoidable rumors and misunderstandings surrounding the new feature.

When you ask an Exchange administrator about archiving, most of them think of an archiving product as a tool that replaces messages and/or attachments with a shortcut, often called a stub, and then takes the original item and stores it in the archive system. This is a deeply entrenched misunderstanding, and when Microsoft revealed that the Exchange 2010 archiving function does not take items away from the Exchange store, people started to shout and exclaim, ‘This is not a true archiving solution!’ I admit that at first I agreed with this outcry from the Exchange community, but the more I thought about it, the more I realized there are good reasons to keep items inside Exchange.

First, put yourself in the shoes of a regular user. He or she often has a mailbox quota enforced, and when the warning message arrives, users typically move items out of Exchange and store them in a PST file. This seemingly innocent move causes a very big problem. PST stands for Personal Store, and as the name implies, PST files are meant to be stored locally on end users’ hard drives. The result? Those mail items are no longer backed up, and even if you go the unsupported route and save the PST file on a file share somewhere, backup software often has trouble backing up these files since Outlook keeps them open. What about special tricks, such as open-file agents, you might ask? Unfortunately, the backup software will still have difficulty backing up open files even with such tricks. Outlook also changes the archive bit on the PST file, which triggers the backup software to back it up even when nothing in the file has changed. This can make backups run for an extensive length of time, since there are typically many PST files scattered across numerous file shares.

Another roadblock administrators may face when storing PST files “over the network” is that networks are unreliable and do not always function properly. Even when the network is working, users are prone to closing the lid on their laptops, dropping the network link and corrupting the PST file since it was not properly closed by Outlook. This is also the main reason why PST files are not supported on file shares. The corrupt PST file is notorious for prompting end users to call the help desk, essentially forcing the administrator to initiate a restore of the (hopefully backed up) PST file. Other problems with PST files on a share include, but are not limited to, slow performance over the network when opening files and when closing Outlook.

The risk of taking data out of Exchange and storing it inside PST files is that you are moving corporate data from a safe environment located inside Exchange databases to an unsafe environment. Since PST files can easily be corrupted and/or lost, they are not a secure alternative to storing business-critical data. By moving corporate data out of Exchange you may in essence be breaking laws regarding retention and compliance because the administrator no longer has control of email content. Let us not forget that corporate assets are in danger of being lost by moving data out of Exchange.

Other issues to consider include legal discovery and reducing the burden of searching and restoring mail data. When moving data from Exchange to PST files, you have the potential of losing all those things.

Other archiving solutions often solve all or many of the aforementioned problems by using the “stub” approach, and can provide some kind of search capability.

The stub approach is something that most vendors claim is a viable alternative, but keep in mind that it introduces problems of its own: items removed from Exchange are no longer indexed and searchable from a native client, forcing you to use the vendor’s client. That means installing and maintaining another client, which can be complex both for the end user and for the administrator. Most vendors also claim that the stub approach reduces the amount of data in Exchange. That is often true, but in many cases you do not reduce the data as much as you expect, since the stub is itself an item in the Exchange database, a couple of KB in size. By replacing a 10KB mail with a 5KB stub, you only save 5KB. This is something to consider if you import PST files to Exchange and then archive the imported items: doing so will in fact increase mailbox size by a couple of KB per imported item, even if you later archive it.

Microsoft’s approach to archiving in Exchange 2010 is not to move items out of Exchange and store them somewhere else, but to leave them inside Exchange. There are several reasons for this. By leaving data inside Exchange, neither users nor administrators need to learn and manage another system. The end user experience is the same as having a PST file connected to Outlook, and users can still drag and drop mail back and forth between their mailbox and the archive, making it incredibly simple for users to take their PST files and import them into the archive. Administrators are happy because they no longer have to cope with all the problems caused by PST files, and users are happy because the archive is indexed and searchable. Some companies must also comply with regulations and policies that force them to search across multiple mailboxes. This too is built in, and is performed from the Exchange Control Panel (ECP) by users who have been delegated permission to do so. You can also put a mailbox on ‘Litigation Hold,’ meaning that even if a user deletes items, empties the ‘Deleted Items’ folder, and clears the dumpster area, mail is still retained in the new Exchange 2010 dumpster version 2 area. This area is not reachable by end users, only by members of the ‘Discovery Management’ role group.
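As a sketch of how this looks from the Exchange Management Shell (the mailbox and user names here are made up; double-check the cmdlets and parameters against your Exchange 2010 build):

```powershell
# Put a mailbox on litigation hold so deleted items are preserved
Set-Mailbox -Identity "Anna Lindqvist" -LitigationHoldEnabled $true

# Let a compliance officer run multi-mailbox searches from ECP
Add-RoleGroupMember "Discovery Management" -Member "legal.officer"
```

These commands require a live Exchange 2010 organization and the appropriate RBAC permissions to run.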

But what about the increased database size in Exchange?
The archive is technically an additional mailbox in the same database as the primary mailbox, and it shows up in Outlook in the same way a PST file would. People often react and say that the archive must not be in the same database because it should live on cheaper storage. That’s fair, and most relevant if we are talking about Exchange 2003. It’s well known that the IO load Exchange 2007 places on your storage dropped by approximately 70% compared to Exchange 2003, and with all the enhancements in Exchange 2010 the IO footprint has dropped by about the same amount again, making it possible to run all your Exchange databases on cheap, lower-performing disks. This means you don’t need a costly SAN for your Exchange databases; you can in fact use cheaper storage such as SATA disks.

What about backup and restore time?
With the increased volume in Exchange databases you might think that backup time will increase, but that is not entirely true. The streaming backup API has been removed from Exchange 2010, and what is left is the VSS API. With VSS you only back up the changed blocks on disk, and most of the mail in the archive is just sitting there, so its blocks never change. Sadly, the story for restore is not improved: with increased volume you also get increased restore time. But there is a simple solution for that: don’t do backups. This is a very controversial thing to say, but with an Exchange 2010 Database Availability Group (DAG) you can replicate data to several mailbox servers (up to 16), and if a database or disk blows up, another copy of your databases is made primary. Most likely the end users won’t even notice, since clients no longer connect directly to mailbox servers but go through Client Access Servers (CAS). You can also stretch DAG members across datacenters to cover the case where a whole datacenter stops working; that also gives you an offsite replica of your data. So the question to ask is: why do you back up your Exchange data?
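A minimal DAG setup can be sketched from the Exchange Management Shell like this (the DAG, server and database names are hypothetical; in practice you would also specify a witness server and directory, so treat this as the shape of the commands rather than a complete procedure):

```powershell
# Create the DAG and add two mailbox servers to it
New-DatabaseAvailabilityGroup -Name DAG1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX2

# Create a second copy of an existing database on the other server
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer MBX2
```

Once the copy is healthy, a failed database on MBX1 can be activated on MBX2 with no change visible to clients connecting through the CAS layer.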

Why bother creating an archive mailbox at all?
Whether to create an archive mailbox is probably a decision you base on the size of each user’s data volume. Only the primary mailbox is synchronized to Outlook’s local cache, and Outlook has performance issues when the OST file grows large. By moving data to the archive, the OST file stays smaller and Outlook performs better in cached mode. To see the archive you must be online, with connectivity to Exchange, and be using Outlook 2010 or OWA 2010, hence the name ‘Online Archive.’
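Enabling the archive is a one-liner in the Exchange Management Shell (the mailbox name is made up):

```powershell
# Create an online archive for an existing mailbox
Enable-Mailbox -Identity "Anna Lindqvist" -Archive
```

The archive then appears automatically in Outlook 2010 and OWA 2010 for that user.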

Must users manage their archive manually?
No, the administrator can create policies that either move mail from the mailbox to the archive or delete mail. This is similar to what was first introduced in Exchange 2003 as recipient policies, which were often used to clear things out of mailboxes into the ‘Deleted Items’ folder or to delete items completely. In Exchange 2007 this feature was enhanced a bit and renamed Messaging Records Management (MRM); it shows up as Managed Folders in the EMS and EMC. Exchange 2007 also introduced ‘Organizational Folders’ that could hold policies regarding how long mail must be kept and what to do when it expires.
The problem with MRM version 1 was that a policy could only be applied to a folder or a type of message, not to an individual mail. With Exchange 2010 the administrator can still apply policies to folders the old way, but there are also new policies that let users apply a policy directly to individual mail items (MRM version 2, if you will). Policies are created by the administrator, and depending on how they are created, users can then apply them to folders or to individual mail items.

The administrator can set different quotas on the primary mailbox and the archive. An example would be that the primary mailbox quota is 2GB and the archive is 15GB. With a couple of policies the administrator can choose to delete everything older than 10 years, and messages older than 1 year are moved to the archive. There also exist a couple of user policies that a user can set, allowing them to: ‘Keep this message for 5 years,’ ‘Keep this message for 1 year,’ ‘Delete this message in a month’ or ‘Delete this message in 5 months,’ giving a very flexible and efficient way of managing messages in Outlook.
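The policy example above could be sketched in the Exchange Management Shell roughly like this (the tag, policy and mailbox names are made up; verify the parameter names against your Exchange 2010 build before using them):

```powershell
# Delete everything older than 10 years
New-RetentionPolicyTag "Delete-10-Years" -Type All `
  -AgeLimitForRetention 3650 -RetentionAction DeleteAndAllowRecovery

# Move items older than 1 year to the archive
New-RetentionPolicyTag "Archive-1-Year" -Type All `
  -AgeLimitForRetention 365 -RetentionAction MoveToArchive

# Group the tags into a policy and assign it to a mailbox
New-RetentionPolicy "Standard-Archive-Policy" `
  -RetentionPolicyTagLinks "Delete-10-Years","Archive-1-Year"
Set-Mailbox "Anna Lindqvist" -RetentionPolicy "Standard-Archive-Policy"
```

Note that age limits are expressed in days, which is why 10 years becomes 3650.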

Microsoft definitely has a good thing going with its Exchange 2010 archiving solution. For those of you not swayed yet, keep in mind that this is the first version of archiving within Exchange. The archive makes it possible to get rid of PST files, and with them all the problems they cause. Any administrator would agree that having data safely inside the Exchange store, managed and searchable with native Exchange tools, instead of maintaining extra software and hardware, is worth disregarding the rumors and misconceptions surrounding this brand new feature.

Friday, October 9, 2009

Exchange 2010 is code complete

As you may have heard, the long wait for Exchange 2010 is over since it is code complete.

While we are waiting for a download link or a DVD to run setup from, there is some more info on the Exchange Team Blog

Monday, September 14, 2009

Forefront for Exchange and Windows built-in Firewall on a CCR cluster

Are you running Exchange 2007 in a CCR configuration with the Windows firewall turned on?

Then you have probably encountered the error “ERROR: cannot connect to service” when starting the Forefront for Exchange administrator.

The solution is to allow some traffic through the Windows firewall as stated in KB929073.
This will allow you to start Forefront admin tool on the node running the Exchange Clustered Mailbox Server (CMS).
But you will still get an error when launching the Forefront admin tool on the passive node and connecting over the network to the CMS.

The solution is to create two firewall rules that allow the traffic. These can be created in the GUI, but they are easier to describe as commands:

netsh advfirewall firewall add rule name="Forefront for Exchange Controller Service" dir=in action=allow program="C:\Program Files (x86)\Microsoft Forefront Security\Exchange Server\FSCController.exe" description="Allow connection to Forefront for Exchange controller service" enable=yes profile=any localport=RPC protocol=TCP security=notrequired

and the second rule

netsh advfirewall firewall add rule name="Forefront for Exchange Admin tool" dir=in action=allow program="C:\Program Files (x86)\Microsoft Forefront Security\Exchange Server\FSSAClient.exe" description="Allow connection to Forefront for Exchange admin tool" enable=yes profile=any localport=RPC protocol=TCP security=notrequired

You need to do this on both nodes and also restart the Forefront Controller service; note that this will also restart several other services.

You have to change the path in the commands if you installed Forefront in a different location than the default.

You can also narrow down where connections may come from with the remoteip parameter, and the network classification with the profile parameter:

netsh advfirewall firewall add rule name="Forefront for Exchange Admin tool" dir=in action=allow program="C:\Program Files (x86)\Microsoft Forefront Security\Exchange Server\FSSAClient.exe" description="Allow connection to Forefront for Exchange admin tool" enable=yes profile=domain localport=RPC protocol=TCP security=notrequired remoteip=localsubnet



Another important thing when running Forefront for Exchange in a CCR environment is to have the ‘Redistribution server’ checkbox under ‘General Options’ checked; otherwise the passive node will not be able to get updates from the active node.

Sunday, September 13, 2009

Exchange 2007 Service Pack 2 backup feature

There has been a lot of disappointment over the missing native backup capability for Exchange 2007 when running on Windows Server 2008. There was so much uproar that Microsoft decided to do something about it. With Exchange 2007 Service Pack 2, native Exchange backup capability is back.

It is not separate software, nor anything visible in a GUI somewhere; it is simply an added capability of Windows Server Backup, since Windows Server 2008 no longer uses NTBackup.

Windows Server Backup is aimed at small, or perhaps medium-sized, organizations, and also serves as a test and troubleshooting tool, so don’t expect too much of it.

How to do a backup of Exchange with Windows Server backup?
First you must install Windows Server Backup. This can be done from Server Manager (Add Features, then select the Windows Server Backup feature) or from a command prompt with “ServerManagerCMD.exe -i Backup Backup-Tools”. No reboot is required after the installation.
Then start Windows Server Backup on your Exchange server. You must do the backup locally; there is no over-the-network backup capability. A shortcut is created during installation in Administrative Tools, or you can run “wbadmin.msc”.

After Windows Server Backup has started you can create a scheduled backup or simply do a single backup. Click the “Backup Once…” link in the Actions pane and the Backup Once Wizard starts. Select “Different Options” and click “Next >”, then select either “Full Server” or “Custom”; if Custom is selected, you are prompted to choose which disks to back up.
There is no option to back up only the Exchange databases or to select individual folders; the granularity is the complete disk and nothing smaller.
Select the disks that you know contain Exchange files. You will not see any indication that Exchange databases and transaction log files are selected for backup; you must know where your Exchange files are located. Click “Next >”.
The next option is to select the destination for the backup. If you select local drives, you must choose a destination drive that is not included in the backup. The other option is a remote shared folder, in which case you enter a shared folder on another server on your network. The backup is created in a subfolder called WindowsImageBackup, with access control of either Inherit or Do not inherit (meaning you specify an account that is granted permission to the backup files).
If the destination already contains a backup, it will be overwritten.
The next choice is between a VSS copy backup and a VSS full backup; there is no option for the traditional streaming backup or an incremental backup.
A copy backup simply copies the selected disk or disks to the destination. A full backup does the same but also purges the application log files, that is, the Exchange transaction log files. The full backup is the option you should select if you are not using any other backup software on your Exchange database files.

The backup is placed in a folder called WindowsImageBackup and consists of a couple of XML files plus the selected disks stored as VHD files. Virtual Hard Disk (VHD) files are pretty cool: VHD is the native hard disk format for Microsoft virtualization technologies, and you can copy or transport these files around and mount them on other computers for examination or whatever reason you might have.

Doing recovery of Exchange databases with Windows Server Backup.
When doing a recovery, the database in question must be dismounted first. In a real-world scenario it already is, since you probably lost a hard drive, or you did a disaster recovery installation of the server and it is now time to recover the databases. If needed, Windows Server Backup will dismount the databases for you.

Start the recover wizard by clicking “Recover …” in the actions pane in Windows Server Backup.
A recovery of Exchange database and transaction log files can either go to the original location, replacing the original files, or be directed to a Recovery Storage Group (RSG) or even to a different server.
For this exercise, select the local server, then select the time of the backup you want to recover from. Next select Applications, since it is the Exchange application data we want to recover, and then select Exchange. If you click the “View Details” button you will see the databases that will be recovered; you cannot select individual databases, all of them will be recovered. To get more granularity you must create multiple backups with different databases in each, and for this to work each database and its transaction log files must be on a separate disk, since the smallest unit of granularity is the complete disk. The checkbox “Do not perform a roll-forward recovery of the application databases” means that Exchange will not roll transaction log files forward into the recovered database. The next option is to recover to the original location or to another location; select the original location. Recovering to a different server or a different location will be explained in another article.
Click Recover on the Confirmation page if you are satisfied with your selections, to start the recovery of your Exchange databases.
If you did not dismount the databases before starting the recovery, Windows Server Backup will dismount them for you; permission to dismount databases is required for this to work as expected.
If everything goes well, database and transaction log files are read from the backup and written back to disk, and finally the transaction log files are replayed into the database.

Does it work? The answer is: it depends. It works for taking backups and restoring in a disaster scenario. It does not behave as expected when restoring to a different server or a different location, such as a Recovery Storage Group.

Windows Server Backup with the Exchange plug-in has no granularity when backing up or recovering; it recovers whatever is in the backup. It is an all-or-nothing type of backup and restore.

Windows Server Backup can be configured for scheduled backups from the GUI, but you can also run it from the command line. This gives you the flexibility to build your own schedules with different options. I am thinking of destinations here, since each destination can only hold one backup; if you do multiple backups to the same destination, only the last one is preserved.
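As a sketch, a one-time VSS full backup from the command line could look like this (the drive letters are placeholders for wherever your Exchange files live and where your backup target is):

```shell
wbadmin start backup -backupTarget:E: -include:C:,D: -vssFull -quiet
```

Wrapping commands like this in a scheduled task with rotating backup targets works around the one-backup-per-destination limitation.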

Command line reference for Windows Server Backup.

How to use the command line and to do a restore to the Recovery Storage Group is a story for another day.

Tuesday, August 25, 2009

Exchange 2007 Service Pack 2 is released

Finally, Exchange 2007 Service Pack 2 has left the building and is downloadable by the public.

Of course it contains all the bug fixes included in previous update rollups for Exchange 2007, but it also contains some new features, such as a plug-in for Windows Server Backup that natively backs up Exchange without buying any 3rd-party software. This has been a long-standing request that is now fulfilled.

Some other noticeable features included in SP2
* Enhanced Auditing
* Public Folder Quota
* Configure Diagnostic logging via GUI/Exchange Management Console
* Named properties bloat will stop, since SP2 no longer promotes x-headers to MAPI properties. This is the same behavior as in Exchange 2010. See my earlier postings about Named properties bloat part 1 and Named properties bloat part 2
* Dynamic AD schema update

But before even installing SP2, you must first extend the Active Directory schema. Prepare your AD team for this!

And if you’re planning on upgrading to Exchange 2010, SP2 for Exchange 2007 is a prerequisite and must be installed on every Exchange 2007 server before Exchange 2010 can be introduced into an Exchange 2007 organization. Read my other article about the transition to Exchange 2010: Thinking about Exchange 2010? Understand the Prerequisites

Some more information about Exchange 2007 SP2

Download link for Exchange Server 2007 SP2. Be careful to read the “Release notes” and “What’s new in Exchange 2007 SP2” documents on the download page before installing.

Tuesday, August 18, 2009

Exchange 2010 is starting to look good

Now that the RC (Release Candidate) is out in public (see the Exchange 2010 Release Candidate info page), I can only say that it’s starting to look really good. Features that were previously only talked about are now in this build, and they work pretty well.

From now until the RTM version, it is mostly bug fixing and performance tuning.

For those of you looking for a 32-bit version of Exchange 2010, the answer is: there will not be a 32-bit version of any kind. No demo version, no admin tools, nothing.

Sign up for download here

Here is the Exchange team info msexchangeteam blog

Friday, July 31, 2009

Some thoughts about applications sending SMTP messages to Exchange

Most organizations have applications that need to send mail, either to internal recipients or via an SMTP server that relays their mail to external recipients.

A common configuration is to figure out the IP address of the server the application runs on and then enter that IP on the allow-relay list of the Exchange 2003 virtual server; if they run Exchange 2007, they are often a little puzzled until someone shows them the MSExchangeteam article 'Allowing application servers to relay off Exchange Server 2007'.

Why do I think this is bad?
First, most applications don't need relay permission. Admins often think they do, but the truth is that they don't.
Servers are not static: IPs change and names change, both on the application side and on the Exchange side. And if the server gets infected with something bad, it is not just the application on that server that is allowed to relay, but everything running on it, since it is the IP address that is granted relay.
Remember this: IP restrictions are not authentication.
I have also seen applications that are hardcoded to connect to a specific IP or name, meaning that neither the IP nor the name can be changed on the Exchange server.
The resolution, of course, is to make applications easy to reconfigure with the destination SMTP server's name.

Another problem arises when some kind of anti-spam software runs on Exchange. This often causes the applications' mail to be classified as spam and leaves Exchange admins trying to configure the anti-spam software to whitelist it. Sometimes this can be done and sometimes it can't, making the admin workload bigger than before.
Developers often do a good job of building applications but are very often bad at SMTP. They find a free SMTP engine on the Internet, stick it into their applications, and in the end they manage to send an SMTP mail, but it is often badly formatted in various ways, making the anti-spam software react and classify it as spam.
The resolution here is, of course, knowledge about SMTP: educate developers in it.
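As an illustration of what "well formatted" means in practice, here is a minimal sketch using Python's standard email library; the addresses and subject are made up. Basics like a proper From/To, a Date header and a unique Message-ID are among the things anti-spam filters commonly score on.

```python
from email.message import EmailMessage
from email.utils import formatdate, make_msgid

# Build a message carrying the headers a reasonable filter expects.
# All addresses below are placeholder examples.
msg = EmailMessage()
msg["From"] = "app1@example.com"
msg["To"] = "helpdesk@example.com"
msg["Subject"] = "Nightly report"
msg["Date"] = formatdate(localtime=True)           # missing Date looks spammy
msg["Message-ID"] = make_msgid(domain="example.com")  # unique per message
msg.set_content("Report body goes here.")
```

A hand-rolled SMTP engine that skips these headers, or mangles line endings, is exactly the kind of mail that gets flagged.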

Back to the relaying part of sending mail. One very good solution is to have the submitting application authenticate the SMTP session. By sending authenticated SMTP mail to Exchange, it gets permission to relay, and it will most likely bypass anti-spam software (depending on the software, of course). It also makes the application easier to move to another server without reconfiguring Exchange. Another benefit of authenticated SMTP is that if I authenticate as 'application 1', I am only allowed to use application 1's email address; I cannot use another SMTP sender address.
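The authenticated-submission flow can be sketched like this with Python's standard smtplib; the host name, port and credentials are placeholders, and the connection class is injectable purely so the flow can be exercised without a live server.

```python
import smtplib

def submit_authenticated(host, user, password, msg, port=587, smtp_cls=smtplib.SMTP):
    """Submit a message on the client submission port with STARTTLS and AUTH.

    host/user/password/port are example values; smtp_cls defaults to the real
    smtplib.SMTP but can be swapped for a stub in tests.
    """
    with smtp_cls(host, port) as smtp:
        smtp.starttls()             # protect the credentials on the wire
        smtp.login(user, password)  # the authenticated session may now relay
        smtp.send_message(msg)
```

Because the sender authenticates, Exchange ties the session to that account rather than to whatever IP the application happens to run on.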

My recommendations to developers building applications that send SMTP mail:
* Use a good SMTP engine that actually works. I have encountered one that didn't like the tarpit delay, which you can configure in Exchange 2003 and which is activated by default in Exchange 2007; that engine simply could not cope with the tarpit.
* Use authentication when submitting mail. NTLM is of course better than Basic, but if you use Basic authentication, use it over TLS.
* Make it easy to change the SMTP configuration: server name, sender and recipient SMTP addresses, TCP port, etc.
* Have a redundant SMTP server configuration. The SMTP server you're using may not always be up and running. If mail is critical, consider queue functionality in the application so it can retry sending. One option is the local Windows SMTP service, but that only works if the application runs on a Windows box and the local SMTP service itself is working.
* Use only valid sender and destination SMTP addresses. If there are NDRs, they should go back to an existing mailbox that someone monitors and acts upon.
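The redundancy recommendation above can be sketched as a simple fallback loop; the server names are placeholders and `send` stands in for whatever function actually submits the mail.

```python
def send_with_fallback(servers, send, attempts=2):
    """Try each configured SMTP server in turn; return the one that worked.

    `servers` is an ordered list of host names; `send` is the function that
    actually submits the mail and raises OSError on failure. Raises if every
    server fails on every attempt.
    """
    last_error = None
    for _ in range(attempts):
        for server in servers:
            try:
                send(server)
                return server
            except OSError as exc:
                last_error = exc  # remember the failure, try the next host
    raise RuntimeError("all SMTP servers failed") from last_error
```

In a real application the failed messages would also sit in a local queue so nothing is lost between retry rounds.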

There are of course recommendations to Administrators as well.
* Clearly communicate to developers what the rules are for submitting SMTP mail to Exchange: no hardcoded configuration, no anonymous submission, etc.
* Publish a good name for your SMTP servers, such as 'smtp.<AD domain name>', for developers to use instead of giving them the real server name or IP. With a standardized name across all applications, you can point them at another server when the need arises. If your internal Active Directory name is company.local, your SMTP server name would be smtp.company.local.
* Set up internal MX records for the AD namespace. Same advantage as above.
* If you have multiple Hub Transport servers, load balance TCP port 587 across them and make applications use SMTP submission port 587 (this is the client receive connector that is created by default on Exchange 2007) instead of the default port 25. Don't load balance port 25, since that breaks Exchange functionality such as authentication.
* Be very careful about what mail you let through to the Internet; maybe you should block applications from sending to the Internet on your connectors to protect your good name. Companies have ended up on various blacklists because developers built bad SMTP mail or shipped a buggy application that sprayed mail across the Internet.

There are probably many more options, alternatives, and thoughts around this, but these are some that regularly pop up.

Wednesday, July 29, 2009

OCS Remote Connectivity Analyzer

Some of you might know of the Exchange Remote Connectivity Analyzer that I wrote about a year ago in 'Test Exchange Connectivity website'.

Now there is a new Remote Connectivity Analyzer, this time for OCS (Office Communications Server). This is a work in progress, so don't count on full functionality yet, but with your feedback the tool will improve.

The URL is here

Thursday, July 16, 2009

Update Rollup 9 for Exchange Server 2007 Service Pack 1 (KB 970162) is released

Rollup 9 for Exchange Server 2007 SP1 is released.

Read about all the included fixes and download from here KB 970162

Some noticeable fixes are listed below. Note that this is not the complete list, just a selection; for the full list, see the link above.

947662: The transport rule "when the Subject field or the body of the message contains text patterns" does not work accurately on an Exchange Server 2007 Service Pack 1-based computer

957137: The reseed process is unsuccessful on the CCR passive node after you restore one full backup and two or more differential backups to the CCR active node in Exchange Server 2007 Service Pack 1

959559: Transaction log files grow unexpectedly in an Exchange Server 2007 Service Pack 1 mailbox server on a computer that is running Windows Server 2008

961124: Some messages are stuck in the Outbox folder or the Drafts folder on a computer that is running Exchange Server 2007 Service Pack 1

968205: The Microsoft Exchange Information Store service crashes every time that a specific database is mounted on a computer that is running Exchange Server 2007 Service Pack 1

968621: The Microsoft Exchange Information Store service crashes when you use a Data Protection Manager (DPM) 2007 server to perform a snapshot backup for an Exchange Server 2007 Service Pack 1 server

970086: Exchange Server 2007 Service Pack 1 crashes when the Extensible Storage Engine (ESE) version store is out of memory on a computer that is running Windows Server 2008

Wednesday, July 15, 2009

Update for Forefront Security for Exchange 2007

If you haven’t noticed yet, you can now download “Microsoft Forefront Security for Exchange Server with Service Pack 2”

You can download an evaluation version here

You should read the release notes as there is information about the installation or upgrade if you already have an earlier version installed.
Release notes is here

Thursday, June 18, 2009

Named properties bloat part 2

A while back I wrote about named properties and how they could make your databases run out of rows in a table and crash.

Rollup 8 for Exchange 2007 Service Pack 1 changes the behavior a bit and no longer promotes x-headers for unauthenticated mail, and the upcoming Service Pack 2 will not promote x-headers even for authenticated mail. The same goes for Exchange 2010.

I personally would like an on/off switch for this behavior so that people can configure it as they please. As stated on the msexchangeteam blog, there could be applications that rely on these properties.

Thursday, June 11, 2009

Thinking about Exchange 2010? Understand the Prerequisites

Even though Exchange 2010 is not a finished product yet, and it may seem kind of strange to discuss a transition to it at this stage, there are some things you should know if you are planning to upgrade your current Exchange environment to Exchange 2010 in the future.

Active Directory Prerequisites:

First, there are requirements on Active Directory. All Domain Controllers must be running at least Windows Server 2003 SP2 in the sites where you want to deploy Exchange 2010 servers. In addition, your forest must be at Windows Server 2003 functional level.

You can have Domain Controllers running Windows Server 2008, and even RODCs, but Exchange will not use the RODCs. Domain Controllers should preferably run a 64-bit version of Windows to run more smoothly and better handle the load from Exchange and other clients.

Exchange Prerequisites:

Your current messaging infrastructure cannot contain any release earlier than Exchange 2003. If it does, you must first upgrade/transition to at least Exchange 2003. Exchange 2003 must be at least Service Pack 2, and Exchange 2007 servers must also be running Service Pack 2 for Exchange.

So you can meet all the prerequisites by upgrading your Exchange servers to Exchange 2003 Service Pack 2 and then upgrading all your Global Catalog servers to Windows Server 2003 Service Pack 2.

Other notable items:

Exchange 2010 only runs on a 64 bit architecture. In fact, unlike Exchange 2007, there is not even a 32 bit demo or lab version.

In addition, Exchange 2010 will only work on Windows Server 2008 or Windows Server 2008 R2.

You must also apply schema updates to the Active Directory.

How to make the transition:

The transition begins with building the Exchange 2010 environment in parallel with your current Exchange environment, starting with your Internet-facing sites. The reason for starting there is that you must begin by standing up a new CAS server, not replacing the old one, similar to what you did when introducing Exchange 2007. The big difference is that Exchange 2007 CAS servers can proxy requests to Exchange 2000 and 2003 back-end servers, but Exchange 2010 CAS servers cannot; they will only send the client a redirect to the old Exchange 2003 Front End or Exchange 2007 CAS, depending on where the mailbox is located. This means that you cannot simply replace your current Front End or CAS server; you must live with both the old and the new system side by side for as long as you have mailboxes on old servers. Another thing you must do is move your current certificate to your new Exchange 2010 CAS server and get a new one for the old Front End or CAS server. You will then have, for example, a certificate with the name “” on the Exchange 2010 CAS and “” on the old Front End or CAS server.

When a user connects to an Exchange 2010 CAS and has authenticated, Exchange knows where the mailbox is located; if it is on legacy Exchange, it sends the client a redirect to the legacy URL that you configure. If the mailbox is located on Exchange 2010, everything is good and no redirection takes place.

To make your life simpler, you should consider consolidating your namespace to a single name; otherwise the transition will be more troublesome, with more URLs and certificates to deploy. Another important consideration is that you will need an extra Internet IP address during the transition.

Unified Messaging servers behave the same way CAS servers do: they redirect to the old UM server. So make sure that you send the initial SIP communication to the Exchange 2010 UM server, and it will redirect if needed. This holds if you don't have OCS connected to UM; if you do, you need to create a new dial plan and assign it to the Exchange 2010 UM server.

An Exchange 2010 Hub Transport server will not talk to an Exchange 2007 mailbox server, but it can send mail to an Exchange 2007 Hub Transport server, which in turn can communicate with the old mailbox server. To make this work you also need extra Hub Transport servers, since the old ones must stay around as long as you have legacy mailboxes.

Your existing mailbox servers obviously have to remain in parallel, since both sets of servers must be running to move mailboxes between them. Legacy mailbox servers can be uninstalled once they no longer host any mailboxes. One of the cool new features in Exchange 2010 is Online Move Mailbox. It allows administrators to move a mailbox without the user being disconnected during the process, until the last minute when all mail has been replicated and Active Directory replication takes place. Online Move Mailbox works between Exchange 2010 servers and from Exchange 2007 to 2010.

The only server you can replace outright is the Edge role, and this can happen at any time during the transition, as long as you subscribe it to an Exchange 2010 Hub Transport server. Be aware that not all of these steps work in the current public beta, but they will when Exchange 2010 goes RTM sometime later this year. Also, as tempting as it might be, please do not put the current beta into your production environment.

Saturday, May 30, 2009

Start index Office 2007 files in Exchange 2007

Do you want your Exchange server content index to understand Office 2007 file format?

A default installation of Exchange 2007 doesn't understand how to index the Office 2007 file formats, such as .docx files.

You can discover this by searching from Outlook in online mode, or from OWA, for words that you know are in a .docx file. If the search doesn't find anything, you must enable Exchange to understand the Office 2007 file formats.

This is not difficult. Start by downloading and installing the Office 2007 Filter Pack. The installation is safe and does not stop any services or trigger a reboot.
After installation, you must register the new IFilters so that the Exchange content index engine can use them. This is done by editing the registry, so be careful: How to register Filter Pack IFilters with Exchange Server 2007

When you have followed the article, imported the .reg file, and restarted the services, try sending an e-mail with a .docx file attached and then search for some words that you know are in the Word file. If everything is OK, your search should find the file.

But what if your server already contains Office 2007 files?
Then you can rebuild the search index on Exchange. This is easy to do: stop some services, delete the index files, and then start the services again. To make this easier, there is a PowerShell script that does exactly that. The default location is the “C:\Program Files\Microsoft\Exchange Server\Scripts” folder, and the name is “ResetSearchIndex.ps1”.

The script takes several parameters, but if you want to rebuild the index for every database you can run “.\ResetSearchIndex.ps1 -force -all”.

A word of caution when you rebuild your indexes: this will cause your server to run as fast as its resources allow, which will make the disks spin like crazy and CPU utilization go up a lot. That will make the user experience bad, so the suggestion is to rebuild the indexes outside business hours.

Happy searching!

Wednesday, May 20, 2009

Exchange Server 2003 support shift

For you that still use Exchange 2003, Heads up!!

Exchange 2003 moved from Mainstream to Extended support on April 14, 2009.

What does this mean? Exchange 2003 is still supported for another 5 years.
The difference between Mainstream and Extended support can be found here; what you lose with Extended support is:

  • Non-security hotfix support
  • No-charge incident support
  • Warranty claims
  • Design changes and feature requests

As you can see, there is not that much difference, but this is a clear signal from Microsoft that Exchange 2003's life is coming to an end. This might be a good time to start planning the upgrade to Exchange 2007 or Exchange 2010.

One way of still having full support is to purchase Extended Hotfix support.

It is important to note that you still have support when running Exchange 2003, as long as you pay for your incidents.

Tuesday, May 19, 2009

Why do you backup Exchange databases?

The biggest challenge with Exchange backups facing administrators today is the time factor. This factor comprises both the time necessary to perform the backup and the time necessary to complete the restore. Users want to have more data in their mailboxes; but in order to adhere to the SLA commitment, quotas must be set on users’ mailboxes. The end result? Users get frustrated from having to constantly manage their mailboxes in order to maintain a pre-set quota limit.

This situation is most noticeable if you use the old traditional streaming backup API that has been in Exchange since the beginning and is still there today. Since there has been no active development on it, however, the recommended backup method is to leverage VSS (Windows Volume Shadow Copy Service).

With the streaming backup API, you transfer every bit and byte of the Exchange database to the backup software when performing a full backup. VSS operates a little differently: instead of streaming the whole database file, it reads the changed blocks from disk and hands them over to the backup software. This saves a lot of time when doing differential or incremental backups. Depending on how you configure the backup software to handle VSS, it can perform incremental backups while keeping track of which blocks belong where, and from that an administrator can build up a complete backup if a restore is needed. The benefit is that the backups are always small; only the restore is large and takes a long time.
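The block-level idea can be modeled in a few lines; this is a toy sketch of the bookkeeping, not any vendor's actual VSS implementation, with a database reduced to a dictionary of numbered blocks.

```python
def take_incremental(disk, last_seen):
    """Return only the blocks that changed since the previous backup."""
    return {n: data for n, data in disk.items() if last_seen.get(n) != data}

def restore(full, incrementals):
    """Rebuild the database image: start from the full backup and apply
    each incremental in order, newer blocks overwriting older ones."""
    image = dict(full)
    for inc in incrementals:
        image.update(inc)
    return image
```

Each nightly incremental stays tiny, because only changed blocks are captured, while the restore has to walk the full backup plus every incremental, which is exactly why small backups trade off against a long restore.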

When doing backups with the streaming API, a consistency check is performed automatically by reading every byte of the database file. With VSS, however, you must tell the backup software to do the consistency check; otherwise you risk backing up a bad database file.

Another option is to let your backup software integrate with Exchange and let the storage leverage its own functionality for handling copies of LUNs. In this case, the backup takes only a few seconds as far as the Exchange server sees it. You must still complete the time-consuming task of transferring the files located on the backup LUN to tape or another disk. This solution depends on whether your storage is configured with this functionality and whether your backup software can do the integration.

No matter what backup method you use, all the backup data must be stored somewhere.

What if there were a way to spread your Exchange data across multiple copies that are automatically kept in sync? With Exchange 2007 CCR together with SCR, you can make this happen. CCR is built on a Windows failover cluster, and data is replicated between the two nodes automatically. This produces two copies of the Exchange data, and by introducing SCR you can add a third copy.

What happens if you need more? Simply add additional SCR copies. SCR does not replay the copied transaction log files at once; there is a “lag” before they are replayed into the Exchange database. The lag time can be configured, so you can run CCR with instant replay of transaction log files to keep two copies of the Exchange data always up to date, then have one SCR copy lagging a couple of hours behind and additional SCR copies lagging a couple of days behind.

A common scenario is a user who has accidentally deleted something from his or her mailbox. The delete has replicated to the two CCR nodes, but it may not yet have been replayed on one of your SCR copies. In this case, you can still recover the deleted mail from your SCR copy by stopping replication and then replaying the transaction logs into the SCR database up to the point just before the delete. Finally, mount the database and retrieve the data.
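The lagged-replay recovery can be modeled as a toy sketch. Real SCR works on ESE transaction log files, but the principle of halting replay just before the bad operation is the same; every name below is illustrative.

```python
def replay(database, logs, stop_before=None):
    """Replay transaction logs into a copy of a database, optionally stopping
    before a given log sequence number, the way a lagged copy lets you halt
    replay just before an accidental delete."""
    db = dict(database)
    for seq, op, key, value in logs:
        if stop_before is not None and seq >= stop_before:
            break  # lagged copy: these logs were shipped but not yet replayed
        if op == "write":
            db[key] = value
        elif op == "delete":
            db.pop(key, None)
    return db
```

The up-to-date CCR copies replay everything, including the delete; the lagged copy can stop one log short of it and still hold the mail.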

Another cumbersome task is managing the huge amount of data in Exchange. This data causes a lot of IO on the disk subsystem where Exchange keeps its databases and transaction log files. This is another reason to keep the databases small.

With Exchange 2007, IO drops by 70%, so you can allow the databases to grow without spending a fortune on the storage system.

So what about the idea of running Exchange 2007 in a CCR and SCR configuration to be sure that you have multiple copies of the Exchange data around? You allow users to have large mailboxes, and perhaps you switch your storage from an expensive SAN, both money-wise and technology-wise (specially trained personnel, fibre channel technology, etc.), to a cheaper solution like DAS or iSCSI.

The benefits of this switch include cheaper storage, meaning that users no longer have to delete items from their mailboxes and move them into a PST file. With large mailboxes, the need for a restore is minimized, and therefore also the need for doing backups.

But there can still be a need for backup and restore if your databases and disks fail. Should this happen, you already have multiple copies of the Exchange data, and it is a very simple process to make the other node of the cluster take over as your Exchange server with a healthy set of data; in most cases this is done automatically. You can also manually activate the SCR copy with its own set of data.

All this pertains to Exchange 2007, but Exchange 2010 addresses the same problem with its new cluster technique, the Database Availability Group, or DAG for short.

So the question is: why do you back up your Exchange data? Some may say that they need to store a copy of all data offsite. Although this is a good and valid reason, it is also solved by having the SCR copy located somewhere else; it could even be on the other side of the world!
I am not saying that running without backup will suit everyone, but chances are you will discover that backups are not as necessary as you initially thought.

Tuesday, May 12, 2009

Upcoming Exchange 2007 SP2

What’s the deal, you may ask? Well, there are a lot of cool new features in it, apart from the usual bug fixes.

One of the most requested features is the ability to back up Exchange with the built-in tools in Windows. With Service Pack 2 you get this, via the Exchange volume snapshot backup plug-in for Windows Server Backup.

This is only one feature of many that are added in SP2. Read more at the You had me at EHLO blog site

The only sad thing is that you have to wait until after the summer to get it.

Tuesday, April 28, 2009

Windows Server 2008 Service Pack 2 and Windows Vista Service Pack 2

Service Pack 2 for Windows Server 2008 and Windows Vista is now released.
Take a look here for more information

The good thing is that the Exchange 2010 Beta can be installed on a Windows Server 2008 SP2.

Happy patching

Monday, April 27, 2009

Exchange 2007 books

For those of you out there still interested in learning Exchange 2007, or studying for one of the Exchange 2007 exams (Technical Specialist and IT Professional):
I recently read two books, “MCITP Self-Paced Training Kit (Exam 70-237): Designing Messaging Solutions with Microsoft Exchange Server 2007” and “MCITP Self-Paced Training Kit (Exam 70-238): Deploying Messaging Solutions with Microsoft® Exchange Server 2007”, both of which contain a lot of information about Exchange 2007.
They are written as study guides for the exams, so they begin with instructions on how to build your own lab environment, followed by lessons and, now and then, practice exercises. I like this format, since it gives the reader some theoretical knowledge and then lets them practice it; that is also needed if you want to really learn and pass the exams. I recommend that you take your time and use your lab to do the practice exercises thoroughly.
Both books have a lot of content, and if you really read and give it some thought you will learn a lot; but if you only read fast from cover to cover, you will probably not pick up that much new knowledge.

The overall impression is that these books are well written in language that is easy to understand. They have a lot of content to read through, but on the other hand, Exchange is a big product with a lot of features.

Keep on reading.

Friday, April 17, 2009

More Exchange 2010 resources

This link provides a lot of information about Exchange Server 2010

You will most likely bang your head against your keyboard a couple of times when playing around and trying to get things done.
The Exchange 2010 forums are a good place to get help and to help your peers.

Wednesday, April 15, 2009

Exchange 14, sorry. Exchange Server 2010

For those of you who have been waiting for the next version of Exchange Server, you can stop waiting and instead read about it and download it from
And yes, the name will be Exchange Server 2010.
There is so much cool new stuff in there, so start reading the help file and follow the link above for more info.

Direct download link is

Sunday, April 5, 2009

Exchange monitoring tools

Ever wanted to know what users are doing to your Exchange server? If you're a serious admin, you should want to know.
The most common approach is to use Performance Monitor to watch all kinds of statistics and see how much pressure your Exchange server is under. You can certainly see how many users there are and how much load they generate on your server, but you cannot see what individual users are doing to it, which is really useful when you need to troubleshoot, for example, performance issues.
Luckily, there are tools out there that monitor MAPI clients in real time (well, almost), and the best thing is that they are free of charge.

Microsoft has a tool called "Microsoft Exchange Server User Monitor", or ExMon for short.
It has been around for a couple of years. It can monitor Exchange 2000 and Exchange 2003, and in the latest version even Exchange 2007.
Read more about the tool on TechNet; the download link is

Unfortunately, there seems to be something wrong with the download and you will not get the latest version that works with Exchange 2007, but I can only guess that this little glitch will be corrected shortly.

Recently I came across an almost identical tool "ExInsight for Microsoft Exchange".

You can read more about it and download it from

Both tools have their advantages, so there is no number one; they simply have different feature sets.

Friday, April 3, 2009

AOL and OCS 2007 R2 edge problem

Some people have problems getting federation with AOL going when they use OCS 2007 R2. This problem is now resolved. Check out this article on how to configure your x64 Windows to make the federation work.

Wednesday, March 18, 2009

Rollup 7 for Exchange Server 2007 Service Pack 1 is on its way

In a few days Microsoft will release Rollup 7 for Exchange 2007 SP1.

See for more info about this rollup.

Wednesday, February 25, 2009

Big outlook patches

Maybe you have heard about all the great stuff for Outlook that is coming in Office 2007 Service Pack 2.
The good thing is that you don't have to wait for the service pack to get all the goodies; there are two hotfixes that do great things for your Outlook.

First there is the big one

and the smaller one

and some info what they contain for goodies

Read those KB's and you will not want to live without these fixes.

You really need these patches; they contain so many good things that you simply cannot do without them. Number 1 is the big improvement for large OST and PST files: the performance gain is really outstanding if you have a couple of GB in your files.
Number 2 is the changed behavior when Outlook closes, which gives you fewer corrupt files and a faster shutdown of Outlook.
Number 3 is all the good calendar bug fixes.

Thursday, February 19, 2009

Search for patches

Have you ever wanted an easy way to see all the patches for Exchange?
Well, there is a really good way of finding out what patches exist.

Fire up your browser and enter this URL
This searches for Exchange Server 2007 updates.

Here is the URL for Exchange Server 2003

and for OCS 2007

A good thing about this tool is that you can select multiple patches and then download them all at the same time, saved in a nice folder tree structure.

Saturday, February 7, 2009

Update Rollup 6 for Exchange Server 2007 SP1 to be released in a few days.

In a few days, Rollup 6 for Exchange 2007 Service Pack 1 will be released. It contains security fixes, so keep your eyes open for it and start patching your Exchange servers when it arrives.

Friday, January 30, 2009

Live Meeting Recording Converter

The Live Meeting client has a really nice feature that allows a meeting to be recorded for later viewing, either by people who could not attend or by people who want to review the meeting again.

The most common request I hear is that people want to save the recording in a more available format for easier offline viewing or distribution. Well, Microsoft listened and created a tool that converts the LM recording to a WMV file.

Simply download and install it.

When you have used the normal LM Recording Manager and processed the recording, start the LM Recording Converter tool and run a conversion from LM to WMV.

The tool is not perfect but it does its job fairly well and should be suitable for most people.

Saturday, January 17, 2009

Office Communicator cannot download address book

Recently I was contacted to help out with an OCS and Communicator problem. The scenario: a simple deployment with two standard OCS servers, one Front End and one Edge. The problem was that some of the clients could not download the address book from OCS when they connected from the Internet.

After some investigation I found that they used a certificate issued by their internal CA; not exactly best practice, but it should work.
Network Monitor revealed that Communicator would not even try to download the address book; it only contacted the URL and then said goodbye real quick. Well, I finally figured it out.
The certificate contains a 'CRL Distribution Points' extension with a URL that is only reachable from the internal network. When Communicator cannot reach the CRL distribution point stated in the certificate, it goes no further and simply does not download the address book.
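The client-side decision amounts to something like the following. This is an illustrative Python sketch of the behavior described above, not the actual Communicator code; the function names and the example CRL URL are hypothetical.

```python
# Sketch of the Communicator behavior observed above: if the certificate's
# CRL distribution point is unreachable, the client aborts before it ever
# attempts the address book download. All names here are illustrative.

def crl_reachable(crl_url, reachable_urls):
    """Pretend CRL fetch: succeeds only if the URL is reachable from
    the client's current network location."""
    return crl_url in reachable_urls

def download_address_book(cert_crl_url, reachable_urls):
    """Abort the address book download when the certificate's CRL
    distribution point cannot be contacted."""
    if not crl_reachable(cert_crl_url, reachable_urls):
        return "aborted: CRL distribution point unreachable"
    return "address book downloaded"

# Internal client: the internal-only CRL URL is reachable, so it works.
internal = download_address_book(
    "http://ca.corp.local/crl", {"http://ca.corp.local/crl"})

# External client: same certificate, but the CRL URL is not reachable.
external = download_address_book("http://ca.corp.local/crl", set())
```

This is why the symptom looks so abrupt on the wire: the client gives up before the address book URL is ever requested in earnest.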

What probably could have been done here is to make the CRL available on the Internet and change the path in the certificate (which means creating a new certificate). Not so simple in practice.

A simpler way is to change the client behavior so it does not bother with the CRL check. It is not the recommended way, but it will do the job.
Change the setting in Internet Explorer's advanced settings: uncheck the "Check for server certificate revocation" checkbox and you should be fine.

The best solution is to use a certificate from a commercial certificate authority that is trusted by most computers. With a publicly trusted certificate, federation and PIC/PIM can also be used if there is a need.

The good side is that the customer got the address book download working, and they will swap their own certificate for a publicly trusted one with reachable CRL paths and also enable federation with some of their partners.

Sunday, January 4, 2009

OWA authentication and its effect on OWA functionality

You're probably aware that you can configure different authentication methods for OWA (Outlook Web Access). But the different authentication mechanisms also alter some of OWA's behavior.
When you install an Exchange 2007 CAS, OWA is configured with forms-based authentication (FBA). This gives the end user an HTML form to fill in with username and password, along with some options that configure OWA's behavior.

The checkbox "Use Outlook Web Access Light" gives a light version of OWA with no ActiveX controls or other IE-only features. This is of course a less feature-rich version of OWA than the one you get with IE. The advantage is that it consumes fewer bytes on the wire and is therefore suitable on a low-bandwidth connection. OWA Light is also selected by default if you use a non-IE browser, and in that case it cannot be changed.

Now to the interesting part: the selection between public and private computer.
Clicking on the "show explanation" link doesn't give any good clues:

This is a public or shared computer. Select this option if you use Outlook Web Access on a public computer. Be sure to log off when you have finished using Outlook Web Access and close all windows to end your session.

This is a private computer. Select this option if you are the only person who uses this computer. Your server will allow a longer period of inactivity before logging you off.

OWA with forms-based authentication saves information in a cookie on the client computer. One thing stored in the cookie is the idle time before the user is automatically logged off. Selecting "public computer" gives the user an idle period of 15 minutes before the cookie expires and the user is logged off; the timeout period for "private computer" is 8 hours. The CAS uses several encryption keys and cycles them every half of the timeout interval, so the actual timeout is between 15 and 22.5 minutes for public computers and between 8 and 12 hours for private computers.

Timeout values can be changed and are configured with registry values under the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchange OWA

The value names are "PublicTimeout" and "PrivateTimeout"; both are DWORDs, and the numbers are in minutes.
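Because the encryption keys cycle every half of the configured interval, the effective logoff window stretches from the configured value up to one and a half times it. A minimal Python sketch of the arithmetic:

```python
def effective_timeout(configured_minutes):
    """OWA FBA cookie timeout: keys cycle every configured_minutes / 2,
    so a session actually expires somewhere between 1.0x and 1.5x the
    configured value, depending on where in the key cycle the logon fell."""
    return (configured_minutes, configured_minutes * 1.5)

# Defaults: PublicTimeout = 15 minutes, PrivateTimeout = 480 minutes (8 hours)
print(effective_timeout(15))    # public: 15 to 22.5 minutes
print(effective_timeout(480))   # private: 480 to 720 minutes (8 to 12 hours)
```

So raising PrivateTimeout to, say, 600 minutes really means sessions may survive up to 15 hours of idle time.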

Some useful links on the subject:

Other behavior that depends on the public/private computer selection is "File and Data Access".
With "File and Data Access", the administrator can configure which servers and file types are accessible, and how users interact with files, using the Allow, Block, or Force Save options. This gives the administrator very good control over how files are accessed through OWA. The good part is that all file access configuration in OWA is kept separate for public and private computers.
This gives the administrator the ability to have different OWA behavior depending on what the user selects during OWA logon. For example, if a user logs on with the public computer option and tries to download a document from a file server, the download is blocked by OWA; but if the user logs on with the private computer option selected, the file is successfully downloaded to the client.

The options that can be configured are: access to Windows file shares, access to SharePoint document libraries, WebReady document viewing, and direct file access.
You can also set which file types are allowed and which are not. Very handy and extremely useful; the only hard part is getting users to select the correct option, but I guess that comes down to information and education.
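The per-session behavior can be pictured as a simple policy lookup keyed by the logon choice. This is an illustrative Python sketch; the option names are made up for the example, and the real settings live in the OWA virtual directory configuration, not in code like this.

```python
# Hypothetical policy table: one set of file-access rules per logon choice,
# mirroring the example above (public blocks the download, private allows it).
FILE_ACCESS_POLICY = {
    "public":  {"windows_file_shares": "Block",
                "sharepoint_libraries": "Block",
                "direct_file_access": "Block",
                "webready_viewing": "Allow"},
    "private": {"windows_file_shares": "Allow",
                "sharepoint_libraries": "Allow",
                "direct_file_access": "Allow",
                "webready_viewing": "Allow"},
}

def file_action(session_type, feature):
    """Return the Allow/Block decision for a feature, based on the
    public/private choice the user made on the OWA logon form."""
    return FILE_ACCESS_POLICY[session_type][feature]
```

The point of the sketch: the same user, same mailbox, and same file can yield two different outcomes purely because of the checkbox chosen at logon.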

But what if you don't want to use forms-based authentication?
There are three other standard authentication mechanisms to choose from: Basic, Windows Integrated, and Digest authentication.
(See "Configuring Standard Authentication Methods for Outlook Web Access".)

With Basic authentication you will always be prompted for username and password.

Once logged on, OWA treats the session as a private computer with regard to attachment and data file access, but it will not use a cookie on the client to time out the session, and credentials will be cached by the browser until it is closed or the user clicks the Log Off link. Because the browser caches the user's credentials, a user can use OWA, point the browser at another website, then hit Back and re-enter OWA without typing any credentials. It is therefore important to educate users to close the browser or click the Log Off link to secure access to their mailbox.

With Digest authentication you get almost the same behavior as with Basic authentication, except that OWA does not know your credentials and cannot authenticate on your behalf when you try to open a file share or a SharePoint document library. There is one exception where this works anyway: when you install the CAS and Mailbox roles on the same server.

With Windows Integrated authentication you get the same behavior as with Digest authentication, except that if your browser is configured to use Windows Integrated authentication for your CAS server website, you will be logged on automatically as the same user you are logged on to Windows with. This gives users seamless access to OWA without typing in credentials. Windows Integrated is the most secure authentication scheme, but hopefully you are forcing SSL on the CAS website anyway, so using an authentication method other than Windows Integrated should be no problem.
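The differences described above can be summarized as data. This is a sketch of the behaviors as stated in this post, expressed as a Python table; the key names are my own shorthand, not Exchange settings.

```python
# Summary of the OWA authentication behaviors described above. Each entry
# records: whether the user is prompted at every logon, whether the session
# is timed out via the FBA cookie, and whether OWA can authenticate to file
# shares / SharePoint on the user's behalf.
AUTH_BEHAVIOR = {
    "forms_based": {"always_prompts": True,  "cookie_timeout": True,
                    "file_share_auth": True},
    "basic":       {"always_prompts": True,  "cookie_timeout": False,
                    "file_share_auth": True},   # browser caches credentials
    "digest":      {"always_prompts": True,  "cookie_timeout": False,
                    "file_share_auth": False},  # unless CAS + Mailbox colocated
    "windows":     {"always_prompts": False, "cookie_timeout": False,
                    "file_share_auth": False},  # can log the user on silently
}
```

Laid out this way, it is easy to see that only forms-based authentication gives you the idle-timeout protection; the other three rely on the user closing the browser or clicking Log Off.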

So be careful when you set the authentication scheme on your CAS; more things change than you might first realize.

If you also have an ISA Server in front of your CAS, both the possibilities and the complexity increase, but that's another story I'll save for later.