The biggest challenge facing administrators with Exchange backups today is time: both the time needed to perform the backup and the time needed to complete a restore. Users want to keep more data in their mailboxes, but in order to meet the SLA commitment, quotas must be set on those mailboxes. The end result? Users get frustrated because they constantly have to clean up their mailboxes to stay within a preset quota limit.
This situation is most noticeable if you use the traditional streaming backup API, which has been in Exchange since the beginning and is still there today. Since it is no longer under active development, however, the recommended backup method is to use the Volume Shadow Copy Service (VSS). http://msexchangeteam.com/archive/2008/08/25/449684.aspx
With the streaming backup API, every bit and byte of the Exchange database is transferred to the backup software during a full backup. VSS operates a little differently: instead of streaming the database file, it reads changed blocks from disk and hands them over to the backup software, which saves a lot of time for differential and incremental backups. Depending on how you configure the backup software to handle VSS, it can perform incremental backups while keeping track of which blocks belong where, and from that an administrator can build up a complete backup when a restore is needed. The benefit is that the backups themselves stay small; only the restore is large and takes a long time.
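Before trusting VSS-based backups, it is worth checking that the Exchange VSS writer is registered and healthy on the mailbox server. A minimal check, run from an elevated prompt on the server (the writer normally shows up as "Microsoft Exchange Writer"):

    # List all registered VSS writers; the Exchange writer should report
    # State: Stable and Last error: No error before a backup is started
    vssadmin list writers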
When you back up with the streaming API, Exchange automatically performs a consistency check by reading every byte of the database file. With VSS, however, you must tell the backup software to run the consistency check; otherwise you risk backing up a corrupted database file.
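If your backup application does not offer such a check, you can verify the page checksums of the backed-up database yourself with Eseutil in checksum mode. A hedged sketch, assuming the backup set has been copied or exposed under E:\Backup (the paths and file names here are examples only):

    # Read-only page checksum verification of the backed-up database file
    & "C:\Program Files\Microsoft\Exchange Server\Bin\eseutil.exe" /K "E:\Backup\First Storage Group\Mailbox Database.edb"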
Another option is to let your backup software integrate with Exchange and use the snapshot functionality built into the storage system to handle copies of LUNs. In this case, the backup only takes a few seconds as far as the Exchange server is concerned, although you still have the time-consuming task of transferring the files on the backup LUN to tape or to other disks. This solution depends on whether your storage is equipped with this functionality and whether your backup software supports the integration.
No matter what backup method you use, all the backup data must be stored somewhere.
What if there were a way to spread your Exchange data across multiple copies that are automatically kept in sync? With Exchange 2007, Cluster Continuous Replication (CCR) together with Standby Continuous Replication (SCR) makes this possible. CCR is built on a Windows failover cluster, and data is replicated between the two nodes automatically. This gives you two copies of the Exchange data, and by adding an SCR target you can introduce a third copy.
What happens if you need more? Simply add additional SCR copies. SCR does not replay the copied transaction log files immediately; it waits for a configurable "lag" before replaying them into the Exchange database. You can therefore run CCR with instant replay of transaction log files so that two copies of the Exchange data are always up to date, then keep one SCR copy lagging a couple of hours behind and additional SCR copies lagging a couple of days behind.
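In Exchange 2007 SP1 this is configured per storage group from the Exchange Management Shell. A hedged sketch, where the server and storage group names are invented and the lag values are only examples (ReplayLagTime uses the Days.Hours:Minutes:Seconds format):

    # SCR copy with no extra replay lag (SCR always holds back a small number of logs)
    Enable-StorageGroupCopy -Identity "MBX01\First Storage Group" -StandbyMachine SCR-SITE1 -ReplayLagTime 0.0:0:0

    # Second SCR copy that waits two days before replaying the copied logs
    Enable-StorageGroupCopy -Identity "MBX01\First Storage Group" -StandbyMachine SCR-SITE2 -ReplayLagTime 2.0:0:0

Note that CCR replication between the two cluster nodes is always continuous; the replay lag only applies to the SCR targets.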
http://msexchangeteam.com/archive/2006/08/09/428642.aspx http://msexchangeteam.com/archive/2007/07/19/446454.aspx
A common scenario is a user who has accidentally deleted something from his or her mailbox. The deletion has already replicated to both CCR nodes, but it may not yet have been replayed on one of your SCR copies. In that case you can still recover the deleted mail from the SCR copy by stopping replication, replaying the transaction logs into the SCR database, mounting the database, and retrieving the data.
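The activation itself is done from the SCR target with a couple of Exchange Management Shell commands. A simplified sketch with hypothetical names, leaving out the database portability or recovery storage group steps that a real recovery would also involve:

    # Run on the SCR target: stop replication and replay the outstanding logs
    Restore-StorageGroupCopy -Identity "MBX01\First Storage Group" -StandbyMachine SCR-SITE1

    # Once a database object points at the recovered files, mount it and
    # retrieve the deleted items from there
    Mount-Database -Identity "RECOVERY01\First Storage Group\Mailbox Database"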
Another cumbersome task is managing the huge amount of data in Exchange. This data generates a lot of I/O against the disk subsystem holding the Exchange databases and transaction log files, which is another reason to keep the databases small.
With Exchange 2007, I/O drops by roughly 70 percent compared with Exchange 2003, so you can let the databases grow without spending a fortune on the storage system. http://msexchangeteam.com/archive/2008/07/10/449188.aspx
So what about the idea of running Exchange 2007 in a CCR and SCR configuration so that you always have multiple copies of the Exchange data around? You let users have large mailboxes, and perhaps you switch your storage from an expensive SAN, costly both in money and in technology (specially trained personnel, Fibre Channel infrastructure, and so on), to a cheaper solution such as DAS or iSCSI.
The benefits of this switch include cheaper storage, which means users no longer have to delete items from their mailboxes or move them into PST files. With large mailboxes, the need for restores is minimized, and therefore so is the need for backups.
But there can still be a need for backup and restore if a database or disk fails. Should that happen, you already have multiple copies of the Exchange data, and it is a very simple process to have the other cluster node take over as your Exchange server with a healthy set of data; in most cases this happens automatically. You can also manually activate the SCR copy with its own set of data.
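Both the health check and a planned handover are Exchange Management Shell operations. A hedged example with made-up server names:

    # Verify that the database copies are healthy and replication is keeping up
    Get-StorageGroupCopyStatus -Server MBX-CCR

    # Planned handover of the clustered mailbox server to the other CCR node
    Move-ClusteredMailboxServer -Identity MBX-CCR -TargetMachine NODE2 -MoveComment "Planned maintenance"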
All of this applies to Exchange 2007, but Exchange 2010 will address the problem further with its new high-availability feature, the Database Availability Group (DAG). http://technet.microsoft.com/en-us/library/dd633496(EXCHG.140).aspx
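To give a flavor of the change: in Exchange 2010, an extra (optionally lagged) copy of a database is added to another DAG member with a single command, without a separate cluster or storage group. A rough sketch with hypothetical names:

    # Exchange 2010: add a lagged copy of a mailbox database to another DAG member
    Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer MBX02 -ReplayLagTime 2.0:0:0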
So the question is: why do you back up your Exchange data at all? Some may say that they need to store a copy of all data offsite. Although that is a good and valid reason, it can also be met by placing an SCR copy somewhere else. It could even be on the other side of the world!
I am not saying that running without backups will suit everyone, but chances are you will discover that backups are not as necessary as you initially thought.