Backup software vendors are fond of saying, “It’s all about the recovery.” It’s true that recovery is a critical part of data protection. No one is watching over your shoulder while you run a backup, but you can be sure everyone is watching during a recovery, which is why it is so important that things go well. And to be confident that things will go well, you have to know from the beginning that you are recovering viable data. Thus, a critical step in the backup process is verifying the data.

Yet this is one area where legacy backup solutions often fall short, because the methods they use are not always reliable. Legacy backups typically verify data by performing a block-by-block comparison of what is on the backup media against what is on the application server. This granular check takes so agonizingly long that some data centers simply turn off the verification feature. At that point, the IT manager is operating on a wing and a prayer, hoping that if disaster strikes things will turn out okay, with no way of knowing for sure.
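To make the cost concrete, here is a minimal Python sketch of what such a block-by-block comparison amounts to, assuming the backup is a raw image of the source volume; the paths, block size, and function name are illustrative rather than drawn from any particular product. Because every block must be read from both the backup media and the live server, the runtime scales with the size of the volume rather than with how much data actually changed, which is exactly why the check becomes too slow to leave on.

```python
# Illustrative sketch only: assumes the backup is a raw image of the
# source volume. Block size and paths are arbitrary choices, not taken
# from any specific backup product.
BLOCK_SIZE = 64 * 1024  # 64 KB per read; real products vary

def verify_block_by_block(source_path: str, backup_path: str) -> bool:
    """Compare the backup image to the live source, one block at a time."""
    with open(source_path, "rb") as src, open(backup_path, "rb") as bak:
        block = 0
        while True:
            src_chunk = src.read(BLOCK_SIZE)
            bak_chunk = bak.read(BLOCK_SIZE)
            if src_chunk != bak_chunk:
                print(f"Mismatch at block {block}")
                return False
            if not src_chunk:  # both streams ended together: images match
                return True
            block += 1
```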
Even if an organization does find time to run the verification checks, those checks do not guarantee a foolproof recovery. Legacy systems only check that the backup is complete; they do not check that the data being backed up is viable. If your source block contains bad data, then you are simply backing up bad data, and verification will only tell you that there is a 100% chance of your being able to recover corrupted data.

Another form of verification is to actually recover the data and see if it works. Some companies have a policy of taking their data to a lab and performing a restore of the critical applications as a complete test of whether recovery works and they are prepared for disaster. The testing is laborious: it requires locating a test server and making sure it is properly configured and ready for data to be restored. Even though virtualized servers ease the process of setting up a server, the test itself still takes hours. Because of the time and effort involved, these tests become major events, which many data centers undertake only every quarter or every year. In the meantime, if an error creeps into the data protection process, there is no way of finding out until the next testing event, which could be months away. This type of testing comes at significant cost and work stoppage; in short, it becomes an event that no one on the IT team looks forward to.

Making sure the data is recoverable under any and all circumstances is the single biggest challenge for IT managers, which is why many data centers are starting to turn away from legacy systems and look to next-generation backup and recovery solutions. Next-generation data testing allows servers to be exported as virtual instances, or virtual machines (VMs), a process called “creating a virtual standby.” With the click of a mouse, a user can create a VM, restore to it, and then test it to make sure a future recovery will perform properly. This process takes minutes instead of the days involved in legacy testing methods. Problems or changes to the environment that might cause a recovery failure can be caught and corrected quickly, while those changes are still fresh in everyone’s minds. The end result is day-to-day confidence that the data is being protected.
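As a rough illustration of that create-restore-test loop, here is a hypothetical Python sketch. Every helper in it is a stand-in for whatever interface a given backup platform actually exposes; no specific product’s API is assumed.

```python
# Hypothetical sketch of automated recovery testing via a virtual
# standby. All helpers below are placeholders, not a real product API.

def create_virtual_standby(server_name: str) -> str:
    """Stand-in: export the protected server as a VM; return a VM ID."""
    return f"standby-{server_name}"

def restore_recovery_point(vm_id: str, recovery_point: str) -> None:
    """Stand-in: apply the chosen recovery point to the standby VM."""
    print(f"Restoring {recovery_point} to {vm_id}")

def run_health_checks(vm_id: str) -> bool:
    """Stand-in: boot the VM and run application-level checks
    (services start, databases mount, test queries succeed)."""
    print(f"Running health checks on {vm_id}")
    return True

def verify_recovery(server_name: str, recovery_point: str) -> bool:
    """Restore into a throwaway VM and confirm the result actually works."""
    vm_id = create_virtual_standby(server_name)
    restore_recovery_point(vm_id, recovery_point)
    return run_health_checks(vm_id)

if __name__ == "__main__":
    if verify_recovery("sql-server-01", "nightly-latest"):
        print("Recovery verified.")
    else:
        print("Recovery FAILED -- investigate before disaster strikes.")
```

Because a routine like this can run unattended after every backup, a failed recovery surfaces the same day instead of at the next quarterly test.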
Learn more about next-generation backup and recovery solutions: download a free 14-day trial of AppAssure software.