TL;DR: Storage technologies that promise 100-percent availability will eventually make conventional backup and restore operations obsolete -- but we're not there yet. Until the age of zero backups and continuous data protection truly arrives, we'll continue to rely on traditional database recovery techniques when disaster strikes.
In an ideal world, a database administrator who learns that the host server has conked out would simply flip a switch and the database would be ready to access, completely updated, on an alternate server. Users wouldn't even notice.
Of course, you're thinking, "That must be the world where lollipops grow on trees and airplane seats are the size of overstuffed recliners." No such automatic failure recovery exists -- yet. But innovations in data storage are inching us closer to a time when conventional backups go the way of dodo birds and buggy whips.
Until such concepts as zero backups and continuous data protection truly arrive, DBAs will rely on tried-and-true methods for restoring lost databases. For example, the Oracle support site provides step-by-step instructions, "Avoiding and Recovering From Server Failure," for WebLogic Server clusters.
The article explains how to start a Managed Server that can't connect to an Administration Server on startup. In what Oracle calls Managed Server Independence mode, the server starts by reading its locally cached configuration data in the "config" directory rather than retrieving its configuration from the Administration Server. Similarly, MSSQLTips.com's Matteo Lorini describes in detail how to rebuild SQL Server on different hardware after a failure.
Inching inexorably toward truly automatic backups and restores
The concept of zero backup takes redundancy to the nth degree. SearchDataBackup's September 2014 publication Improve Data Protection Through Zero Backups (registration required) defines zero backup as the creation of redundant copies of data. Rather than trying to restore a database to a previous state, the goal is to keep the database running continuously -- even when a hardware or software component fails. If the system replicates a corrupted file, the data can be rolled back to a specific time prior to the corruption. Snapshots are often used in conjunction with replication to ensure a sufficient number of potential rollback points.
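The snapshot-plus-replication idea above can be sketched in a few lines of Python. This is a toy model, not any vendor's API: a "snapshot" here is just a timestamped deep copy of a key-value store, and `rollback_before` restores the newest snapshot taken before the moment corruption was replicated.

```python
import bisect
from copy import deepcopy


class SnapshotStore:
    """Toy key-value store with point-in-time snapshots.

    Illustrative only: real systems snapshot at the block or volume
    level; here a snapshot is simply a timestamped deep copy.
    """

    def __init__(self):
        self.data = {}
        self._snapshots = []  # sorted list of (timestamp, copy_of_data)

    def write(self, key, value):
        self.data[key] = value

    def snapshot(self, timestamp):
        """Record a rollback point."""
        bisect.insort(self._snapshots, (timestamp, deepcopy(self.data)))

    def rollback_before(self, timestamp):
        """Restore the newest snapshot taken strictly before `timestamp`
        (e.g., the moment a corrupted file was replicated)."""
        idx = bisect.bisect_left(self._snapshots, (timestamp,)) - 1
        if idx < 0:
            raise ValueError("no snapshot predates the corruption")
        self.data = deepcopy(self._snapshots[idx][1])


store = SnapshotStore()
store.write("orders", [1, 2, 3])
store.snapshot(timestamp=100)
store.write("orders", "CORRUPTED")   # corruption replicated at t=150
store.rollback_before(timestamp=150)
print(store.data["orders"])          # [1, 2, 3]
```

The more snapshots you keep, the more potential rollback points you have -- which is exactly why snapshots are paired with replication in zero-backup designs.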
Snapshots are also a key aspect of continuous data protection systems, which TechTarget's Brien Posey explains in an October 2014 article, "Data backup strategy from a disaster recovery perspective." Continuous backup differs from traditional backup -- to tape drives or other removable media -- primarily by creating a single backup copy with multiple rollback points for each data element.
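The distinction drawn above -- a single backup copy with multiple rollback points per data element, rather than a pile of full tapes -- can be illustrated with a minimal Python sketch. The class and method names are hypothetical; a real continuous data protection product journals changes at the block level, not as Python objects.

```python
class CDPJournal:
    """Sketch of continuous data protection: every change to a data
    element is journaled as it happens, so any point in time can be
    restored from one backup copy. Illustrative only, not a real API.
    """

    def __init__(self):
        self._journal = {}  # key -> list of (timestamp, value), in order

    def record(self, key, value, timestamp):
        """Capture a change as it occurs (continuous, not scheduled)."""
        self._journal.setdefault(key, []).append((timestamp, value))

    def restore(self, key, as_of):
        """Return the value `key` held at time `as_of`."""
        best = None
        for ts, value in self._journal.get(key, []):
            if ts <= as_of:
                best = value
        if best is None:
            raise KeyError(f"no version of {key!r} at or before t={as_of}")
        return best


journal = CDPJournal()
journal.record("invoice.txt", "v1", timestamp=10)
journal.record("invoice.txt", "v2", timestamp=20)
journal.record("invoice.txt", "garbage", timestamp=30)  # corruption
print(journal.restore("invoice.txt", as_of=25))          # v2
```

Because each element carries its full change history, the restore granularity is any recorded instant -- unlike tape, where you can only go back to the last scheduled backup.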
To prevent the backup server and backup target (storage array) from creating a single point of failure, continuous data protection systems usually replicate their contents to a secondary storage device, whether another storage array, a cloud storage service, or removable media.
More and more organizations are choosing the BitCan cloud storage service as the replication component of their continuous data protection strategy. BitCan provides automatic backups of heterogeneous MySQL and MongoDB databases, as well as Unix/Linux systems and files. You can set and schedule your backups in seconds -- with no client-side installs or plug-ins.
BitCan encrypts your data at the communication and storage layers. You schedule backups and restore your data using a simple point-and-click interface. You pay for only the storage space you need, and your data is kept safe on Amazon's S3 servers. Visit the BitCan site for a free 30-day trial account.