There is a persistent myth, like a persistent cough, that in-memory databases lose data when a hardware failure takes down a node because memory is volatile and non-persistent. This myth is marketing, not architecture.
Most RDBMS products, including Oracle, TimesTen, and HANA, have three layers where data exists: in memory (think SGA for Oracle), in the log, and on disk. The normal process goes like this:
1. A write transaction arrives.
2. The transaction is written to the log file and committed… this is a very quick process with one sequential I/O… quicker still if the log file is on an SSD device.
3. The transaction updates the in-memory layer.
4. After some time passes, the database saves the in-memory data to disk.
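The write path above can be sketched in a few lines of Python. This is a toy illustration of write-ahead logging, not any vendor's actual format; the class name `MiniWAL` and the JSON log layout are made up for the example:

```python
import json
import os
import tempfile

class MiniWAL:
    """Toy write-ahead-logging engine: commit = one sequential log write."""

    def __init__(self, log_path):
        self.log_path = log_path
        self.memory = {}  # the in-memory layer (think SGA)

    def commit(self, key, value):
        # Step 2: append the change to the log and force it to stable
        # storage; the transaction is durable once fsync returns.
        with open(self.log_path, "a") as log:
            log.write(json.dumps({"key": key, "value": value}) + "\n")
            log.flush()
            os.fsync(log.fileno())
        # Step 3: apply the change to the in-memory layer.
        self.memory[key] = value

    def checkpoint(self, data_path):
        # Step 4: save the in-memory data to disk.
        with open(data_path, "w") as f:
            json.dump(self.memory, f)

# Demo: one committed write lands in both the log and memory.
log_file = os.path.join(tempfile.mkdtemp(), "wal.log")
db = MiniWAL(log_file)
db.commit("account:1", 100)
```

Note that the commit returns only after the single sequential `fsync` of the log, which is why the log write, not the in-memory update or the checkpoint, sets the floor on commit latency.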
Recovery for these databases is easy to understand:
- If a hardware failure occurs after #1 but before #2, the transaction has not been committed and is lost.
- If a hardware failure occurs after #2 but before #3, the transaction is committed and the database is rebuilt from the log file when the node restarts.
- If a hardware failure occurs after #3 but before #4, the same process occurs… the database is rebuilt from the log file when the node restarts.
- If a hardware failure occurs after #4, the database is rebuilt from the disk copy.
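The recovery cases above reduce to one routine: load the last disk checkpoint if one exists, then replay the committed log records on top of it. A toy sketch under the same assumed JSON log format as before (real engines track log positions and truncate the log at checkpoint; this version simply replays everything, which is still correct under last-writer-wins):

```python
import json
import os
import tempfile

def recover(data_path, log_path):
    """Rebuild the in-memory layer after a crash from the disk
    checkpoint (if any) plus the committed log records."""
    memory = {}
    if os.path.exists(data_path):      # covers a failure after #4
        with open(data_path) as f:
            memory = json.load(f)
    if os.path.exists(log_path):       # covers a failure after #2 or #3
        with open(log_path) as f:
            for line in f:
                record = json.loads(line)
                memory[record["key"]] = record["value"]
    return memory

# Demo: a crash after commit (#2) but before any checkpoint (#4)
# loses nothing, because the log record survives on disk.
workdir = tempfile.mkdtemp()
log_path = os.path.join(workdir, "wal.log")
data_path = os.path.join(workdir, "data.json")
with open(log_path, "w") as f:
    f.write(json.dumps({"key": "account:1", "value": 100}) + "\n")
# ...node crashes here, before the checkpoint is written...
rebuilt = recover(data_path, log_path)
```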
SQLFire uses a different approach (from here):
“Unlike traditional distributed databases, SQLFire does not use write-ahead logging for transaction recovery in case the commit fails during replication or redundant updates to one or more members. The most likely failure scenario is one where the member is unhealthy and gets forced out of the distributed system, guaranteeing the consistency of the data. When the failed member comes back online, it automatically recovers the replicated/redundant data set and establishes coherency with the other members. If all copies of some data go down before the commit is issued, then this condition is detected using the group membership system, and the transaction is rolled back automatically on all members.”
Redundant in-memory data optimizes transaction throughput but requires twice the memory. There are options to persist data to disk… but these options are significantly slower than the write-ahead logging used by TimesTen and HANA (and Oracle and Postgres, and …).
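The replication-based approach in the quote can be sketched as well. This is a loose toy model of the idea, not SQLFire's implementation: durability comes from a second in-memory copy rather than a log write, and if every copy is down before the commit, the transaction is rolled back:

```python
class RedundantStore:
    """Toy replication-based durability: no write-ahead log; a commit
    succeeds only if at least one healthy redundant in-memory copy
    applies it, otherwise it is rolled back."""

    def __init__(self, num_copies=2):
        # Each redundant copy doubles the memory footprint of the data.
        self.copies = [dict() for _ in range(num_copies)]
        self.healthy = [True] * num_copies

    def commit(self, key, value):
        live = [c for c, ok in zip(self.copies, self.healthy) if ok]
        if not live:
            # All copies went down before the commit: the group
            # membership system would detect this and roll back.
            raise RuntimeError("all copies down; transaction rolled back")
        for copy in live:  # synchronous update of every redundant copy
            copy[key] = value

# Demo: a commit survives the loss of one member.
store = RedundantStore(num_copies=2)
store.commit("account:1", 100)
store.healthy[0] = False           # member 0 is forced out
store.commit("account:1", 200)     # still durable on member 1
```

The trade-off the post describes falls out directly: commits avoid disk I/O entirely, but the data set must fit in memory twice, and durability is only as good as the independence of the members.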
The bottom line: IMDBs are designed in the same manner as other, disk-based DBMSs. They guarantee that committed data is safe… every time.
See here for how these DBMSs compare when a BI/analytic workload is applied.
One thought on “Data Recovery in HANA, TimesTen, and SQLFire”
Absolutely agree that there are some persistent myths and misunderstandings about In-Memory Databases. They were bad before Oracle decided to reinvent the term, but now they are even worse: http://flashdba.com/2012/10/10/in-memory-databases-part2/
I hope more people will write about their experiences in order that the confusion can turn to clarity!