Exadata 3 as an In-Memory Database (IMDB)

Larry Ellison lecturing during Oracle OpenWorld, San Francisco 2010 (Photo credit: Wikipedia)

Wikipedia defines computer memory as:

 

In computing, memory refers to the physical devices used to store programs (sequences of instructions) or data (e.g. program state information) on a temporary or permanent basis for use in a computer or other digital electronic device. The term primary memory is used for the information in physical systems which are fast (i.e. RAM), as a distinction from secondary memory, which are physical devices for program and data storage which are slow to access but offer higher memory capacity. Primary memory stored on secondary memory is called “virtual memory“.

 

The term “storage” is often (but not always) used in separate computers of traditional secondary memory such as tape, magnetic disks and optical discs (CD-ROM and DVD-ROM). The term “memory” is often (but not always) associated with addressable semiconductor memory, i.e. integrated circuits consisting of silicon-based transistors, used for example as primary memory but also other purposes in computers and other digital electronic devices.

 

To a computer program like a DBMS, memory is a resource allocated with calls like malloc() and calloc(). Note that these calls allocate primary memory in the sense of the definition above. From this you should conclude that an in-memory DBMS (IMDB) is a system that puts all of its data into memory allocated by the database program.
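
To make that concrete, here is a minimal C sketch of what "in memory" means to a program; the row layout and sizes are made up for illustration, not taken from any real DBMS. Once malloc() returns, the data is reached with ordinary loads through a pointer, with no I/O call anywhere in the path.

    #include <stdio.h>
    #include <stdlib.h>

    /* A hypothetical row layout -- stands in for whatever a real DBMS stores. */
    typedef struct {
        int  id;
        long value;
    } row_t;

    int main(void) {
        size_t nrows = 1000000;

        /* Primary memory: directly addressable space handed to the program. */
        row_t *table = malloc(nrows * sizeof *table);
        if (table == NULL) return 1;

        for (size_t i = 0; i < nrows; i++) {
            table[i].id = (int)i;
            table[i].value = (long)i * 2;
        }

        /* A "scan" is just pointer arithmetic and loads -- no read() call,
           no device in the path. That is what in-memory means to the code. */
        long sum = 0;
        for (size_t i = 0; i < nrows; i++)
            sum += table[i].value;

        printf("sum = %ld\n", sum);
        free(table);
        return 0;
    }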

 

In its announcements this week Oracle states (here) that Exadata 3 is an in-memory database machine, and Larry Ellison said: “Everything is in memory. All of your databases are in-memory. You virtually never use your disk drives. Disk drives are becoming passé. They’re good at storing images and a lot of data we don’t access very often.”

 

But Oracle’s definition of in-memory includes SSD devices that are not directly addressable by the DBMS. In fact, Exadata 3 uses 22TB of SSD and only 4TB of DRAM. The SSDs are a cache sitting between the DBMS and disk storage. By Wikipedia’s definition they are storage, not memory.
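
The same point shows up in code. In the rough sketch below (the file path and block size are assumptions for illustration), a block that happens to be sitting in a flash cache still reaches the DBMS through an I/O call such as pread(); the cache makes the call return sooner, but it never turns into a pointer dereference into the program’s address space.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Assumed, illustrative path: some datafile managed by the DBMS.
       Whether the block is served from disk or from a flash cache in
       front of the disk, the program's view is the same system call. */
    #define DATAFILE   "/data/example.dbf"
    #define BLOCK_SIZE 8192

    int main(void) {
        int fd = open(DATAFILE, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        char *buf = malloc(BLOCK_SIZE);
        if (buf == NULL) { close(fd); return 1; }

        /* Read block 42: an I/O request that leaves the process, crosses
           the storage stack, and copies bytes back into primary memory.
           A flash cache shortens the wait but does not remove the trip. */
        ssize_t n = pread(fd, buf, BLOCK_SIZE, (off_t)42 * BLOCK_SIZE);
        if (n < 0) perror("pread");
        else printf("read %zd bytes of block 42\n", n);

        free(buf);
        close(fd);
        return 0;
    }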

 

Exadata 3 is not an in-memory database machine. It takes more than lots of hardware to make a DBMS an in-memory DBMS.

 

Oracle is spewing marketing, not architecture.

 

5 thoughts on “Exadata 3 as an In-Memory Database (IMDB)”

  1. Will the impact on performance be a function of the latency of the SSD and the bottleneck issues of the controller? I suppose the estimated impact will vary based on configuration, but any ideas about how an SSD-based system compares to a DBMS that accesses all data in addressable space?

    1. I believe that the Exadata 3 implementation talks to the SSD via PCI-E… no disk controller. This is a nice advance provided by Fujitsu. But the Oracle hype suggests a 10X-20X performance boost… and that includes the boost from faster processors… so there has to be a bottleneck somewhere, Nick. If you assume that Oracle has hyped the numbers, then 10X plus-or-minus is not enough given the speed of SSD.

      If the SSD cache is on the storage side, as opposed to the RAC side, then the bottleneck is described here. Watch both videos. I love this link and promote it whenever I can. Anyone considering Exadata or competing with Exadata should see these… they speak the truth.

      1. Rob – Thanks for making an important distinction. In-memory really does mean the chips that sit right next to the CPUs. One further note – checking the F40 spec, it seems the PCI-E flash card used for the Exadata flash cache has an LSI SAS controller which connects to four SSD drives. There is still a disk controller between the SSD and the CPUs, so we are back to the original point that the SSD is a buffer between the CPUs and the disks. Also, reviewing the storage node specs for X3 shows that the SSD, aka Flash Cache, is on the storage side as opposed to the RAC side. So now we have to traverse a RAC-side bus to the Infiniband controller, across the Infiniband crossbar, onto the storage-side Infiniband controller, then (assume DMA) the SSD controller on the F40, and finally you have access to the bits you need. Last I checked you couldn’t malloc() those bits!

  2. Hey, Nick and Rob. Nice to see names from the past! Rob, a couple of thoughts. Oracle actually makes a distinction between flash cache (PCI-connected) and SSD, which involves a controller. The implication is that flash cache is much faster. But it is not that simple. However you deploy flash technology, it is only part of a larger systems architecture. Note that Exadata’s flash cache sits on the Exadata cells with most of the database logic/functionality sitting across the Infiniband in the database nodes. Furthermore, how many IOPs can the processors in the Exadata cells consume? Has I/O been the bottleneck, even with the flash cache they already had on the Exadata system? BTW, I agree that this is a convenient blurring of definitions for the sake of Oracle marketing. But does that tactic surprise anyone? I would hope that actual customers and prospects are not that gullible.

    1. Hi Dan… nice to hear from you. You are right to point out that I said “SSD” when I meant “flash” below.

      Have a look at the link in my response to Nick… you’ll see in those videos the impact of the split between the database engine and the storage. For lots of good reasons database vendors are pushing query processing and analytics closer to the data. Exadata does very little processing on the data side… and the infiniband bridge between the two sides is a bottleneck when data has to cross (and data always has to cross).

