
This post will tie up some loose ends around columnar processing and consider some other players I left out. Part 1 and Part 2 of this series may be found here and here, respectively.
Other DBMS
Sybase IQ
How could I forget Sybase IQ? IQ is a mature Level 3 column store with vector optimizations… and several other advanced index optimizations.
See:
- SAP Community Network: SAP Sybase IQ Indexes and Indexing Techniques
- Sybase IQ 15.3 Technical Details & Pricing/Packaging
Infobright
Infobright is a Level 1 column store. That is, they use column orientation to achieve compression but then store multiple columns together in a data pack. The cool thing is that they capture detailed statistics for each data pack, which allows them to answer some queries from the stats alone… and to retrieve and decompress only the packs that seem relevant to the query. It is sort of like IBM’s skip processing… only with more intelligent skipping.
See:
- The Database Revolution: Welcome to the Machine: How InfoBright Harnesses Big Data
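The data-pack statistics idea can be sketched in a few lines. This is my own illustration, not Infobright’s actual internals: the pack size, field names, and query are all hypothetical, but the mechanics match the description above, with each pack carrying min/max/sum stats so some packs are answered from stats and others skipped entirely.

```python
# Hypothetical sketch of data-pack statistics (names and structures are
# illustrative, not Infobright's real format). Each "pack" of column values
# carries min/max/sum stats so queries can be answered, or packs skipped,
# without touching the pack data itself.

from dataclasses import dataclass
from typing import List, Tuple

PACK_SIZE = 4  # tiny for illustration; real packs hold tens of thousands of values


@dataclass
class DataPack:
    values: List[int]   # stands in for the compressed pack
    lo: int             # min of the pack
    hi: int             # max of the pack
    total: int          # sum of the pack


def build_packs(column: List[int]) -> List[DataPack]:
    return [DataPack(chunk, min(chunk), max(chunk), sum(chunk))
            for chunk in (column[i:i + PACK_SIZE]
                          for i in range(0, len(column), PACK_SIZE))]


def count_greater_than(packs: List[DataPack], threshold: int) -> Tuple[int, int]:
    """Count values > threshold, opening pack data only when the stats
    are inconclusive (the threshold falls inside the pack's range)."""
    count, packs_opened = 0, 0
    for p in packs:
        if p.lo > threshold:        # whole pack qualifies: answered from stats
            count += len(p.values)
        elif p.hi <= threshold:     # whole pack disqualified: skipped
            continue
        else:                       # must open and scan the pack
            packs_opened += 1
            count += sum(1 for v in p.values if v > threshold)
    return count, packs_opened
```

With three packs whose ranges are [1–4], [10–13], and [5–25], a `> 8` predicate skips the first, counts the second from stats, and opens only the third.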
SQL Server xVelocity
SQL Server provides a column-oriented index over base tables. This is a non-updateable redundant data structure, not a columnar DBMS… but it acts like a Level 2 column store for query processing. That is, only the columns required for a query are projected/read.
See:
- MSBI xVelocity Columnstore Index Intro
- SearchSQL Server xVelocity Columnstore Indexes in SQL Server 2012
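The “only the columns required are read” point can be made concrete with a toy column-wise layout. This is my own sketch, not SQL Server’s storage format; the table, column names, and query shape are invented for illustration.

```python
# Minimal sketch (my own illustration, not columnstore internals) of why
# column orientation reads less data: storage is column -> list of values,
# and a query touches only the columns it names.

table = {
    "order_id": [1, 2, 3, 4],
    "customer": ["a", "b", "a", "c"],
    "amount":   [100, 250, 75, 300],
    "notes":    ["..."] * 4,   # wide column an analytic query never reads
}


def project_sum(table, filter_col, predicate, sum_col):
    """SELECT SUM(sum_col) WHERE predicate(filter_col):
    only two of the four columns are ever touched."""
    keep = [i for i, v in enumerate(table[filter_col]) if predicate(v)]
    return sum(table[sum_col][i] for i in keep)
```

Summing `amount` for customer “a” reads the `customer` and `amount` columns and never pulls `order_id` or the wide `notes` column off storage.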
Vectorwise
Vectorwise uses PAX and is a Level 1 column store in this regard. It provides a MINMAX index that performs the same function as a zone map or IBM’s skip processing. It contains a fundamentally different execution engine than other PAX systems… that is, it is not row/tuple-based… but it is not a columnar engine either. Although it sounds odd, I believe that the Vectorwise architecture has the advantages of a Level 1 column store and some of the advantages of a Level 3 column store, without the advantages of a Level 2 column store.
OLTP & Data Loading
First, you should be able to see clearly from these posts why column stores are poor at OLTP… the data comes in as a row and has to be decomposed into bit-mapped, compressed columns. This is like incurring the overhead of an index update and of compression at the same time. The same problem has to be overcome for data loading.
The paper referenced in Part 1 on C-Store outlined an approach to get around the issue for data loading… and Vertica, as the commercial offspring of the C-Store project, uses this approach. But the C-Store tuple-mover architecture still carries too much overhead for OLTP due to constraints tied to I/O… so HANA implemented the approach in-memory, which results in a single table that supports both OLTP and analytics… both write-optimized and read-optimized workloads.
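The write-optimized-plus-read-optimized pattern can be sketched as follows. This is a hedged illustration of the general delta/main idea described above, not C-Store’s or HANA’s actual data structures; the class and method names are mine.

```python
# Illustrative sketch (my own, not any vendor's implementation) of the
# delta/main pattern: inserts land in a cheap write-optimized delta,
# queries see delta + main together, and a "tuple mover" style merge
# periodically folds the delta into the read-optimized main store.

import bisect


class DeltaMainColumn:
    def __init__(self):
        self.main = []    # sorted: stands in for the read-optimized, compressed store
        self.delta = []   # append-only: the write-optimized store

    def insert(self, value):
        self.delta.append(value)   # row-style append, no recompression on the write path

    def count_equal(self, value):
        # queries consult both stores, so one table serves both workloads
        in_main = (bisect.bisect_right(self.main, value)
                   - bisect.bisect_left(self.main, value))
        return in_main + self.delta.count(value)

    def merge(self):
        # the tuple-mover step: fold the delta into the sorted main store
        self.main = sorted(self.main + self.delta)
        self.delta = []
```

The point of the design is that query results are identical before and after a merge; the merge only moves data from the write-optimized form to the read-optimized form.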
JOINs
Level 1 and Level 2 columnar databases join using a row engine… that is they convert columns to rows and then join. This misses some powerful opportunities for optimization from joining native column structures.
Hybrid systems like HANA and DB2 BLU support a mixture of row and column tables and allow joins across the types. Note that early materializing, Level 2 column stores like Teradata and Greenplum also support a mix… but they materialize the columns as rows early. DB2 BLU currently converts column-oriented tables to a row orientation and joins using the row engine. In this case BLU operates like a Level 2 maturity column store. I would expect that this will change over time.
The HANA optimizer has some limited ability to join mixed tables natively. Otherwise the optimizer has the ability to decide whether to convert row-oriented tables to column orientation or to convert column-oriented tables to rows. Note that with support for a single table for both OLTP and analytics HANA users typically deploy only columnar tables and avoid orientation translation during runtime.
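The early-vs-late materialization contrast in joins can be sketched concretely. This is my own toy illustration, not any vendor’s engine: the early version builds full rows up front and hash-joins them, while the late version joins only the key columns and fetches the other columns just for the surviving positions.

```python
# Illustrative contrast (my own sketch) between an early-materializing join,
# which converts columns to rows before joining, and a late-materializing
# join, which joins key columns first and fetches payloads only for matches.


def early_join(left_cols, right_cols, key):
    # materialize every full row up front, then hash-join the rows
    lrows = [dict(zip(left_cols, vals)) for vals in zip(*left_cols.values())]
    rrows = [dict(zip(right_cols, vals)) for vals in zip(*right_cols.values())]
    index = {}
    for r in rrows:
        index.setdefault(r[key], []).append(r)
    return [{**l, **r} for l in lrows for r in index.get(l[key], [])]


def late_join(left_cols, right_cols, key):
    # join only the key columns, producing matching position pairs…
    index = {}
    for pos, k in enumerate(right_cols[key]):
        index.setdefault(k, []).append(pos)
    pairs = [(lpos, rpos)
             for lpos, k in enumerate(left_cols[key])
             for rpos in index.get(k, [])]
    # …then fetch the remaining columns only for the surviving positions
    out = {name: [col[l] for l, _ in pairs] for name, col in left_cols.items()}
    for name, col in right_cols.items():
        if name != key:
            out[name] = [col[r] for _, r in pairs]
    return out
```

Both produce the same matches; the late version never materializes rows for tuples the join eliminates, which is the optimization opportunity a row-engine join gives up.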
Analytics
If you materialize rows late, as in a Level 3 mature column store, you get significant performance benefits. Using SIMD instructions provides an 8X performance advantage over conventional ADD instructions. Super-computing vector processing provides at least that much again. Loading compressed columns instead of rows reduces CPU stalls to almost nothing, providing another 2X boost… and getting data from cache instead of DRAM provides a 5X-20X (average 15X) improvement. These benefits are why BLU and HANA are so compelling compared to early materializing, Level 2 implementations like Teradata and Greenplum.
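The “operate on compressed columns” benefit can be illustrated with run-length encoding. This is a back-of-envelope sketch of my own, not BLU’s or HANA’s compression scheme: with RLE, an aggregate does one multiply-add per run instead of one load-and-add per value, which is the kind of work reduction that keeps the CPU out of stalls.

```python
# My own back-of-envelope sketch of operating directly on compressed data:
# with run-length encoding, an aggregate touches one entry per run rather
# than one per value, so far fewer loads and far fewer stalls.


def rle_encode(values):
    """Compress a column into [value, run_length] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs


def rle_sum(runs):
    # one multiply-add per run, no matter how long the runs are
    return sum(v * n for v, n in runs)
```

A column of 1,500 values with two distinct runs is summed in two multiply-adds instead of 1,500 additions, without ever being decompressed.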
Other Points
- Note that HANA does not gain a huge I/O advantage from columnar compression… this is because HANA performs large-scale I/O only at startup. The gains from compression for an IMDB come through efficient use of memory as compared to a Level 2, early materializing, columnar implementation.
- IBM has criticized the HANA delta store/OLTP+analytics implementation (I can’t find the post for some reason… but I know that I read it)… but I do not see the logic behind the criticism. Row orientation works best for OLTP, and one way or the other you have to change the orientation if you want the benefits of columnar…