
The following performance numbers are being reported publicly for HANA:
- HANA scans data at 3MB/msec/core
- On a high-end 80-core server this translates to 240GB/sec per node
- HANA inserts rows at 1.5M records/sec/core
- Or 120M records/sec per node…
- HANA aggregates 12M records/sec/core
- Or 960M records/sec per node… (a quick arithmetic check on these per-node numbers follows this list)
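The per-node figures are just the per-core rates multiplied out. A minimal sanity check of that arithmetic, assuming the 80-core node above and decimal units:

```python
# Sanity check of the per-node arithmetic above, assuming an 80-core node
# and decimal units (1 GB = 1000 MB).
CORES = 80

scan_gb_sec_core = 3 * 1000 / 1000   # 3 MB/msec/core -> 3 GB/sec/core
insert_recs_sec_core = 1.5e6         # 1.5M records/sec/core
agg_recs_sec_core = 12e6             # 12M records/sec/core

print(f"scan:      {scan_gb_sec_core * CORES:.0f} GB/sec/node")               # 240
print(f"insert:    {insert_recs_sec_core * CORES / 1e6:.0f}M recs/sec/node")  # 120
print(f"aggregate: {agg_recs_sec_core * CORES / 1e6:.0f}M recs/sec/node")     # 960
```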
These numbers seem reasonable:
- A 100X improvement over disk-based scans (the recent EMC DCA announcement claimed 2.4GB/sec per node for Greenplum)…
- Roughly standard OLTP insert speeds for a big server…
- Huge performance gains for in-memory aggregation using columnar orientation and SIMD instructions… (a rough sketch of the idea follows this list)
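A contiguous column lets the engine use vectorized (SIMD) instructions and sequential memory access, while a row store forces a strided walk over wide records. The sketch below uses NumPy's vectorized reductions as a stand-in for hand-tuned SIMD; it illustrates the idea, not HANA's implementation:

```python
import time
import numpy as np

N_ROWS, N_COLS = 2_000_000, 10
rng = np.random.default_rng(0)

rows = rng.random((N_ROWS, N_COLS))        # row orientation: each record's fields adjacent
column = np.ascontiguousarray(rows[:, 0])  # column orientation: one contiguous array

t0 = time.perf_counter()
row_total = rows[:, 0].sum()               # strided access across wide records
t_row = time.perf_counter() - t0

t0 = time.perf_counter()
col_total = column.sum()                   # contiguous, cache- and SIMD-friendly
t_col = time.perf_counter() - t0

assert np.isclose(row_total, col_total)
print(f"strided sum: {t_row*1e3:.1f} ms   contiguous sum: {t_col*1e3:.1f} ms")
```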
Note that these numbers are the basis for suggesting a new low-TCO approach to BI: one that eliminates aggregate tables, materialized views, cubes, and indexes… eliminates the operational overhead of computing those artifacts… and still provides sub-second response times for all queries.
“eliminates aggregate tables, materialized views, cubes, and indexes… and eliminates the operational overhead of computing these artifacts” – yes, when they are required for performance; but they may still be required for usability and user convenience. Still, overall, this should mean a lower TCO for operation and maintenance.
Hi Brian… why would these be more convenient? A view that performs aggregation could replace each of them, and the user would not know the difference.
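To make that last point concrete, here is a minimal sketch in Python with SQLite standing in for any SQL engine (the table, view, and column names are invented for illustration): a plain view presents the same interface a precomputed aggregate table would, so a fast enough scan engine can drop the precomputation without users noticing.

```python
# A plain view can present the same interface as a materialized aggregate
# table; the aggregate is simply computed at query time by the scan engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('east', 100), ('east', 250), ('west', 75);

    -- Instead of maintaining a precomputed aggregate table, expose a view.
    CREATE VIEW sales_by_region AS
        SELECT region, SUM(amount) AS total_amount
        FROM sales
        GROUP BY region;
""")

# The user queries the view exactly as they would an aggregate table.
for region, total in conn.execute("SELECT * FROM sales_by_region ORDER BY region"):
    print(region, total)   # east 350.0, west 75.0
```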