Price/Performance of HANA, Exadata, Teradata, and Greenplum

Here is an attempt to build a Price/Performance model for several data warehouse databases.

Added on February 21, 2013: This attempt is very rough… very crude… and a little too ambitious. Please do not take it too literally. In the real world Greenplum and Teradata will match or exceed the price/performance of Exadata… and the fact that the model does not show this exposes the limitations of the approach… but hopefully it will get you thinking… – Rob

For price I used some $$/Terabyte numbers scattered around the internet. They are not perfect but they are close enough to make the model interesting. I used:

Database        $$/TB
--------------  --------
HANA            $200,000
Exadata X3      $66,000
Teradata        $66,000
Greenplum       $30,000

Of these numbers the one that may be furthest off is the HANA number. This is odd since I work for SAP… but I just could not find a good number so I picked a big one to see how the model came out. If you have a better figure for any of these, please leave a comment and I'll adjust.

For each product I used the high-performance configuration rather than the one with large-capacity disks…

I used latency as a stand-in for performance. This is not perfect either… but it is not too bad. I'll try again some other time and add data transfer time to the model. Note that I did not try to account for advantages and disadvantages that come from the software… so the latency associated with I/O to spool/work files is not counted… use of indexes and/or column stores is not counted… compression is not counted. I'll account for some of this when I add in transfer times.

I did try to account for cache hits when there is SSD cache in the configuration… but I did not give HANA credit for the work done to get most data from the processor caches instead of from DRAM.

For network latency I just assumed one round trip for each product…

For latencies I used the picture below:

The exception is that for products that use PCIe to access SSDs I cut the latency by 1/3 based on some input from a vendor. I could not find details on the latency for Teradata’s Bynet so I assumed that it is comparable with Infiniband and the newest 10GigE switches.

Here is what I came up with:

Database         Total Latency (ns)   Price/Performance   Delta
---------------  -------------------  ------------------  ------
HANA                             90                1,800
HANA (2 nodes)                1,190               23,800   13x
Exadata X3                2,054,523           13,559,854   7533x
Teradata                  4,121,190           27,199,854   15111x
Greenplum                10,001,190           30,003,570   16669x
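The arithmetic behind the table is simple enough to sketch. In this sketch price/performance is latency in nanoseconds multiplied by $$/TB, scaled down by a constant of 10,000 just to keep the numbers readable — the scaling constant is my choice for the sketch, and lower is better:

```python
# Sketch of the price/performance calculation: latency (ns) x $$/TB / 10,000.
# The latency and price figures come from the tables above; the 10,000
# divisor is an arbitrary readability constant. Lower is better.

price_per_tb = {
    "HANA": 200_000, "HANA (2 nodes)": 200_000,
    "Exadata X3": 66_000, "Teradata": 66_000, "Greenplum": 30_000,
}
latency_ns = {
    "HANA": 90, "HANA (2 nodes)": 1_190, "Exadata X3": 2_054_523,
    "Teradata": 4_121_190, "Greenplum": 10_001_190,
}

def price_performance(db):
    # dollars x nanoseconds, scaled -- a unitless figure of merit
    return latency_ns[db] * price_per_tb[db] / 10_000

baseline = price_performance("HANA")
for db in latency_ns:
    pp = price_performance(db)
    print(f"{db:15} {latency_ns[db]:>12,} {pp:>14,.0f} {pp / baseline:>8,.0f}x")
```

Run as-is this reproduces the table rows to within a nanosecond or two of rounding.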

I suppose that if a model seems to reflect reality then it is useful?

HANA has the lowest latency because it is in-memory. When there are two nodes a penalty is paid for crossing the network… this makes sense.

Exadata does well because the X3 product has SSD cache and I assumed an 80% hit ratio.

Teradata does a little worse because I assumed a lower hit ratio (they have less SSD per TB of data).

Greenplum does worse as they do all I/O against disks.

Note the penalty paid whenever you have to go to disk.
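The cache-hit adjustment blends SSD hits with disk misses. Here is a sketch using round component latencies consistent with the model: ~10 ms for a disk seek, ~100 µs for SSD cut by a third for PCIe attachment, one network round trip, and a DRAM access. Treat the individual figures as assumptions I chose for illustration:

```python
# Sketch of the SSD-cache adjustment. Component latencies are round-number
# assumptions: they are not vendor-published figures.

DISK_NS = 10_000_000         # random read from spinning disk (~10 ms)
SSD_NS = 100_000             # SAS/SATA-attached SSD (~100 us)
PCIE_SSD_NS = SSD_NS * 2 / 3 # PCIe attachment cuts latency by ~1/3
NETWORK_NS = 1_100           # one network round trip
DRAM_NS = 90                 # read from memory

def expected_latency(ssd_hit_ratio, ssd_ns):
    """Blend SSD-cache hits with disk misses, plus network and DRAM."""
    io = ssd_hit_ratio * ssd_ns + (1 - ssd_hit_ratio) * DISK_NS
    return io + NETWORK_NS + DRAM_NS

print(f"80% PCIe-SSD hits: {expected_latency(0.80, PCIE_SSD_NS):,.0f} ns")
print(f"All disk:          {expected_latency(0.0, SSD_NS):,.0f} ns")
```

With these inputs the 80%-hit case works out to the 2,054,523 ns shown for Exadata X3, and the all-disk case to the 10,001,190 ns shown for Greenplum.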

Let me say again… this model ignores lots of software features that would affect performance… but it is pretty interesting as a start…

12 thoughts on “Price/Performance of HANA, Exadata, Teradata, and Greenplum”

1. As some wise man said: give me enough data and, statistically, I will prove anything!

    1. Ben,

      I am pretty skeptical myself… hence the caveats in the post. But just expressing skepticism is not very helpful. I would ask you to point out where I went wrong or, better still, suggest how to make the model more accurate. We all know that the models we use to predict the weather are not perfect… but they tell us something. I hope that my attempt is not useless?

      – Rob

    1. Hi Welju…

I just used pricing for the standard appliance. I did know to include extra Oracle licenses for the extra X3 cores. Exalytics is a separate piece of hardware that offloads OLAP workload from Exadata. I would not consider it fair to add this cost to the Exadata price (although readers might think about the price/performance implications of Exadata + Exalytics to solve what other options can solve with the base). Microsoft has announced some new capabilities… some available now and an in-memory OLTP product that will be available in 2014-2015.

      I’ll take a closer look at the links you suggest and see if I can update the pricing… thanks.

      – Rob

  2. Let me be a little more clear here.

The model as it stands in this first version favors products that use hardware to reduce latency. It penalizes products that have software optimizations to improve performance. When I finish version 2, which will include data transfer times… the good software that folks like Teradata and Greenplum have produced will significantly improve their standing. Further, the HW bottlenecks in products like Exadata will diminish their standing. Finally, the software tricks in HANA will somewhat mitigate the gains the others make. We'll see how it comes out?

    – Rob

  3. Hi Rob,

Interesting thoughts again… But I am missing a point. I was always told that latency is something that is extremely important… in OLTP environments.
    And in DWH/BI/Analytics latency only plays a partial role. Bandwidth is what counts (and possibly CPU horsepower, and the efficiency of the software, i.e. how much or little overhead there is to get to the results you want).
    As a thought experiment, and I forgot the exact numbers, but consider the latency of a Boeing 747 fully loaded with physical data tapes, flying from San Francisco to New York. Latency is, what, 5 hours? Extremely high. But consider the bandwidth (in GB/s)…
    Parallelism also plays a role. If your database has a latency of, say, 200 nanoseconds, and mine 500, but I can do 20 concurrent operations and you can do only 5, then who is faster?
    And what if I can move 1 MB in one of those latency restricted operations and you can do only 128K?

    So I respectfully disagree (partly) with your comparison. Partly, because latency *is* important. It’s just not the only performance metric. The 200 ns from SSD that you show in the picture is no problem if the end-to-end bandwidth can fill up the CPU L1 cache quick enough for the CPU not to become idle (or otherwise spend cycles waiting for data).

Oh and note that the bandwidth of the average SSD is not *that* far off from a simple SATA disk (that's why we use them in Greenplum 😉)

    Best regards,
    Bart

    1. Of course I agree, Bart… this is why I suggested that this first model needed to be improved by including data transfer. I’ll get there.

The bandwidth for the current-generation PCIe-attached SSD is significantly better than SATA… see here. It was the first-gen SSDs that were only a little (50%) faster than SATA… for sequential scans.

Parallelism helps bandwidth but not latency. If it takes 10,000,000 ns to start an I/O to disk and you have 20 parallel disks the latency is still 10,000,000 ns.

      If you look here you can see that a 1MB data transfer from memory is 80X faster than from disk… and the total, latency+transfer, is 120X faster. Parallelism does not help because HANA is shared-nothing parallel as well.
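The latency-plus-transfer idea can be sketched as follows. The bandwidth figures here are illustrative assumptions I picked for the example, not the numbers behind the 80X/120X comparison:

```python
# Sketch of the latency + transfer model. Bandwidth and latency inputs are
# illustrative assumptions, not measured figures from the post.

def total_time_ns(latency_ns, bandwidth_mb_s, size_mb):
    # transfer time = size / bandwidth, converted to nanoseconds
    transfer_ns = size_mb / bandwidth_mb_s * 1_000_000_000
    return latency_ns + transfer_ns

# 1 MB read: DRAM vs. a single spinning disk
dram = total_time_ns(latency_ns=100, bandwidth_mb_s=10_000, size_mb=1)
disk = total_time_ns(latency_ns=10_000_000, bandwidth_mb_s=100, size_mb=1)
print(f"DRAM: {dram:,.0f} ns  disk: {disk:,.0f} ns  ratio: {disk / dram:,.0f}x")
```

The exact ratio obviously depends on the assumed bandwidths… but the shape of the result is the same: once you add transfer time, the gap between memory and disk is set by both the seek penalty and the bandwidth difference.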

      Note that I gave Greenplum a 3X advantage in price of the software to build this first model…

      Bottom Line: this is a first attempt based only on latency. I was pretty clear that latency was not perfect in the write-up… although as you point out it is pretty good for considering OLTP… and therefore interesting when thinking about real-time data acquisition and analytics… and for thinking about IMDB vs. Exadata for OLTP. I’ll stand by the blog as an interesting first attempt/exercise…

      1. Rob,

        Indeed for OLTP I expect the innovations around (real, not marketing) in-memory databases are going to make a huge difference. For BI/Analytics I also expect huge improvements in performance – mostly because of the high bandwidth you can get from RAM (or Flash memory up to a point). So I am curious for your final model when you get there 🙂

        I’m just not so sure you can always simply add latency and transfer time. And, for a spinning rust based mechanical disk, the 10,000,000 ns latency you talk about is only needed to get the heads and platters positioned for getting the first byte from disk. If you design your system well, the disk then simply starts reading whole tracks at, say, 50MB/s or more. Plus, smart cached disk arrays (i.e. EMC) have very intelligent caching/prefetching algorithms, and I have seen over 70% disk cache hit ratios on OLTP databases said to be doing 100% random workloads.

        So the next (sequential) I/O after the first one might be serviced in 0.5 ms (500,000 ns) – a 20x improvement. Granted, RAM is still much faster, but you can have more than one disk in sequential streaming mode.

        To me it seems like a near-impossible task to build a realistic mathematical model for comparing such different architectures. But kudos for trying and even more so if you really get it done 🙂

      2. For sure it will be a model, Bart… it will help to predict the weather but never be perfect. I will do my best to explain where it could fall down and leave it to the readers to decide if the predictions are helpful.

        And… I will try to split the talk between OLTP and BI/DW workloads as they are very different.

        I have two objectives:
        1) To start folks thinking at a deeper level about the piece-parts that define performance. This will become more important to the readers as the systems become more complex and we try to untangle architecture from marketing for those complex systems.
        2) To provide some models that indicate architectural advantages within an order of magnitude or two. I do not expect to predict performance.

        I’ll trust that you will let me know when I miss the mark. 😉

        It was good to see you in Santa Clara… let me know when you get into town again?

    1. Is Netezza some new, small, start-up? We hardly ever see them in the market.

      Juuuust kidding. It is late on the day before thanksgiving.

      Netezza is a disk-based system that would, more-or-less, have the same latency characteristics as Greenplum. Now that is not to say that they will have the same price/performance… but to get there I will have to go much deeper than evaluating latency. Stay tuned, Greg…
