How Good Is Teradata’s Intelligent Memory?

A 30-foot chunk of the cliff below an apartment building fell into the Pacific Ocean on December 17, 2009. (Photo credit: Wikipedia)

Jason asked a great question in the comment section here… he asked: does Teradata’s Intelligent Memory erode HANA’s value proposition? Let me answer in a way that applies across the database space more generally…

Every time a vendor puts more silicon between the CPU and the disk, performance improves (and so does the price). Does this erode HANA’s value proposition? Sure. Every advance by any vendor erodes every other vendor’s position.
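To make that concrete, here is a back-of-the-envelope sketch of why an extra memory tier between the CPU and the disk helps. The latencies and the hit ratio below are illustrative assumptions of mine, not measurements of Intelligent Memory or any other product.

```python
# Rough model of a storage hierarchy: the more of the working set that is
# served from the faster tier, the lower the average access time.
# All numbers are illustrative assumptions, not vendor measurements.

DRAM_NS = 100          # assumed DRAM access latency (nanoseconds)
DISK_NS = 10_000_000   # assumed spinning-disk access latency (nanoseconds)

def avg_latency_ns(hit_ratio: float, fast_ns: float, slow_ns: float) -> float:
    """Expected latency when `hit_ratio` of accesses land in the fast tier."""
    return hit_ratio * fast_ns + (1.0 - hit_ratio) * slow_ns

# Disk-only system vs. one with a memory tier holding 90% of the hot data.
disk_only   = avg_latency_ns(0.0, DRAM_NS, DISK_NS)
memory_tier = avg_latency_ns(0.9, DRAM_NS, DISK_NS)

print(f"disk only:       {disk_only:,.0f} ns per access")
print(f"90% memory hits: {memory_tier:,.0f} ns per access")
print(f"speedup:         {disk_only / memory_tier:.1f}x")
```

The arithmetic cuts both ways: every tier a vendor adds shrinks the share of accesses that fall through to disk, and with it the headline advantage anyone else can claim.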

To win business a new database product has to be faster than the competition. In my experience you have to be at least 30% faster to unseat the incumbent. If you are 50% faster, you will win a lot of business. If you are 2x (100%) faster, you win nearly every time.

Therefore the questions are:

  • Did the Teradata announcement eliminate a set of competitors from reaching these thresholds when Teradata is the incumbent? Yup. It is very smart.
  • Does Intelligent Memory allow Teradata to reach these thresholds when they compete against another incumbent? Yup.
  • Did it eliminate HANA from reaching these thresholds when competing with Teradata? I do not think so… in fact I’m pretty sure it is not the case… HANA should still be way over the 2x threshold… but the reasons why will require a deeper dive… stay tuned.

In the picture attached, a 30-foot chunk eroded… but Exadata still stands. Will it be condemned?

Note: Here is a commercial post on the SAP HANA blog site that describes at a high level why I think HANA retains a distinct architectural advantage.

2 thoughts on “How Good Is Teradata’s Intelligent Memory?”

  1. Going off-track a bit…

    “To win business a new database product has to be faster than the competition.”

    Do tech vendors really plan to compete so heavily on performance alone?

    I’ve spent many years in the field as an SE pitching various technologies on behalf of tech vendors. I’ve yet to see anyone buy a new system based on performance alone. It’s certainly a consideration, but there are many others which I’m sure I have no need to cite.

    A telco in London rejected all of the main players out of hand a few years ago based on physical footprint and power/cooling requirements alone. Even if the main players had offered their tech for free, it would not have worked. The compute density was too low and the power/cooling requirements were simply too high. Performance never came into it. Just one example… I know!

    It’s very difficult to accurately measure ‘speed’ even if POCs are performed. The ability to handle real-world data, schema, applications, batch scheduling, backups, tools, users etc – at the required performance level – is largely a leap of faith from whatever is observed in a POC, no matter how fast individual queries looked in the lab.

    1. Of course you are right, Paul… But IMO performance is always a primary consideration. Even in your example where power and footprint were gating factors… I would imagine that performance came next, and then after negotiating, price.

      I think that performance is measured easily, if not perfectly, in a well-crafted POC (see the sketch below the comments).

      Thanks for the comments…

      Rob
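
Picking up Rob’s point about POCs, here is a minimal sketch of the kind of measurement a POC team might make: run the same query set on the incumbent and the challenger, take a median wall-clock time, and compare the ratio against the 30% / 50% / 2x thresholds from the post. The `run_query` helper and the `system` objects are hypothetical placeholders, not any vendor’s API.

```python
import time
from statistics import median

# Hypothetical placeholder for executing a query on a candidate system;
# in a real POC this would call the vendor's own client library or driver.
def run_query(system, sql: str) -> None:
    system.execute(sql)

def median_runtime_s(system, queries, repeats: int = 3) -> float:
    """Median wall-clock time to run the full query set on one system."""
    runs = []
    for _ in range(repeats):
        start = time.perf_counter()
        for sql in queries:
            run_query(system, sql)
        runs.append(time.perf_counter() - start)
    return median(runs)

def classify(incumbent_s: float, challenger_s: float) -> str:
    """Check the challenger's speedup against the thresholds in the post."""
    speedup = incumbent_s / challenger_s   # 2.0 means "2x faster"
    if speedup >= 2.0:
        return f"{speedup:.2f}x faster: wins nearly every time"
    if speedup >= 1.5:
        return f"{speedup:.2f}x faster: wins a lot of business"
    if speedup >= 1.3:
        return f"{speedup:.2f}x faster: enough to unseat the incumbent"
    return f"{speedup:.2f}x faster: probably not enough"
```

As Paul notes, a well-crafted POC still has to represent the real workload; a harness like this only measures whatever queries you feed it.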

