My 2 Cents: Oracle Exadata 1Q2013


Since my blogs tend to be in response to some stimulus they may not reflect a holistic view on any particular product. The “My 2 Cents” series will try to provide a broader view…


Summary

OK, I hate Oracle marketing (see here and here). They are happy to skirt the edge of the credible too often. But let’s be real… Exadata was a very smart move… even if it is a flawed product. The flaws are painful but not fatal… and Oracle can now play in the data warehouse space in places they could not play before. I do not believe that Exadata is a strong competitor, as you will see below… it will not win many “fair” POCs… but the fight will be more than close enough to make customers with existing Oracle warehouses pick Exadata once they consider the cost of migration. This is tough… it means that customers are locked in to a relatively weak alternative… and every Oracle customer (and every Teradata customer and every SQL Server customer and every DB2 customer) should consider the long-term costs of vendor lock-in. But each customer has to weigh this for themselves… and this evaluation of the cost of lock-in is about neither architecture nor marketing…

Where They Win

First and foremost Exadata wins when there is an existing data warehouse or data mart on Oracle that will have to be migrated. My recommendation to customers is that they think about this carefully before they engage other vendors. It is a waste of everybody’s time to consider alternatives when in the end no alternative has a chance… and it is a double waste to do a POC when even a big technical win by a competitor cannot win them the business.

Exadata can win technically when the data “working set” is small. This allows Exadata to keep the hot data in SSD and in memory and, better still, in the RAC layer. This allows Oracle to win POCs where they can suggest that a subset of the EDW data is all that is required.

Exadata can win when the queries required, or tested, contain highly selective predicates that can be pushed down in the first steps of the explain plan. Conversely, Exadata bonks when lots of data must be pulled to the RAC layer to perform a join step.

Where They Lose

Everyone who has an Exadata system or who is considering one should view the two videos here. The architectural issues are apparent… and you can then consider the impact for your workload.

As noted above… in an Exadata execution plan the early simple table scans and projection are executed in the storage layer… subsequent steps occur in the RAC layer… if lots of data has to be moved up then the cluster chokes.
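The split described above can be sketched in a few lines of toy code. This is a hypothetical model for illustration only, not Oracle’s implementation: “storage cells” apply filter and projection locally, and only the surviving rows are shipped up to a “RAC layer” that performs the join. The volume moved up… and hence the pressure on the cluster… is driven almost entirely by predicate selectivity.

```python
def storage_cell_scan(rows, predicate, columns):
    """Scan-and-project executed at the storage layer (a Smart Scan analogue)."""
    return [{c: r[c] for c in columns} for r in rows if predicate(r)]

def rac_layer_join(left, right, key):
    """Join executed at the RAC layer on whatever the storage cells shipped up."""
    index = {}
    for r in right:
        index.setdefault(r[key], []).append(r)
    return [{**l, **r} for l in left for r in index.get(l[key], [])]

# A 1,000,000-row fact table; a selective predicate lets the storage layer
# discard almost everything before the join ever sees it.
fact = [{"id": i, "store": i % 100, "amount": i % 7} for i in range(1_000_000)]
dims = [{"store": s, "region": "EMEA" if s < 50 else "AMER"} for s in range(100)]

shipped = storage_cell_scan(fact, lambda r: r["store"] == 42, ["id", "store", "amount"])
result = rac_layer_join(shipped, dims, "store")

print(f"rows shipped to RAC layer: {len(shipped):,}")   # 10,000 of 1,000,000
print(f"joined result rows:        {len(result):,}")
```

With a non-selective predicate, `shipped` approaches the full table and the join work lands entirely on the RAC layer… which is exactly the choke point described above.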

There are times when the architectural limitations are just too large and a migration is required to meet the response time requirements for the business. This often happens when Exadata is to support a single application rather than a data warehouse workload… In other words, if the cost of migrating away from Oracle is small, either because the applications to be moved are small, or because an automated tool is available to mitigate the cost, or because the migration costs are subsidized by another source, then Exadata can lose even when a migration is required.

Exadata can be beat on price… unless you count the cost of migration.

In the Market

For the reasons above, Exadata wins for current Oracle customers. There was a honeymoon when Exadata was winning some greenfield deals against other competitors… but these are now more rare.

My Guess at the Future

I think that the basic architecture of Exadata is defensible… having a split configuration is, after all, not completely foreign. Teradata and Greenplum and others use master nodes split from data nodes… and this is where I predict we’ll see Oracle go. Over time, more execution steps will move to the storage layer and out of the RAC layer and in the end, Exadata will look ever more like a shared-nothing implementation. This just has to be the architectural way forward for Exadata (but don’t expect LE to stand up anytime soon and admit that he was wrong all of these years about the value of a shared-nothing architecture).

Phil has alerted us that there will be some OLTP/BI enhancements coming (see the comments section here)… which stole away a prediction I would have made otherwise.

The bottlenecks pointed out by Kevin Closson (as above and more here) need to be addressed… but to some extent these issues are the result of hardware constraints… and the combination of better hardware configurations and the push-down of more execution steps can mitigate many of the issues.

It will be a while before the Exadata architecture evolves to a point where the product is more competitive… and from now to then I think the World will be as I described it above… Oracle zealots will pick Exadata either as a religious stance or to avoid the cost of a migration… others will mostly go elsewhere…

Coming next… my 2 Cents on Netezza…

5 thoughts on “My 2 Cents: Oracle Exadata 1Q2013”

  1. This is very closely aligned with my thoughts on Exadata – ‘good enough’ means Oracle shops will take the easy route and stick with what they (think they) know/love. Smart move all around by Oracle.

    When push comes to shove, technical strengths/weaknesses tend to play second fiddle to the fear of the costs/risks associated with migration to a.n.other vendor.

    The move to true shared nothing architecture is a big one for Oracle to make. It may never happen, who knows? Plenty of folks believe Exadata is shared nothing already. Sad but true.

    Interested to know where Teradata’s master nodes have been hiding all these years…not sure I’ve spotted one yet!!!

  2. I thought you were suggesting a *physical* master node…my mistake! That said, I don’t think I’ve ever heard the PE software described as a master node.

    To quote Teradata: “Parsing engines perform session control, query parsing, security validation, query optimization, and query dispatch.”

    The PE does have a ‘master’ type role in returning the answer set to the client, but that’s not the same as an ‘almost MPP’ physical master node that performs final aggregation/sort/merge processing…which is what I thought was being described. Phew 😉

    1. Hi Paul,

      FYI: The ‘master’ node that Greenplum provides is called a ‘Master Node’ 🙂 … it is physical but it does not perform final aggregation/sort/merge. I believe that Teradata query dispatch actually dispatches each step in the execution plan. The Greenplum master broadcasts the plan to all nodes and does not re-engage until the answer set is assembled.

      My point is that if you have a front-end to your sharded database instances… maybe I should not have called it the master… where query parsing occurs, you have the basis for dispatching part or all of the query execution plan to the instances. Right now the RAC/master on Exadata dispatches only the scan-and-project steps to the storage layer. They have the basic architecture in place to push down more… and I believe they will continue to push down ever more until they are as shared-nothing as anyone…
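This dispatch idea can be sketched with a hedged toy model (the names and data shapes here are my own, not Oracle’s or Greenplum’s): once a front-end parses a query into a plan, it can push just the scan step to each shard, as Exadata does today, or push the aggregation down as well, which is the shared-nothing direction predicted above.

```python
# Four "shards", each holding a slice of a 100-store fact table.
shards = [[{"store": s, "amount": a} for s in range(i, 100, 4) for a in range(10)]
          for i in range(4)]

def dispatch_scan_only(shards):
    # Shards filter; every surviving row travels to the coordinator to be summed.
    rows = [r for sh in shards for r in sh if r["store"] < 10]
    return sum(r["amount"] for r in rows), len(rows)

def dispatch_with_agg(shards):
    # Shards filter AND pre-aggregate; only one partial sum per shard travels.
    partials = [sum(r["amount"] for r in sh if r["store"] < 10) for sh in shards]
    return sum(partials), len(partials)

total1, moved1 = dispatch_scan_only(shards)
total2, moved2 = dispatch_with_agg(shards)
assert total1 == total2  # the answer is identical either way
print(f"scan-only: {moved1} rows moved; with push-down: {moved2} partials moved")
```

Same answer both ways… but pushing the aggregation step down cuts the data crossing the interconnect from one row per match to one partial per shard. That is the payoff of dispatching more of the plan.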

  3. I wasn’t referring to Greenplum, or any specific vendor for that matter, when I mentioned physical master nodes. Plenty have had them after all… Netezza, Datallegro, Aster, Dataupia and of course, Greenplum!

    It’s quite a challenge to push all processing (and not just scan/project) down to a sharded database…as Oracle are no doubt aware 😉

