My 4 Cents: Vertica and Paraccel 1Q2013

Summary

Vertica is the product I saw the most. In fact, before they were acquired they were beginning to pop up a lot. The product is innovative in several dimensions. It is wholly column-oriented with several advanced columnar optimizations. Vertica has an advanced data-loading strategy that quickly commits data and then creates the column orientation in the background. This greatly reduces load time but slows queries while the tuples are transformed from rows to columns. Vertica offers a physical construct called a projection that may be built to greatly speed up query performance. Further, they provide a very sophisticated design workbench that will automatically generate projections.
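To make the projection idea concrete: a projection is, roughly, a physically stored, column-oriented copy of a subset of a table's columns, presorted for a particular family of queries, that the optimizer can scan instead of the base table. The toy Python sketch below is purely illustrative, not Vertica's actual storage engine, and the table, columns, and function names are invented for the example.

```python
# Illustrative sketch only -- this is not Vertica's implementation. It models
# a "projection" as a presorted, column-oriented copy of selected columns
# that a query can scan instead of the row-oriented base table.
from bisect import bisect_left, bisect_right

# Base table held as rows (roughly how data arrives from a loader).
sales_rows = [
    {"order_id": 1, "region": "EU",   "amount": 120.0, "sku": "A1"},
    {"order_id": 2, "region": "US",   "amount": 75.5,  "sku": "B2"},
    {"order_id": 3, "region": "EU",   "amount": 310.0, "sku": "A1"},
    {"order_id": 4, "region": "APAC", "amount": 42.0,  "sku": "C3"},
]

def build_projection(rows, columns, sort_key):
    """Materialize a column-oriented copy of `columns`, sorted on `sort_key`."""
    ordered = sorted(rows, key=lambda r: r[sort_key])
    return {col: [r[col] for r in ordered] for col in columns}

# A projection designed for "revenue by region" style queries.
proj = build_projection(sales_rows, columns=["region", "amount"], sort_key="region")

def revenue_for_region(projection, region):
    """Answer the query from the projection alone: binary-search the sorted
    `region` column, then sum only the matching slice of `amount`."""
    lo = bisect_left(projection["region"], region)
    hi = bisect_right(projection["region"], region)
    return sum(projection["amount"][lo:hi])

print(revenue_for_region(proj, "EU"))  # 430.0
```

The trade-off discussed later in the post falls straight out of this picture: each projection is another sorted, redundant copy of the data that must be stored and maintained at load time, so you can only afford projections for the query patterns you can predict.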

Paraccel is the company I saw the least. I never saw it win… but I know that it does, in fact, win here and there. My impression, and I use this fuzzy word intentionally as I just don’t know, is that Paraccel is a solid product, but it does not possess any fundamental architectural advantage that would allow it to win big. Every product will find an acorn now and again. The question is whether, in a POC against the full array of competitors, there is a large enough sweet spot to be commercially successful. I think that Paraccel cannot win consistently against the full array. Paraccel is now the basis for the Amazon Redshift data-warehouse-as-a-service offering. This will keep the product in the game for a while even if the revenues from a subscription model do not help the business much.

Where They Win

Vertica wins when projections are used for most queries. They are not likely to win without projections. This makes them a very effective platform for a single application data mart with a few queries that require fast performance… or for a data mart where the users tend to submit queries that fit into a small number of projection “grooves”.

Paraccel wins in the cloud based on Redshift. They can win based on price. They win now and again when the problem hits their sweet spot. They win when they compete against a small number of vendors (which increases their sweet spot).

Where They Lose

Vertica loses in data warehouse applications where queries cut across the data in so many ways that you cannot build enough projections. Remember that projections are physical constructs with redundant data.

Paraccel loses on application-specific marts and other data marts when the problems fit either Vertica’s projections or Netezza’s zone maps. They lose data warehouse deals to both Teradata and Greenplum when the query set is very broad. They will lose in Redshift when performance is the key… maybe. I have always thought that shared-nothing vendors made it too hard and too expensive to scale out. It should always have been easier to add hardware to improve performance than to apply people to tuning… but this has not been the case… maybe now it is (see here)?

In the Market

Since the HP acquisition, the number of times Vertica shows up as a competitor has actually dropped. I cannot explain this, but HP has had a difficult time becoming a player in the data warehouse space and has had several false starts (Neoview, Exadata, …). The product is sound and I hope that HP figures this out… but HP is primarily a server vendor, and it will be difficult for them to sell Vertica and stay agnostic enough to also sell HANA, Oracle, Greenplum, and others.

The Amazon Redshift deal breathes life into Paraccel. They have to hope that the exposure provided by Amazon will turn into on-premise business for them. They are still a small, venture-funded company that has to compete against bigger players with larger sales forces. It will be tough.

My Guess at the Future

I worry about Vertica in the long run.

Until the Amazon deal I would have guessed that Paraccel was done… again, not because their technology was bad… it is not… but because it was not good enough to create a company that could go public, and there was no apparent buyer… no exit. The Amazon Redshift deal may provide an exit. We will see. Maybe Amazon can take this solid technology into the cloud and make it a winner?

The Cost of Dollars per Terabyte

Let me be blunt: using price per terabyte as the measure of a data warehouse platform is holding back the entire business intelligence industry.

Consider this… The Five Minute Rule (see here and here) clearly describes the economics of HW technology… suggesting exactly when data should be retained in memory versus when it may be moved to a peripheral device. But vendors who add sufficient memory to abide by the Rule find themselves significantly improving the price/performance of their products but weakening their price/TB and therefore weakening their competitive position.
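For readers who have not seen it, the Rule reduces to a simple break-even calculation: a page is worth keeping in memory if it will be re-referenced within a break-even interval computed from the cost of disk I/O capacity versus the cost of the memory needed to hold the page. Here is a minimal Python sketch of that arithmetic; the hardware figures are rough 2013-era assumptions for illustration, not quotes.

```python
# Back-of-the-envelope Five Minute Rule (after Gray & Putzolu):
#   break-even seconds = (pages per MB of RAM / IOs per second per drive)
#                        * (price per drive / price per MB of RAM)
# If a page is re-read more often than this interval, it is cheaper to keep
# it in memory than to fetch it from disk each time.

def break_even_seconds(page_kb, ram_price_per_mb, drive_price, drive_iops):
    pages_per_mb_of_ram = 1024 / page_kb
    return (pages_per_mb_of_ram / drive_iops) * (drive_price / ram_price_per_mb)

# Assumed figures: 8KB pages, DRAM at ~$0.01/MB, a $300 drive doing ~150 IOPS.
interval = break_even_seconds(page_kb=8, ram_price_per_mb=0.01,
                              drive_price=300.0, drive_iops=150)
print(f"Keep an 8KB page in RAM if it is re-read within ~{interval:,.0f} seconds")
```

With assumptions in that neighborhood the break-even interval comes out in hours, not minutes, which is exactly why configuring memory to protect a price/TB number leaves cheap performance on the table.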

We see this all of the time. Almost every database system could benefit from a little more memory. The more modern systems that use a data-flow paradigm, Greenplum for example, try to minimize I/O by using memory effectively. But the incentive is to keep the memory configured low to keep their price/TB down. Others, like Teradata, use memory carefully (see here) and write intermediate results to disk or SSD to keep their price/TB down… but they violate the Five Minute Rule with each spool I/O. Note that this is not a criticism of Teradata… they could use more memory to good effect… but the use of price/TB as the guiding principle dissuades them.

Now comes Amazon Redshift… with the lowest imaginable price/TB… and little mention of price/performance at all. Again, do not misunderstand… I think that Redshift is a good thing. Customers should have options that trade off performance for price… and there are other things I like about Redshift that I’ll save for another post. But if price/TB is the only measure, then performance becomes far too unimportant. When price/TB is the driver, performance becomes just a requirement to be met. The result is that adequate performance is OK today as long as the price/TB is low. IT departments are judged harshly for spending too much per terabyte… and judged less harshly, or excused, if performance becomes barely adequate or worse.

I believe that in the next year or two every BI/DW eco-system will be confronted with the reality of providing sub-three-second response to every query as users move to mobile devices: phones, tablets, watches, etc. IT departments will be faced with two options:

  1. They can procure more expensive systems with a high price/TB ratio… but with an effective price/performance ratio and change the driving metric… or
  2. They can continue to buy inexpensive systems based on a low price/TB and then spend staff dollars to build query-specific data structures (aggregates, materialized views, data marts, etc.) to achieve the required performance (a rough cost sketch of this trade-off follows below).
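To make the trade-off concrete, here is a back-of-the-envelope comparison in Python. Every figure in it (warehouse size, price per TB, staff count, loaded salary, time horizon) is an assumption invented for illustration, not data from any vendor or customer.

```python
# Purely illustrative TCO sketch of the two options above; all numbers are
# assumptions, not vendor or customer figures.

raw_tb = 100          # assumed warehouse size in TB
years = 3             # assumed planning horizon

# Option 1 (assumed): a higher price/TB system that meets the response-time
# target out of the box, with little extra engineering effort.
option1 = 25_000 * raw_tb

# Option 2 (assumed): a low price/TB system, plus staff building and operating
# query-specific aggregates, materialized views, and marts to hit the target.
engineers, loaded_cost = 5, 200_000
option2 = 10_000 * raw_tb + engineers * loaded_cost * years

print(f"Option 1 (price/performance driven): ${option1:,}")  # $2,500,000
print(f"Option 2 (price/TB driven):          ${option2:,}")  # $4,000,000
```

Under these particular assumptions the staff cost of option 2 dominates, which is the TCO point made below; with different assumptions the comparison obviously shifts.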

It is time for price/performance to become the driver, with support for some number of TBs as a requirement. This will delight users, who will appreciate better, not merely adequate, performance. It will lower the TCO by reducing the cost of developing and operating query-specific systems and structures. It will provide the agility so sorely missed in the DW space by letting companies use hardware performance, rather than people, to solve problems. It is time.
