Coarse-grained Cloud Economics

Let’s further discuss DW economics in the cloud.

Fig. 1 The Same Old Multi-DW Picture: a decoupled, multi-instance DW

Imagine that DW#2 in Figure 1 supports queries from your corporate finance department, which is, on average, active twelve hours a day, Monday through Friday, from 6 AM to 6 PM. Further, let’s imagine that cloud nodes cost $4/hour.

If DW#1 and DW#3 run 24x7, 360 days per year, then the combined cost of the two is $483,840/year (fourteen nodes at $4 × 24 × 360 = $34,560 per node).

If DW#2, the finance cluster, runs on elastic cloud infrastructure, the annual cost per node will be $4 × 12 × 360 = $17,280. If the finance cluster runs in the cloud, but on infrastructure that is not elastic, the price will be $4 × 24 × 360 = $34,560 per node, or twice as much.
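The arithmetic above can be sketched in a few lines. The $4/hour rate and 360-day year come from the example; treating the cost as per-node is my simplifying assumption.

```python
# Per-node annual cost model for the running example:
# $4/hour per node and a 360-day year (both from the article).
NODE_COST_PER_HOUR = 4  # dollars per node-hour

def annual_cost(hours_per_day, days_per_year=360, rate=NODE_COST_PER_HOUR):
    """Annual cost of one node running hours_per_day, every counted day."""
    return rate * hours_per_day * days_per_year

elastic_cost = annual_cost(12)  # finance cluster on 12 hours/day
static_cost  = annual_cost(24)  # same cluster left on around the clock

print(elastic_cost)  # 17280
print(static_cost)   # 34560
```

The gap only widens with more nodes: every always-on node pays the same two-to-one penalty.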

Note that DW#1 and DW#3 may also be able to turn on and off, or scale up and down, reducing the average resources required and therefore the costs for those applications as well.

Your cloud provider is responsible for providing elastic compute on demand: they must deliver resources whenever the finance folks request them. If someone works late on a given day, servers must be available; if that overtime load needs only one node, your application should elastically shrink the cluster to match. If, at quarter-end, finance requires more nodes to support a trial-close process, the cluster scales up.
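The policy described above reduces to a simple sizing rule. Here is a toy sketch, not any real cloud API; the ten-queries-per-node capacity and the 32-node ceiling are illustrative assumptions.

```python
# Toy elasticity policy: pick a cluster size from current demand, so the
# cluster shrinks to one node for after-hours stragglers and grows for
# the quarter-end trial close. Capacity numbers are made up.
def desired_nodes(active_queries, queries_per_node=10, max_nodes=32):
    """Return the node count needed to serve the current query load."""
    if active_queries == 0:
        return 0  # nobody working: shut the cluster down entirely
    needed = -(-active_queries // queries_per_node)  # ceiling division
    return min(needed, max_nodes)

desired_nodes(0)    # 0  -> cluster off overnight
desired_nodes(3)    # 1  -> one analyst working late
desired_nodes(250)  # 25 -> quarter-end trial close
```

A real deployment would layer hysteresis and warm-up time onto this, but the economics follow from the same idea: capacity tracks demand instead of the peak.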

The point is that finance needs an average of twelve hours of compute per day. They do not need a static, always-on configuration.

In the CIO world, elasticity may feel problematic: elastic infrastructure means variable costs, while static costs are predictable. CIOs want predictability, but it would be silly for a CIO to prefer predictably high costs over low, variable costs.

High costs come from running on elastic infrastructure without a scalable application or database. Imagine an application where doubling the resources doubles the cost per minute but does not halve the number of minutes: you pay more and finish no sooner.
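The contrast is easy to see in numbers. A minimal sketch, reusing the $4/node-hour rate from the example; the ten-node, ten-hour job is a made-up workload.

```python
# Illustrative only: total cost of doubling a cluster for a workload
# that scales linearly versus one that does not scale at all.
RATE = 4  # dollars per node-hour, as in the running example

def job_cost(nodes, hours):
    """Total cost of a job: node count x runtime x hourly rate."""
    return nodes * hours * RATE

# Scalable job: 2x nodes cuts a 10-hour run to 5 hours; cost unchanged.
scalable_before = job_cost(10, 10.0)  # $400
scalable_after  = job_cost(20, 5.0)   # $400, answers twice as fast

# Non-scalable job: 2x nodes, still 10 hours; cost simply doubles.
stuck_before = job_cost(10, 10.0)     # $400
stuck_after  = job_cost(20, 10.0)     # $800, no faster
```

With a linearly scalable workload, elasticity buys speed for free; without one, it just raises the bill.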

I mentioned before and will say again: cloud-native computing offers the ability to economically apply massive compute to a scalable problem and reduce response times by a factor of a hundred or a thousand.

Cloud computing is not about taking short-term advantage of current tax laws to move some capital budget to the operating budget. If, as a CIO or CFO, you can make this immediate move, good for you. But I believe that, as a CIO or CTO, you need to understand the longer-term implications and begin the move to cloud-native technology. That is where cost reduction lies.

Wrapping up: using massive compute to improve performance lets your top database architects apply new technologies to new business opportunities rather than spend their rare skills tuning around performance issues in current applications. It reduces the time from when information arrives to when that data becomes actionable by your business. And it makes it possible to implement new, compute-intensive advanced analytics that optimize the business further.

I apologize for the preachy nature of this last paragraph, but I believe that cloud computing supports multiple business-application paradigm shifts, and I want to urge leaders to act. It is not just about outsourced infrastructure. I hope these explanations make the opportunities clear.

Next time I’ll start showing how database functionality can be further optimized to run ever more economically.