Thoughts on Oracle 12c…



Here are some quick thoughts on Oracle 12c…


First, I appreciate the tone of the announcements. They were sober and smart.


I love the pluggable database stuff. It fits into the trends I have discussed here and here. Instead of consolidating virtual machines on multi-core processors and incurring the overhead of virtual operating systems, Oracle has consolidated databases into a single address space. Nice.


But let’s be real about the concept. The presentations make it sound like you just unplug from server A and plug into server B… no fuss or muss. But the reality is that the data has to be moved… and that is significant. Further, there are I/O bandwidth considerations. If database X runs adequately on A using 5GB/sec of read bandwidth, then there had better be 5GB/sec of free bandwidth on server B. I know that this is obvious… but the presentations made it sound like magic. In addition, 12c added heat maps and storage tiering… but when you plug in, the whole profile of what is hot for that server changes. This too is manageable, but not magic. Still, I think that this is a significant step in the right direction.
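To make the bandwidth point concrete, here is a small back-of-the-envelope sketch in Python. All of the server names and figures are invented for illustration; nothing here is measured from a real system.

```python
# Hypothetical sizing check before "plugging" a database into server B.
# Figures are illustrative only.

def can_host(required_gb_s: float, target_total_gb_s: float,
             target_used_gb_s: float) -> bool:
    """True if the target server has enough free read bandwidth."""
    return target_total_gb_s - target_used_gb_s >= required_gb_s

def move_time_hours(db_size_tb: float, copy_gb_s: float) -> float:
    """Hours needed just to copy the database files across."""
    return db_size_tb * 1024 / copy_gb_s / 3600

# Database X needs 5 GB/s of read bandwidth; server B has 12 GB/s total
# with 8 GB/s already in use -- only 4 GB/s free, so the plug-in won't fit.
print(can_host(5.0, 12.0, 8.0))                 # False

# And the move itself: a 50 TB warehouse over a 2 GB/s copy path is
# roughly seven hours of pure data transfer... hardly "no fuss or muss".
print(round(move_time_hours(50, 2.0), 1))       # 7.1
```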


I also like the inclusion of adaptive execution plans. This capability lets the execution engine change the plan on-the-fly if it determines that the number of rows it is seeing from a step differs significantly from the estimate that informed the optimizer. For big queries this can improve performance significantly… especially because, prior to 12c, Oracle’s statistics collection capability was weak. This too has been improved. Interestingly, the two improvements sort of offset: with better statistics it is less likely that the execution plan will have to adapt… but less likely does not mean unlikely. So this feature is a keeper.
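The idea can be sketched in a few lines of Python. This is a toy model of the concept, not Oracle’s implementation: the row thresholds and the 10x divergence trigger are made-up numbers.

```python
# Toy model of an adaptive plan: start with the join strategy the
# optimizer's estimate suggests, then re-plan if the rows actually
# observed diverge from the estimate by more than a threshold.

def choose_join(row_count: int) -> str:
    # Small inputs favor a nested-loop join; large ones a hash join.
    return "nested_loop" if row_count < 10_000 else "hash_join"

def adapt_plan(estimated_rows: int, actual_rows: int,
               threshold: float = 10.0) -> str:
    plan = choose_join(estimated_rows)
    # If reality is off by more than 10x, re-plan with the observed count.
    if actual_rows > estimated_rows * threshold:
        plan = choose_join(actual_rows)
    return plan

# The optimizer expected 500 rows (nested loop), but the step
# produced 2 million -- the plan adapts to a hash join mid-flight.
print(adapt_plan(500, 2_000_000))   # hash_join
print(adapt_plan(500, 800))         # nested_loop -- the estimate held up
```

Better statistics make the second case (the estimate holding up) more common, which is exactly why the two 12c improvements partly offset each other.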


I do not see any of the 12c major features significantly changing Oracle’s competitive position in the data warehouse market. If you run a data warehouse flat-out, you will not likely plug it elsewhere… the amount of data to move will be daunting. The adaptive execution plan feature will improve performance for a small set of big queries… but not enough to matter in a competitive benchmark. But for Oracle shops, adaptive execution is all positive.


5 thoughts on “Thoughts on Oracle 12c…”

  1. How will “adaptive execution plans” sit with optimiser hints I wonder – sort of autopilot versus manual controls?

    It’s a smart move to change the plan when the reality of the data is at odds with the optimiser’s estimates.

    Not that we’d ever see a database with stale/no stats of course 😉

    1. Hi Paul,

      Interesting question…

      Note that there is a third issue that this addresses (besides stale or non-existent stats)… which is the case where, in a complex query plan, the extrapolation from simple table statistics to the intermediate data set did not hold up after several predicates and joins were applied. In all three cases adaptive plans will help.

      If we assume that the plan will try to adapt at every step, then my guess is that the adapted plan will override hints. At least this is how I would do it if I were King.

      As I said… this is a very cool, very advanced, feature. Kudos to Oracle.


  2. Hi Rob,
    just a couple of comments:
    – regarding data movement – Oracle’s preferred architecture is a big cluster with SHARED storage (Exadata or a high-end SAN/NAS). So, there might not be a need to physically copy a PDB in order to plug it into a different container. Another example – you have a single server with container db 1; you install container db 2 (which includes a later patchset) on the SAME host, then unplug and re-plug your PDBs from CDB 1 to CDB 2 at your own pace.
    So, it might be less painful in some scenarios.
    – regarding PDB/CDB – I’m personally not excited… In most cases (for multitenancy), you could just have a schema per customer with regular access control and get that “single address space” and some isolation without paying for an extra cost option on top of Enterprise Edition.
    I wrote a longer note on this:


    1. Hi Ofir…

      If PDBs have any sensible place as part of a cloud server farm, it is about moving them from one set of servers or Exadata cluster to another to adjust utilization. If the database stays put, then portability is useless. So either portability is useless, which is why you are un-excited, or it is expensive and time-consuming, which is my point.


  3. I do agree with you…
    I don’t see anyone ever choosing Oracle and PDB as part of a new cloud server farm… Or even regular Oracle EE – the DB licensing model makes sense only as infra for expensive enterprise apps anyway.
    For multitenancy, you could always use a separate schema (and tablespace) per customer, or a shared schema managed by the app tier. Makes no sense to me to pay extra for minimal gain.
    In the enterprise context – I do see two use cases:
    1. Use what’s included in EE – it has nothing to do with multitenancy. Just use it as faster, safer upgrade mechanism for a single DB.
    2. As a consolidation platform – for example, if you have a lot of DBs you can’t consolidate (like a dozen EBS DBs) and you want to squeeze them into a single shared cluster like Exadata (so you can’t virtualize them). You may find that paying extra for PDB consolidation helps you utilize Exadata better and simplify your operations.
    Anyway, for multitenancy with 1000s, 10,000s, or more customers, I think a shared schema with database-enforced row-level security (like Oracle VPD) is the way to go… That might be nice to see in Hana 🙂
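
    The core idea behind VPD-style row-level security is predicate injection: a filter is appended to every query against the shared schema before it runs. Here is a toy Python sketch of that idea — the table and column names are invented, and real VPD uses policy functions and bind variables rather than string concatenation.

    ```python
    # Toy sketch of row-level security via predicate injection, the idea
    # behind Oracle VPD: every query against a shared multitenant table
    # gets a tenant filter appended before execution. Table and column
    # names here are invented for illustration.

    def add_tenant_predicate(sql: str, tenant_id: int) -> str:
        """Append a tenant filter, respecting any existing WHERE clause."""
        joiner = " AND " if " WHERE " in sql.upper() else " WHERE "
        return f"{sql}{joiner}tenant_id = {tenant_id}"

    print(add_tenant_predicate("SELECT * FROM orders", 42))
    # SELECT * FROM orders WHERE tenant_id = 42
    print(add_tenant_predicate("SELECT * FROM orders WHERE total > 100", 42))
    # SELECT * FROM orders WHERE total > 100 AND tenant_id = 42
    ```

    Because the filter is enforced in the database rather than the app tier, every path to the data — reports, ad-hoc queries, bugs in the application — gets the same isolation.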

