A Trend in Systems Architecture

I composed the video below on a contract for Intel… but they were kind enough to let me tell the story with only a light promotional touch. I think that you will find the story interesting, as it describes 20+ years of systems architecture and suggests where we may well be headed in the next 5 years…

The bottom line here is that we developed a fully distributed systems architecture over the course of 15 years in order to exploit the economics of microprocessors. The distributed architecture was required because no micro-based server, and no small cluster of micro-based servers, could manage an enterprise-sized workload. We had to gang microprocessors together to solve the problem. Today we can very nearly handle an enterprise workload on a small cluster of 32-core or 64-core processors… so distribution may no longer be a driving requirement.

I’ll post a couple more notes on this video over the next few weeks. There are two possible endings to the video and we’ll explore these future states.


About three years ago I started with SAP, and early in my second week I was asked to appear before Hasso Plattner and Vishal Sikka. In the five minutes before I walked in I was informed that the topic was a book they wanted me to ghost-write for them. I was flabbergasted… I had never written a book… but so it goes. In the meeting I was told that the topic was “HANA for CIOs” and I was handed a list of forty or fifty key words… topics to be included in the narrative. We agreed that we would meet again to consider the content more fully. Despite several requests… that was the last meeting I had on this subject, and the project dissolved.

In the month or so before it became clear that there was no real interest in the project, I struggled to figure out how to tell a story about HANA that would be compelling… rather than making the book a list of technical features. The story in the video, with the HANA ending that I will post next, was to be the story that opened the book.

8 thoughts on “A Trend in Systems Architecture”

      1. Reminds me of an early Teradata innovation: the Charles River Application Processor. I was tasked with the UK briefing of this under the title “Peter Rix talks C…” 😉

  1. I like the video Rob – very interesting and a nice summary.

    Maybe I’m missing something, but it seems to me that the “fat” still exists in the fully distributed virtualised mainframe environment…there are still multiple OSes and networking comms involved…the hardware is just hiding the issue with raw power (more and more cores…and licenses for them, of course!).

  2. If you deploy a fully distributed application over a series of virtual machines, then there is a lot of virtual fat and multiple operating systems. If you deploy a single virtual mainframe with the database, web server, and application running on one copy of Linux, as in a LAMP deployment, then much of the fat is squeezed out. If you deploy a stateless app on Linux with only the web server and the application, LAP if you will, and deploy the stateful DBMS across a virtual machine boundary, then you inject some fat. But if you deploy both using containers like Docker that run over one copy of Linux, it becomes mainframe-like again.

    In other words… when I use the word mainframe I am suggesting that all of the stack (except the thin client) is running on the same OS instance.

    Does this help?
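The comparison above can be sketched as a toy tally. This is an illustrative model of my own, not from the post: the deployment names and the overhead counts are assumptions, just meant to show where OS copies and cross-OS network hops accumulate in each topology.

```python
# Toy comparison of the deployment models discussed above.
# "os_instances" counts separate OS copies in the stack;
# "network_hops" counts tier boundaries that cross an OS/VM boundary.
deployments = {
    # one VM (and one OS copy) per tier: web, app, and DBMS
    "distributed VMs": {"os_instances": 3, "network_hops": 2},
    # web server + app + DBMS on one Linux instance ("virtual mainframe")
    "single-OS LAMP": {"os_instances": 1, "network_hops": 0},
    # stateless LAP on one instance, stateful DBMS across a VM boundary
    "LAP + remote DBMS": {"os_instances": 2, "network_hops": 1},
    # Docker-style containers: separate images, one shared kernel
    "containers, shared kernel": {"os_instances": 1, "network_hops": 0},
}

for name, cost in deployments.items():
    print(f"{name}: {cost['os_instances']} OS copies, "
          f"{cost['network_hops']} cross-OS hops")
```

The point of the sketch is simply that containers land in the same row as the single-OS LAMP stack: the fat comes from extra OS copies and the network boundaries between them, not from how the images are packaged.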


    1. …if I was more knowledgeable, maybe – thanks for making me go and look up a few terms on Wikipedia! 🙂

      If the stack is all running on the same OS instance, then we are limited by the maximum possible size of the hardware underneath, right? So, for a VLDB environment, for example, that might be a limitation…necessitating an architectural choice that then can’t do without some degree of “fat”, whether it be physical or virtual.

      1. VLDB is an issue, as you have to share nothing over a network… and this injects fat. But I do not see an alternative?

