How is cloud computing like buying a cup of coffee? Joe Weinman uses a clever analogy of purchasing a cup of coffee to explain some of the factors that go into providing faster, and therefore more valuable, cloud services. I thought it would be fun to see how his coffee-buying model may or may not fit with the economics of real-time cloud computing.
To speed up the process of getting a cup of coffee from your local coffee shop (and data from your cloud service), Weinman suggests several options for the service provider and customer:
- Optimize the process by streamlining and reducing the number of tasks that the coffee shop staff need to carry out to prepare a cup of coffee. In cloud computing, this equates to optimizing algorithms and implementing other processing efficiencies on the server.
- Use more resources, such as hiring more staff behind the counter to make and pour coffee, so that multiple customers can be served simultaneously. Cloud service providers do something similar when they provide parallel processing for large computing tasks.
- Reduce latency by opening a coffee shop closer to the customer’s office, or by the customer moving closer to the shop. We see this playing out in certain situations when cloud customers who require ultra-high-speed performance actually move their physical location to be closer to the data center.
- Reduce round trips for coffee by picking up a whole trayful of coffees on each trip to the shop. In a similar way, it is sometimes possible to send multiple requests or receive multiple replies in a single transaction with the cloud.
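The last option above, reducing round trips, can be sketched in code as batching. The toy Python below simulates it: the functions and the fixed 50 ms round-trip cost are hypothetical stand-ins for a real cloud API, but they show why fetching a whole "tray" at once amortizes latency.

```python
def fetch_one(item_id, rtt_ms=50):
    """Simulate one request/reply transaction: one round trip per item."""
    return {"id": item_id, "latency_ms": rtt_ms}

def fetch_batch(item_ids, rtt_ms=50):
    """Simulate fetching a whole 'tray' of items in a single round trip."""
    return {"items": [{"id": i} for i in item_ids], "latency_ms": rtt_ms}

# Ten separate trips pay the round-trip cost ten times...
naive_latency = sum(fetch_one(i)["latency_ms"] for i in range(10))
# ...while one batched trip pays it once, for the same ten items.
batched_latency = fetch_batch(list(range(10)))["latency_ms"]

print(naive_latency, batched_latency)  # 500 50
```

The data transferred is identical; only the number of trips to the "coffee shop" changes.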
In addition to these four options, Weinman suggests a rather drastic alternative: eliminate the need for the transaction altogether. In the analogy, that would mean no longer buying coffee from the shop at all, and either making it yourself at the office or doing without. This equates to not using cloud services at all.
What is the real-time approach? Data on tap. Instead of making round-trips to the coffee shop every few hours or days, just pipe the coffee directly to the office, and let it flow past your desk, always hot and fresh, ready to be scooped up and savored. Just dip your cup into the stream.
A key conceptual shift takes place when we implement real-time cloud computing: no transaction is needed to receive data. The request-process-reply cycle is replaced by an always-on stream of data, so delay is minimal.
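The shift from transactions to a stream can be sketched in a few lines of plain Python. This is purely illustrative, not any particular product's API: the transactional model makes one call per datum, while the streaming model is a generator that the consumer simply dips into, like a cup in the flow.

```python
import random

def request_reply():
    """Transactional model: each datum requires its own
    request-process-reply round trip."""
    return random.random()

def data_stream():
    """Streaming model: an always-on source. No per-datum request;
    the consumer just takes values as they flow past."""
    while True:
        yield random.random()

# Transactional: five data points cost five separate transactions.
transactional = [request_reply() for _ in range(5)]

# Streaming: open the tap once, then scoop values at will.
stream = data_stream()
streamed = [next(stream) for _ in range(5)]

print(len(transactional), len(streamed))  # 5 5
```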
In the physical world this would be considered wasteful. I can see my grandfather, who lived through the Great Depression, recoiling in horror at the thought of those gallons per minute of undrunk coffee going down the drain somewhere. But real-time data gets generated fresh all the time, and most of it quickly vaporizes into thin air anyway. Best to put it into the hands of someone who can use it. Data on tap means there is actually less waste.
But how, some may ask, can you possibly contain it? How do you get a grip? How can you analyze a moving target? What if a highly valuable factoid escapes my cup and flows off into oblivion?
Working with streaming data requires different tools and skills. High-speed, in-line analytics that can keep up with the incoming flow will help decision-makers respond to ever-changing conditions. Super-efficient real-time data historians that capture every event, large or small, will provide quick access to minute details occurring on a millisecond time scale. Even now, experts are working on advanced methods for mining the ever-growing stores of "big data."
Perhaps more important than the tools, though, is a change of perspective. We need to shift our thinking from a static world to a dynamic one. Working with a data stream from the cloud offers new opportunities, and challenges our conventional thinking in some interesting ways. We may continue to buy our coffee by the cup from the local shop, but maybe soon we’ll have our data on tap.