As we look over the technical definitions of real-time computing, it is clear that the cloud cannot support hard real-time performance. The cloud is not deterministic: despite advances in networking technology, there will always be delays and network breaks, making it impossible to implement a hard real-time system, in which missing a single deadline would cause total system failure. If we want to talk about real-time data in the cloud, then, we need a realistic definition of “real time” for the cloud.
Someone who has done a lot of thinking about how to define “real time” is E. Douglas Jensen. Jensen has spent over 30 years developing real-time systems for industrial and military applications at institutions such as Carnegie Mellon University, Hewlett-Packard, and Honeywell. More recently he has worked as a consultant for MITRE and the U.S. Department of Defense, as well as in his own consulting practice, Time-Critical Technologies (TCT).
According to Jensen, once you step away from the clear-cut definition of “real time” as hard real time, things get vague. In his experience, researchers and academics working in the lab have created good theoretical models for real-time computing, but those models often prove impractical when applied to real-world situations. This is particularly true of distributed systems, and the cloud is the ultimate distributed system.
On his website, Real-time for the Real World, Jensen offers a new approach to thinking about real-time systems in a realistic way. First, he says that anything other than the concept of “hard real time” has been difficult to pin down and define, even by the experts. Second, it is tough to do any kind of experimental research on real-time systems in the real world because they are typically big, complicated, expensive, and mission-critical. He then offers a reformulation of the idea of real time that has enabled him to construct highly complex real-time computing systems.
Jensen boils the issue down to two concepts: 1) time constraints (or deadlines) and 2) achieving acceptably optimal system performance within those constraints. He says, “Real-time computing is about satisfying time constraints acceptably well with acceptable predictability—according to application- and situation-specific acceptability criteria, given the current circumstances.”
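Jensen’s “application- and situation-specific acceptability criteria” can be expressed directly in code. The sketch below is a hypothetical example (the function name, 100 ms deadline, and 95% threshold are all invented for illustration): a soft time constraint is considered satisfied when a sufficient fraction of observed response times beat the deadline.

```python
def acceptably_timely(latencies_ms, deadline_ms=100.0, required_fraction=0.95):
    """Hypothetical acceptability criterion for a soft time constraint:
    the constraint is "satisfied acceptably well" if at least
    `required_fraction` of responses arrive within `deadline_ms`."""
    if not latencies_ms:
        return False
    on_time = sum(1 for latency in latencies_ms if latency <= deadline_ms)
    return on_time / len(latencies_ms) >= required_fraction

# 95 fast responses and 5 slow ones: exactly at the 95% threshold.
print(acceptably_timely([50.0] * 95 + [200.0] * 5))   # acceptable
print(acceptably_timely([50.0] * 94 + [200.0] * 6))   # not acceptable
```

A hard real-time criterion would be the degenerate case `required_fraction=1.0`; the point of Jensen’s formulation is that each application chooses its own threshold given the current circumstances.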
As in real life, time constraints can vary in immediacy and importance. Some deadlines might be like catching a plane flight—not immediate, but important. You may have some time to get to the gate, but when the plane takes off, you’d better be on board. Other deadlines might be more like a phone call, which is always immediate but may not be important. You get no advance warning when the phone rings, but if you don’t answer until the third or fourth ring, no problem. Who is calling determines the importance, and sometimes even letting it go to voice mail may be just fine.
Also in real life, many things can happen at once. While you are rushing to catch the flight, you realize that you left your ticket at home. Just then your phone rings, someone bumps you from behind, you drop your bag, and then you notice that your wallet is missing. What to do first?
Real-time computing systems are often put under similar demands, and need to respond in the best possible way. An optimized system always meets its hard deadlines, and for soft deadlines it minimizes the number missed and/or the lateness of the responses. Generally speaking, this is what it means to achieve optimal system performance within given time constraints.
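The airport scenario above can be sketched as a toy scheduling problem. In the hypothetical code below (all names, durations, and deadlines are invented for illustration), tasks carry both a deadline (immediacy) and an importance weight, and a classic earliest-deadline-first policy runs them back to back. When the workload is overloaded, EDF ignores importance, so the most important task can be the one that goes late:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: float    # execution time the task needs
    deadline: float    # relative deadline, in seconds from now (immediacy)
    importance: int    # application-specific weight (higher = more important)

def schedule_edf(tasks):
    """Run tasks back to back in earliest-deadline-first order.
    Returns (execution order, total lateness, names of missed deadlines)."""
    order = sorted(tasks, key=lambda t: t.deadline)
    now, lateness, missed = 0.0, 0.0, []
    for t in order:
        now += t.duration
        if now > t.deadline:
            missed.append(t.name)
            lateness += now - t.deadline
    return [t.name for t in order], lateness, missed

# An overloaded workload: three competing demands with made-up numbers.
tasks = [
    Task("answer_phone", duration=2.0, deadline=3.0, importance=1),
    Task("pick_up_bag",  duration=1.0, deadline=5.0, importance=2),
    Task("find_wallet",  duration=4.0, deadline=6.0, importance=3),
]
order, lateness, missed = schedule_edf(tasks)
print(order)    # ['answer_phone', 'pick_up_bag', 'find_wallet']
print(missed)   # ['find_wallet'] -- the most important task is the one missed
```

Here EDF lets the wallet search finish a second late while the unimportant phone call is answered on time; an importance-aware policy in Jensen’s style would instead shed or delay the least important conflicting task. That tradeoff between immediacy and importance is exactly what an acceptably optimal scheduler must weigh.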
A realistic approach to real time recognizes that each system, large or small, fast or slow, has its own particular time constraints and its own criteria for acceptable performance. Naturally, that can include cloud systems. Taking this approach, we are now ready to look at the demands of some typical real-time systems, and how they can best be met when moved to the cloud.