Our review of definitions for “real time” over the past couple of weeks has brought us to the point where we are ready to answer the question: Is the cloud capable of real-time performance, and if so, how? We have looked at some of the technical definitions of “real time” and considered expert advice on a practical, realistic one. Now let’s see whether the cloud really is capable of supporting real-time computing according to these definitions.
To summarize E. Douglas Jensen of Time-Critical Technologies, real-time computing in the real world means achieving optimal system performance within given time constraints. So, what is optimal system performance for a cloud system? And what time constraints are we considering here?
Each cloud application has its own time constraints. What seems uselessly slow for one set of users may be perfectly acceptable for others. For example, in many business applications data that is just a few seconds old is considered real time. In fact, many executives would boast about having access to up-to-the-minute data. Managers and analysts running certain industrial applications like inventory control or end-of-shift reports have similar requirements. When these users talk about real-time data in the cloud, delays of a few seconds or even minutes might be perfectly fine.
On the other hand, most operators working at a control panel in a plant expect to see things happen immediately. When they click a button, the light should come on right away, not after a few seconds, or even one second later. Values should update as they change in the process. Trend lines should be smooth curves drawn on the page, not jagged peaks that appear intermittently. As far as we are concerned, a cloud system that claims to be real time should be able to emulate that experience very closely. For our purposes, then, we can define “real time” for the cloud as follows:
“Real-time” cloud: Remote access to data, with local-like immediacy.
Of course, the remote aspect of any cloud application will always have an impact. The Internet and other networks inescapably introduce latencies into the data flow. This kind of delay in delivering the data brings to mind the US Defense Department Military Dictionary definition of “real time,” which is: “Pertaining to the timeliness of data or information which has been delayed only by the time required for electronic communication. This implies that there are no noticeable delays.”
A real-time cloud system should introduce no noticeable delays, or at least no more delay than absolutely necessary. Any intermediate software running on the cloud should support high-speed data throughput. Added latencies should be no more than a few milliseconds beyond the network latency itself. The infrastructure should almost certainly be data-centric, minimizing the need to convert between HTML, XML, SQL, or other data formats.
Broadly speaking, when people are working with the system, we can aim high and set our time constraint at human response time. The user should feel like he or she is working on a local system. Any extra processing time over and above network communication time should be kept to an absolute minimum. And what about M2M (machine-to-machine) applications? Here again, there should be as little delay as possible beyond any networking latencies.
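To make that goal concrete, here is a minimal sketch of one way to check a cloud endpoint against these constraints. The budget values, the helper names, and the simulated callables are all illustrative assumptions rather than part of any standard; the idea is simply to compare a bare network round trip against a full data request, and flag whether the added processing overhead and the total delay stay within a human-response-time budget.

```python
import time
import statistics

# Illustrative budgets (assumptions, not standards): an end-to-end budget on
# the order of human response time, and only a few milliseconds of processing
# overhead beyond the raw network latency.
HUMAN_RESPONSE_BUDGET_MS = 200.0
MAX_PROCESSING_OVERHEAD_MS = 5.0

def timed_ms(fn, samples=20):
    """Run fn() several times and return the median elapsed time in milliseconds."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        fn()
        results.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(results)

def check_real_time(network_probe, cloud_request):
    """Compare a bare network round trip against a full cloud data request.

    network_probe: callable doing a minimal round trip to the endpoint
                   (e.g. a ping or empty echo request).
    cloud_request: callable performing the actual data read or write.
    """
    network_ms = timed_ms(network_probe)
    total_ms = timed_ms(cloud_request)
    overhead_ms = max(total_ms - network_ms, 0.0)

    print(f"network latency : {network_ms:7.1f} ms")
    print(f"total latency   : {total_ms:7.1f} ms")
    print(f"added overhead  : {overhead_ms:7.1f} ms")

    within_overhead = overhead_ms <= MAX_PROCESSING_OVERHEAD_MS
    within_budget = total_ms <= HUMAN_RESPONSE_BUDGET_MS
    return within_overhead and within_budget

if __name__ == "__main__":
    # Stand-in callables for illustration only; in practice these would wrap
    # an actual echo call and an actual data read against the cloud service.
    check_real_time(lambda: time.sleep(0.030),   # ~30 ms simulated round trip
                    lambda: time.sleep(0.034))   # ~34 ms simulated full request
```

In a real deployment the two callables would hit the same cloud endpoint, so that the difference between them isolates the overhead added by the intermediate software rather than by the network.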
Although the debate about the definition of “real time” may continue for years to come, we can glean enough for our purposes. For our real-world applications on the cloud, we can define “real time” as achieving optimal system performance within given time constraints. So, if we define our time constraints to be human response time, and accept the limitations of networking latencies on system optimization, then we can confidently assert that the cloud is capable of supporting real-time systems.