In the past few years, as cloud computing has gained acceptance, it has also grown more sophisticated, and the number of options facing a cloud shopper keeps growing. Two important categories are the public cloud and the private cloud. Combining the two into a hybrid cloud promises the scalability and lower costs of the public cloud without giving up the internal control and security that a private cloud offers for in-house systems.
This combination of public and private clouds can appeal to users of real-time data, particularly in industries that need to keep a close eye on mission-critical systems. Parts of those systems may need to be shared with a wide audience of users, while the core proprietary data and applications stay protected. Some companies may prefer to experiment first with a private cloud and then, once they have experience working in a cloud environment, move some data onto a public cloud.
In any case, one of the challenges of implementing a hybrid cloud is integration. How do you ensure a seamless connection and interoperability among in-house, private cloud, and public cloud systems? On the contractual side, you’ll want to ensure that the service level agreements (SLAs) of the different vendors are compatible. On the technical side, you’ll need to be sure that the real-time data flow is secure, fast, and uninterrupted.
If your system meets the nine core requirements for real-time cloud systems, then implementing a hybrid of public and private clouds should be relatively straightforward. A publish/subscribe data delivery model that pushes data to the cloud through closed firewalls ensures that the data source is protected. It should also be possible to mirror data between a private cloud and a public cloud when necessary; mirroring keeps the data in the system synchronized in real time. For data protocols that are not network-capable, the system will need the ability to tunnel the data out through closed firewalls and transfer it across the network over TCP.

In our opinion, a control system engineer who is looking at cloud systems to provide remote access to plant data should consider a hybrid solution ahead of any other. For example, the system shown above provides several advantages over a public cloud or a remotely hosted private cloud system:
- The plant system can run normally in the event of a wide-area network outage. Remote access will be cut off, obviously, but the plant system will continue to run in isolation, unaffected by the network failure.
- People inside the plant will have access to the data at LAN speeds and latencies. Their data access will not have to make a round trip from the plant to the public cloud server and back again.
- It is possible (even recommended) to give read/write access to the private cloud server from within the plant, while allowing no write access to the public cloud server’s users, or indeed to the public cloud server itself. This way, if the public cloud server is compromised, it cannot be used to compromise the plant.
- This arrangement allows the plant to isolate itself quickly from the public cloud server if necessary by terminating a single connection.
- This also allows the plant to publish only a partial data set to the public cloud. Users within the plant need access to the complete data set, but remote users may only need access to a less-sensitive subset of the data.
- It can be implemented incrementally. The private cloud server can be added to the existing plant system without disrupting it, and private users can migrate at their own pace to it. Once the private cloud server has been validated, it can be connected to the public cloud server, again without any disruption to the plant system.
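The read-only mirroring of a partial data set described above can be sketched in a few lines. This is an illustrative, in-memory model, not any vendor’s API: the class names, point names, and `allowed` filter are all assumptions, and a real system would carry the updates over a network connection between the servers.

```python
# Hypothetical sketch: a private cloud server holds the full data set and
# mirrors a filtered, read-only subset to a public cloud server.

class CloudServer:
    """Holds the latest value of each data point and notifies subscribers."""
    def __init__(self, read_only=False):
        self.points = {}
        self.subscribers = []
        self.read_only = read_only  # public server: no external writes

    def publish(self, name, value, from_mirror=False):
        # A read-only server accepts updates only from its upstream mirror,
        # never from its own (remote) users or from the server itself.
        if self.read_only and not from_mirror:
            raise PermissionError("public cloud server is read-only")
        self.points[name] = value
        for callback in self.subscribers:
            callback(name, value)

    def subscribe(self, callback):
        self.subscribers.append(callback)


def mirror(source, target, allowed=None):
    """Forward updates from source to target, optionally only a subset."""
    def forward(name, value):
        if allowed is None or name in allowed:
            target.publish(name, value, from_mirror=True)
    source.subscribe(forward)


# In-plant private cloud server: full data set, read/write.
private_srv = CloudServer()
# Public cloud server: read-only, receives only the less-sensitive subset.
public_srv = CloudServer(read_only=True)
mirror(private_srv, public_srv, allowed={"flow_rate", "tank_level"})

private_srv.publish("flow_rate", 42.0)       # mirrored to the public server
private_srv.publish("recipe_setpoint", 7.5)  # proprietary: stays private
```

Note that isolating the plant (as in the bullet above) corresponds to dropping the single mirror link, and a compromised public server still has no way to write back into the plant.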
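The outbound push over TCP can also be sketched with plain sockets: the plant-side agent connects outward through its closed firewall, so no inbound port is ever opened at the plant. Both endpoints run locally here purely for illustration, and the newline-delimited JSON message format and point names are assumptions, not a real protocol.

```python
import json
import socket
import threading

def cloud_listener(server_sock, received):
    # Cloud side only accepts the plant's outbound connection; it never
    # initiates a connection into the plant network.
    conn, _ = server_sock.accept()
    with conn, conn.makefile("r") as stream:
        for line in stream:
            received.append(json.loads(line))

# Cloud side: listen on an ephemeral local port (stands in for the server).
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
received = []
listener = threading.Thread(target=cloud_listener, args=(server, received))
listener.start()

# Plant side: connect outbound and push point updates as they occur.
plant = socket.create_connection(server.getsockname())
for name, value in [("flow_rate", 42.0), ("tank_level", 3.2)]:
    message = json.dumps({"point": name, "value": value}) + "\n"
    plant.sendall(message.encode())
plant.close()   # closing this one connection isolates the plant
listener.join()
server.close()
```

Because the data flows only over this single plant-initiated connection, the quick-isolation property from the list above falls out naturally: closing that connection severs the public side completely.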