Data management challenges in fog-to-cloud systems

Posted by Toni Cortès, Storage Systems Group Manager, Barcelona Supercomputing Center, and Anna Queralt, Senior Researcher, Barcelona Supercomputing Center

In recent years we have seen a steep increase in the number of connected devices, from activity trackers to smart cars and smart homes. This paradigm is known as the Internet of Things (IoT): the networked connection of all kinds of devices. Given the current trend, the huge number of connected devices envisioned by many studies and reports (in the order of tens of billions) will clearly become a reality in the coming years. This scenario requires rethinking the way computation and data have traditionally been managed in order to exploit the new possibilities it offers. For instance, services can use fog or edge resources (things) when fast responses are required, and combine them with cloud computing capabilities to perform more sophisticated, compute-intensive analytics.

The fog-to-cloud environment can be seen as a highly dynamic network in which each node is a device, potentially mobile and potentially with little capacity. This sets it apart from traditional distributed environments, where one can assume that failures and disconnections are the exception rather than the rule. This scenario, combined with the fact that data is key in this paradigm, requires new approaches to data management that keep data available at all times.

First, data has to be somewhat global, but not necessarily fully global. A given piece of data residing in a node may be needed by services running on nearby nodes, yet be irrelevant to nodes far away. For instance, the characteristics of a given node may be of interest to other nodes that are currently close by, but not to the rest of the system. Another example, in the context of smart cities, is the amount of traffic in a certain street, which may be relevant to nearby traffic lights but not to those in another neighborhood. Thus, this kind of environment needs a mechanism to define the visibility of data on a per-object basis, since not all pieces of data require the same visibility.
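
To make the idea concrete, here is a minimal Python sketch of per-object visibility; the store, the positional scope, and all names are illustrative assumptions, not part of any particular system:

```python
import math
from dataclasses import dataclass

@dataclass
class DataObject:
    key: str
    value: object
    origin: tuple        # (x, y) position of the node that owns the object
    radius_km: float     # per-object visibility: how far the object is shared

class LocalDataStore:
    """Toy store that answers queries only for objects visible at a location."""
    def __init__(self):
        self._objects = {}

    def put(self, obj: DataObject):
        self._objects[obj.key] = obj

    def visible_at(self, pos):
        # Only objects whose visibility scope covers `pos` are returned.
        return [o for o in self._objects.values()
                if math.dist(pos, o.origin) <= o.radius_km]

store = LocalDataStore()
store.put(DataObject("traffic/main-st", 42, origin=(0.0, 0.0), radius_km=1.0))
print(store.visible_at((0.4, 0.3)))  # a nearby traffic light sees the object
print(store.visible_at((5.0, 5.0)))  # another neighborhood does not: []
```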

Second, nodes holding a shared piece of data may enter and leave the system at any time. Their presence may even be intermittent, but this should not prevent other nodes from accessing their data. Fog-to-cloud systems therefore need to offer mechanisms such as replication to guarantee that nodes needing a piece of data can still access it, even when the originating node is unavailable.
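
A toy Python illustration of this fallback behavior follows; the node model and helper functions are assumptions made for the example:

```python
class Node:
    """Stand-in for a fog device: it may be offline at any moment."""
    def __init__(self, name):
        self.name = name
        self.online = True
        self.store = {}

def replicate(key, value, origin, replicas):
    # Write to the originating node and to every reachable replica holder.
    for node in [origin, *replicas]:
        if node.online:
            node.store[key] = value

def read(key, origin, replicas):
    # Prefer the origin, but fall back to any replica if it is unreachable.
    for node in [origin, *replicas]:
        if node.online and key in node.store:
            return node.store[key]
    raise KeyError(f"{key} unavailable on all known nodes")

sensor, gateway, cloud = Node("sensor"), Node("gateway"), Node("cloud")
replicate("temp/street-12", 21.5, sensor, [gateway, cloud])
sensor.online = False                                    # the device drops out
print(read("temp/street-12", sensor, [gateway, cloud]))  # still served: 21.5
```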

Furthermore, these replicas need to follow different synchronization policies depending on the data itself. Some kinds of data must be synchronized at all times across every node where they are visible. An example is the current position of a police car or an ambulance, which must be precise in the event of an emergency. For many other kinds of data, however, more relaxed synchronization policies suffice. For instance, the temperature in the street does not change dramatically, so eventual updates are enough, avoiding unnecessary communication and data transfers.
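
Sketched in Python, assuming a simple per-type policy flag (the names are illustrative, not taken from any real system):

```python
from enum import Enum

class Sync(Enum):
    STRONG = "strong"      # propagate to every replica before the write returns
    EVENTUAL = "eventual"  # queue updates and propagate them lazily

class ReplicatedValue:
    def __init__(self, policy, replicas):
        self.policy = policy
        self.replicas = replicas   # plain dicts standing in for remote nodes
        self._pending = []

    def write(self, key, value):
        if self.policy is Sync.STRONG:
            for r in self.replicas:             # e.g. an ambulance position:
                r[key] = value                  # every node sees it immediately
        else:
            self._pending.append((key, value))  # e.g. street temperature

    def flush(self):
        """Called opportunistically, e.g. when bandwidth is cheap."""
        for key, value in self._pending:
            for r in self.replicas:
                r[key] = value
        self._pending.clear()

ambulance = ReplicatedValue(Sync.STRONG, replicas=[{}, {}])
ambulance.write("pos", (41.39, 2.17))   # all replicas are updated at once
thermometer = ReplicatedValue(Sync.EVENTUAL, replicas=[{}, {}])
thermometer.write("temp", 21.5)         # queued; propagated later by flush()
```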

In addition, not all nodes need to have exactly the same view of a certain piece of data. That is, a “replica” does not need to be an exact copy of the original object; it may instead contain a summary of the information held by one or more objects. For instance, one may not need the exact temperature reported by every sensor in a city, but just the average per square kilometer.
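
Such a compact summary replica could be computed along these lines (a sketch; the grid size and reading format are assumptions):

```python
from collections import defaultdict

def aggregate_by_cell(readings, cell_km=1.0):
    """Summarize (x, y, temperature) readings into one average per grid cell,
    so a remote node can hold a compact summary instead of every sensor value."""
    cells = defaultdict(list)
    for x, y, temp in readings:
        cells[(int(x // cell_km), int(y // cell_km))].append(temp)
    return {cell: sum(t) / len(t) for cell, t in cells.items()}

readings = [(0.2, 0.3, 21.0), (0.8, 0.1, 23.0), (1.5, 0.4, 19.0)]
print(aggregate_by_cell(readings))
# {(0, 0): 22.0, (1, 0): 19.0} -- one value per square kilometer, not per sensor
```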

Third, these behaviors have to be defined and managed as part of the data store, not within the applications or services accessing it. Otherwise, every service would have to reimplement them, causing a significant loss of productivity in the development of services and applications, and increasing the risk of unintentional misbehavior, which could be critical in a system where data is used by different, independent services.
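
The contrast can be sketched as follows: the behavior is declared once with the data type, so no service has to reimplement it (all names here are illustrative):

```python
class SharedRecord:
    """The synchronization behavior travels with the data type: every service
    that writes a SharedRecord gets the same, centrally defined behavior."""
    replicas = ()   # stand-ins for remote copies of the object

    def __init__(self, key, value=None):
        self.key = key
        self.value = value

    def write(self, value):
        self.value = value
        for replica in self.replicas:   # enforced by the store, not the caller:
            replica.value = value       # a service cannot forget or get it wrong

# Two independent services share the object; neither carries replication logic.
record, mirror = SharedRecord("traffic/main-st"), SharedRecord("traffic/main-st")
record.replicas = (mirror,)
record.write(42)        # service A writes
print(mirror.value)     # 42 -- service B sees the update without extra code
```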

And finally, as in any distributed system, all these behaviors have to be offered without a single point of failure. This requirement becomes even more critical in a fog-to-cloud environment where, by definition, many of the devices participate in the system intermittently or spontaneously.

In mF2C we are taking advantage of the dataClay technology, developed at the Barcelona Supercomputing Center, to fulfill all these requirements. dataClay is an object store where data is defined using Java or Python objects and its behavior can be defined programmatically, independently of applications. In mF2C, we have exploited this feature to provide different synchronization policies, such as consistency between replicas or aggregation, depending on the data type. These policies can be inherited by the required data types, so that such behaviors become part of the objects of that type. Furthermore, dataClay offers federation mechanisms that enable two dataClay instances to temporarily share a subset of their data objects, as if they belonged to both.
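
The following Python sketch conveys the flavor of this inheritance-based approach. It is only loosely modeled on dataClay's class-based style; the class and method names are illustrative assumptions, not the actual dataClay API:

```python
class SynchronizedObject:
    """Base type carrying a synchronization policy; subclasses inherit it."""
    replicas = ()   # stand-ins for copies held by other nodes

    def set(self, attribute, value):
        setattr(self, attribute, value)
        self._synchronize(attribute, value)

    def _synchronize(self, attribute, value):
        pass            # default policy: no propagation

class EagerlySynchronized(SynchronizedObject):
    def _synchronize(self, attribute, value):
        for replica in self.replicas:   # every replica is updated immediately
            setattr(replica, attribute, value)

class AmbulancePosition(EagerlySynchronized):
    """Gets eager synchronization simply by inheriting the policy class."""
    def __init__(self):
        self.lat = self.lon = None

primary, mirror = AmbulancePosition(), AmbulancePosition()
primary.replicas = (mirror,)
primary.set("lat", 41.39)
print(mirror.lat)   # 41.39 -- the behavior came with the type, not the service
```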