The AI rocket is powered by Edge to Cloud

Reading Time: 5 minutes


Posted by Marc-Elian Bégin, CEO, SixSq


If AI were a rocket, the first stage would be the cloud, the second the edge, and data would be the fuel

Strategic leaders investing in AI will need a complete rocket to boost business

There have been wild predictions recently that the edge will overtake, if not kill, the cloud in terms of volume of data processed. While it is helpful to raise awareness of the huge potential of edge computing, we also need to avoid oversimplification. Instead of adding fuel to the flames, I prefer to make peace and bring these two fundamental building blocks together.

Building a powerful AI rocket

Many modern use cases targeted by Artificial Intelligence (AI) require deployment at the edge. Further, as a fast-growing amount of valuable data is produced at the edge, it is also becoming a prime source of learning datasets for training AI models. As a former aerospace engineer, allow me to use the metaphor of the rocket to illustrate how many AI architectures require an intimate relationship between the cloud and the edge in order to succeed. We call this architecture edge-to-cloud, and its purpose is to process data near where it is produced.

[Figure: the AI rocket]

First stage: the cloud

Like all of us, AI models require training in order to perform properly. This training in turn relies on curated data, so that the models systematically reach the right conclusions. Training is a demanding process in terms of compute resources, but it is usually limited in time. This makes it an ideal workload for the cloud: public or private cloud services are well suited to provisioning a large amount of resources on demand, for example to train a specific AI model on a large dataset.
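To make the first stage concrete, here is a minimal sketch of cloud-side training, assuming scikit-learn and a curated, labelled dataset assembled at the edge; the file name and columns are illustrative assumptions, not part of any particular product.

```python
# Minimal sketch of first-stage (cloud) training, assuming scikit-learn and a
# curated, labelled dataset uploaded from the edge. File and column names are
# illustrative assumptions.
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = pd.read_csv("curated_sensor_readings.csv")        # refined at the edge
X, y = data.drop(columns=["label"]), data["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Compute-heavy but time-limited: a good fit for on-demand cloud resources.
model = RandomForestClassifier(n_estimators=200)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# The trained model becomes the payload shipped to the second stage (the edge).
joblib.dump(model, "ai_payload.joblib")
```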

The cloud therefore forms the first stage of our rocket. It allows the AI rocket to lift off, with the AI model as the main payload on its way to orbit.

Second stage: the edge

The edge fulfils a dual role in our rocket design:

  1. Curate data
  2. Execute AI model

Several analysts predict an explosion of raw data production at the edge, thanks to the fast deployment of IoT. But rocket engines don’t work well with crude oil. They require refined energy (ideally cryogenic fuel, a clean and efficient propellant that produces only water vapour when hydrogen and oxygen burn, but that’s not strictly relevant to our metaphor). This means transforming raw data into curated and structured information. This refining process is best performed at the edge, in order to avoid flooding the network with raw data (and don’t get me started on crude oil pipelines). Therefore, the learning datasets that are critical to training AI models must be assembled at the edge and then transferred to the cloud.
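As an illustration only, the sketch below shows the kind of refining step that might run at the edge: raw samples are filtered and summarised into compact, structured records before anything leaves for the cloud. The field names and validity range are assumptions made for the example.

```python
# Illustrative edge-side refining: turn a noisy stream of raw samples into one
# compact, structured record per time window. Field names and the validity
# range are assumptions for the sake of the example.
from dataclasses import dataclass
from statistics import mean
from typing import Iterable, Optional


@dataclass
class CuratedRecord:
    sensor_id: str
    window_start: float   # epoch seconds
    mean_value: float
    sample_count: int


def curate(sensor_id: str, window_start: float,
           raw_samples: Iterable[float]) -> Optional[CuratedRecord]:
    # Drop obviously invalid readings at the edge instead of shipping them.
    valid = [s for s in raw_samples if -50.0 <= s <= 150.0]
    if not valid:
        return None
    # One small record summarises thousands of raw samples sent to the cloud.
    return CuratedRecord(sensor_id, window_start, mean(valid), len(valid))
```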

The second role for the edge is execution of AI models. Once it has freed itself from the atmosphere and the bulk of Earth’s gravity, the rocket must inject the payload into the right orbit. This is the role of the second stage. Since many AI models are designed to provide real-time responses to live sensor data, it makes sense to execute the model close to the data. Assuming the second stage stays connected to the payload (which is a serious twist in my rocket metaphor), it can provide all the ancillary services (e.g. communication, compute, local storage, security) required for the payload to operate properly.
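A minimal sketch of that second role, edge-side execution, assuming purely for illustration that the model trained in the earlier sketch has been delivered to the device; read_sensor() and the trigger condition are hypothetical placeholders.

```python
# Minimal sketch of second-stage (edge) execution: the model runs next to the
# data source, so decisions do not wait on a cloud round trip. read_sensor()
# and the trigger condition are hypothetical placeholders.
import time
import joblib

model = joblib.load("ai_payload.joblib")      # payload delivered from the cloud


def read_sensor():
    # Stand-in for a real driver returning one feature vector.
    return [[0.0, 0.0, 0.0]]


while True:
    prediction = model.predict(read_sensor())[0]   # local, real-time inference
    if prediction == 1:
        print("local action triggered")            # act immediately at the edge
    time.sleep(0.1)
```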

Payload: machine learning model

The payload of our rocket is the AI model. Like all artificial satellites, our AI payload is designed to be autonomous. Its on-board instruments are able to process data locally, react and take decisions, and from time to time notify the command and control center of significant events. As the payload orbits, its mission can evolve and change, so it is important to be able to monitor, update and manage it. This is the role of command and control: to make sure that these operations are secure, but also that they do not affect the operational availability of the payload. You don’t want your satellite broadcast to be interrupted when your favourite team is about to score.

It is paramount that the payload, running for example our machine learning model, is able to operate with minimal interaction with the command and control center or the first stage (assuming it has successfully landed – thanks, Mr Musk). This creates a robust and resilient system, able to scale with controlled and limited operational cost and complexity.
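One way to picture such minimal interaction is a “report by exception” loop, sketched below with the requests library; the command and control endpoint and the significance threshold are assumptions, and losing a notification must never interrupt local operation.

```python
# Illustrative "report by exception" pattern: the payload keeps working locally
# and only notifies command and control about significant events. The endpoint
# URL and the threshold are assumptions for the example.
import requests

COMMAND_AND_CONTROL = "https://example.com/events"   # hypothetical endpoint


def handle_locally(reading: float) -> None:
    """Local reaction happens here, regardless of connectivity."""


def report_if_significant(reading: float, threshold: float = 100.0) -> None:
    handle_locally(reading)
    if reading <= threshold:
        return                                        # routine readings stay local
    try:
        requests.post(COMMAND_AND_CONTROL, json={"reading": reading}, timeout=2)
    except requests.RequestException:
        pass  # a missed notification must not affect the payload's availability
```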

Fuel: data

Structured information, refined from raw data, is what makes our rocket fly. Raw data collected at the edge is transformed into curated and structured information, so that only a fraction of what is produced at the edge ever travels back to the cloud (or the datacenter).

This refining process must take place at the edge for several reasons:

  1. Real-time processing
  2. Network bottlenecks
  3. Autonomy, integrity and availability

Making sense of data at the edge provides the first opportunity to act on it. In many industrial processes, time is of the essence. Real-time reaction can therefore give you a competitive edge, or it might simply be a business requirement. An edge platform able to take such action is a sound strategy for meeting real-time guarantees.

Depending on where your edge is, network jitter, interruptions and bottlenecks can be a real deal breaker. Relying on a wide area network is risky, especially when coupled with real-time constraints.

Raw data can be full of gremlins, able to punch holes through your privacy policies. Transforming the data at the edge provides a unique opportunity to ensure that what leaves the edge is clean and safe. The resulting structured information is also normally much smaller and easier for a cloud application to ingest, reducing the risk of clogging those high-precision rocket engine injectors of yours.

Satellite-to-satellite communication: edge-to-edge

As the edge grows in complexity, we already see a trend towards layered edge architectures. A recent example we have been working on is in the mobility business, where a public transport vehicle (an edge system) communicates directly with its neighbouring sensors in order to provide real-time information to its control algorithm. While measurements from sensors outside its immediate area of concern can be interrogated via the cloud, sensors in its proximity cannot afford the round trip to the cloud and back, nor the risk of network interruption.
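For the mobility example above, a lightweight local message bus is one plausible mechanism; the sketch below assumes the paho-mqtt 1.x client API and a broker on the vehicle’s local network, with the host name and topics invented purely for illustration.

```python
# Illustrative edge-to-edge messaging: the vehicle subscribes to neighbouring
# sensors over a broker on its local network, avoiding the cloud round trip.
# Assumes the paho-mqtt 1.x client API; host and topic names are invented.
import paho.mqtt.client as mqtt


def on_message(client, userdata, msg):
    # React with local-network latency only; no wide area network involved.
    print(f"{msg.topic}: {msg.payload.decode()}")


client = mqtt.Client()
client.on_message = on_message
client.connect("broker.vehicle.local", 1883)     # edge-local broker, not the cloud
client.subscribe("sensors/+/proximity")          # neighbouring sensors only
client.loop_forever()
```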

Back to our rocket metaphor: satellite constellations are a good example of edge-to-edge architectures, where local communication and distribution of work at the edge make sense. This evolution will reinforce the edge-to-cloud approach, simply pushing the existing logic into a local arrangement of trusted edge devices, thus providing even further scope for scale.

Buckle Up

I firmly believe that AI strategies that do not include a complete edge-to-cloud architecture will fail. This is why my team and I are working hard to fill this gap and ensure we are able to launch your AI solutions into orbit. We are not alone in this business. The big cloud providers are building their own access to space, hoping to lock you into their space elevator.

At SixSq, we believe in conducting our business through open source and open collaboration. To that end, we are building a universal translator: you will be able to source your first stage from any public cloud provider or private cloud solution vendor. Your second-stage hardware platform is also open, as we currently support most x86 and ARM architectures. And for your payload, as long as you can package your AI models as containers, game on, and may the best one win.

This is part of our vision of unifying the cloud and the edge, in other words, creating a fluid and frictionless continuum between the two.


Acknowledgement: first published at https://media.sixsq.com/blog/ai-rocket-powered-by-edge-to-cloud