
The IoT EDGE – what it means, and why it matters

By Isaac Brown

Internet of Things stakeholders talk a lot about the edge these days – some of the noise is just hype, but some of it is real. What are we even talking about, and why do people care about the edge?

The edge typically implies being at or near the physical location (machine, facility, etc.) where data is generated, and the term is often used in direct contrast to the cloud: “edge vs. cloud” is a common phrase when discussing IoT architectures. You might (might) consider the backhaul network (fiber, cellular, LoRa, etc.) to be the line where the edge ends.

Much of the tech development for the edge is focused on enabling operators to run applications, process data, and perform analytics locally. This is often done specifically to avoid sending data to the cloud. Vendors of products targeting the edge tout myriad use cases for these capabilities, but two stand out: cost and latency. You might even say all IoT use cases derive from cost, but let’s not get too philosophical for the scope of this article…

Cost as a driver of edge computing is straightforward – sending data to the cloud is not free. This is less of a concern in a fixed location with Ethernet in place, but for remote, distributed, and transient assets that send data over cellular & satellite, the cost of data transmission can be prohibitive to the success of IoT deployments. The ability to process some of the data locally and only send back the important information is essential to driving ROI in these kinds of deployments.

For example, consider offshore oil platforms, trains moving through remote locations, agricultural equipment in the field, and even something as familiar as the connected car… connecting these kinds of assets takes special care with regard to the cost of data transmission. A dozen sensors monitoring the health of a single elevator can generate gigabytes of data daily (perhaps more), and many elevator OEMs have introduced cellular-connected elevators – it should be obvious why all that data can’t be sent to the cloud for processing. People talk about airplanes generating petabytes of data in a single flight – that would be a serious satellite data bill across an airline fleet!
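To put a rough number on it, here is a back-of-envelope sketch in Python, using the elevator example above. Every constant is an illustrative assumption rather than a measured figure or vendor quote:

```python
# Rough back-of-envelope: what it costs to backhaul raw sensor data over
# cellular. Every constant below is an illustrative assumption.

SENSORS = 12                 # sensors on one elevator (from the example above)
SAMPLE_RATE_HZ = 100         # samples per second per sensor (assumed)
BYTES_PER_SAMPLE = 16        # timestamp + value + metadata (assumed)
COST_PER_GB_USD = 1.00       # assumed cellular IoT data rate

bytes_per_day = SENSORS * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * 86_400
gb_per_day = bytes_per_day / 1e9   # ~1.66 GB/day with these assumptions

print(f"Raw data: {gb_per_day:.2f} GB/day per elevator")
print(f"Monthly backhaul: ${gb_per_day * 30 * COST_PER_GB_USD:,.2f} per elevator")

# If edge filtering forwards only ~1% of readings (exceptions and status
# changes), the same deployment ships ~99% less data:
print(f"After 99% edge reduction: ${gb_per_day * 30 * COST_PER_GB_USD * 0.01:,.2f} per elevator")
```

Multiply the unfiltered figure across a fleet of thousands of units and the case for processing locally makes itself.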

How do edge technologies reduce the cost of transporting data? There are a number of methods, and you might say they fall under the umbrella of “edge analytics”. Some of these methods are quite basic – it is standard practice to perform exception and status-change filtering at the edge: data is only sent to the cloud if something deviates in an important way. Data compression is also standard practice, and some vendors claim they can compress IoT data by more than 90% for transmission without significant losses. Finally, there are vendors selling technologies that enable operators to run local machine learning & predictive analytics models, although we don’t see widespread adoption of these technologies (yet).
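As a concrete illustration of the first method, here is a minimal Python sketch of exception and status-change filtering at the edge. The thresholds, field names, and the send_to_cloud() hook are hypothetical stand-ins for whatever a real gateway would use:

```python
# Minimal sketch of edge-side exception and status-change filtering.
# Thresholds, field names, and the send_to_cloud() hook are hypothetical.

last_status = None

def send_to_cloud(reading: dict) -> None:
    """Stand-in for the real uplink (MQTT, HTTPS, etc.)."""
    print("uplink:", reading)

def on_sensor_reading(reading: dict) -> None:
    """Forward a reading only if it deviates in an important way."""
    global last_status

    # Exception filtering: only report values outside the normal band.
    if not (20.0 <= reading["temperature_c"] <= 80.0):
        send_to_cloud({**reading, "reason": "temperature out of band"})
        return

    # Status-change filtering: only report when the machine state flips.
    if reading["status"] != last_status:
        send_to_cloud({**reading, "reason": "status change"})
        last_status = reading["status"]
    # Everything else stays local instead of being backhauled.

on_sensor_reading({"temperature_c": 45.0, "status": "running"})  # sent: status change
on_sensor_reading({"temperature_c": 95.0, "status": "running"})  # sent: out of band
on_sensor_reading({"temperature_c": 46.0, "status": "running"})  # dropped: nothing new
```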

Latency improvement is another benefit of edge computing – suppose an operator has a critical machine in a remote location, and sensor data indicates an imminent failure that a statistical model could predict. If that model is running at the edge (literally at the machine), it could actuate a corrective action immediately, perhaps in milliseconds. In contrast, if the machine had to call the cloud for the statistical model and the resulting corrective action, the round trip might take several seconds… and that time differential could be the difference between machine reliability and failure. Not every failure mode hinges on a few seconds, but the example illustrates the point – and there are certainly other use cases around productivity, efficiency, sustainability, and safety where a few seconds can help.
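A minimal sketch of that local loop, with a trivial threshold check standing in for a real statistical model; the feature names, limits, and actuator call are all hypothetical:

```python
# Sketch of an edge-resident model closing the loop locally, in milliseconds.
# The model, feature names, limits, and actuator call are all hypothetical.

import time

def local_failure_model(vibration_mm_s: float, bearing_temp_c: float) -> bool:
    """Stand-in for a statistical model already deployed at the machine."""
    return vibration_mm_s > 7.1 or bearing_temp_c > 95.0

def actuate_corrective_action() -> None:
    """e.g. derate the motor, close a valve, trip a relay."""
    print("corrective action taken")

reading = {"vibration_mm_s": 8.4, "bearing_temp_c": 91.0}

start = time.perf_counter()
if local_failure_model(**reading):
    actuate_corrective_action()   # no network round trip required
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"edge decision in {elapsed_ms:.3f} ms")
# A cloud round trip (uplink + inference + downlink) over cellular or
# satellite could instead take seconds. Sometimes that is too late.
```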

Both cost and latency can be improved via the concept of pushing new analytical models to the edge. In this scenario, data is sent from the edge to the cloud in order to develop advanced analytical models there – leveraging the data from across an entire fleet of machines, as well as the processing horsepower of the cloud. These models are then pushed back to storage & compute resources at the edge, so machines can act independently on the local models without the need to call the cloud. There is a constant feedback loop between the edge and the cloud to continuously optimize and develop these models – but in theory, less and less data needs to be sent to the cloud over time as the models improve.
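Here is a toy simulation of that feedback loop, with a deliberately simple mean-plus-tolerance threshold standing in for real fleet-scale machine learning; every name and number is hypothetical:

```python
# Toy simulation of the edge-to-cloud feedback loop. All names, numbers,
# and the threshold "model" are hypothetical; a real system would use a
# proper ML framework plus an over-the-air model-delivery mechanism.

import statistics

class ThresholdModel:
    """A deliberately trivial model: flag readings far from the fleet mean."""
    def __init__(self, mean: float, tolerance: float):
        self.mean, self.tolerance = mean, tolerance

    def is_surprising(self, value: float) -> bool:
        return abs(value - self.mean) > self.tolerance

def cloud_train(fleet_values: list[float]) -> ThresholdModel:
    """Cloud side: pool data from across the fleet and fit the model."""
    mean = statistics.fmean(fleet_values)
    return ThresholdModel(mean, tolerance=3 * statistics.stdev(fleet_values))

def edge_uplink(local_values: list[float], model: ThresholdModel) -> list[float]:
    """Edge side: only backhaul readings the current model can't explain."""
    return [v for v in local_values if model.is_surprising(v)]

# Round 1: no model deployed yet, so the edge sends everything it sees.
fleet_data = [50.1, 49.8, 50.3, 49.9, 50.0]
model = cloud_train(fleet_data)   # train in the cloud, then "push" to the edge

# Round 2: with the model at the edge, only anomalies travel back up.
new_readings = [50.2, 49.7, 72.0, 50.1]
print("uplinked:", edge_uplink(new_readings, model))   # -> uplinked: [72.0]
```

As the cloud-trained model improves, the set of surprising readings shrinks, which is exactly the less-data-over-time dynamic described above.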

So what does all this mean for the future of IoT architectures? It is safe to assume that over time, more and more data processing will be performed at the edge. This will be driven by basic improvements in the storage & compute capabilities of resources at the edge, and by the ability to run more advanced machine learning solutions in constrained IoT environments. The cost of connectivity and cloud resources will of course continue to fall, but it will still be essential to shift more of the load away from the cloud – especially as the volume of IoT data being generated continues to explode, and as more operators want to run closed-loop systems with automated corrective actions.
