
Machine Learning Computing at the Edge Using Model Artifacts

Running compute-intensive tasks on resource-constrained, battery-dependent devices such as mobile phones, personal computers, and embedded processors is challenging, especially when one intends to run machine learning models: these models occupy large amounts of memory and demand GPU power that such devices lack. For these compute-intensive tasks, enterprises and developers largely depend on cloud providers such as Amazon Web Services, Microsoft Azure, and Google Cloud. Data generated in the field is redirected to models sitting inside the cloud, where it is filtered and processed, and the results are delivered back to the devices. But this approach faces several challenging issues:

Latency: the time it takes for data to reach the cloud, be processed, and return to where it was generated. Latency is a critical issue wherever quick decisions and responses are required. Scenarios such as monitoring of defense equipment, connected vehicles, and cyclone monitoring are prone to latency issues.
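To make the latency cost concrete, the sketch below compares a purely local (edge) inference against a simulated cloud round trip. The inference function and the 50 ms network delay are illustrative stand-ins, not real measurements.

```python
import time

def edge_infer(x):
    # Local inference: no network hop, just compute.
    return x * 2

def cloud_infer(x, network_delay_s=0.05):
    # Simulated round trip: data travels to the cloud and back.
    time.sleep(network_delay_s)   # uplink
    result = x * 2                # remote compute
    time.sleep(network_delay_s)   # downlink
    return result

def measure(fn, x):
    # Wall-clock time of a single call, in seconds.
    start = time.perf_counter()
    fn(x)
    return time.perf_counter() - start

edge_ms = measure(edge_infer, 21) * 1000
cloud_ms = measure(cloud_infer, 21) * 1000
print(f"edge: {edge_ms:.3f} ms, cloud: {cloud_ms:.3f} ms")
```

Even with an optimistic network delay, the round trip dominates the total response time, which is why time-critical decisions are better made at the edge.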



Redundancy: with more than 75 billion devices estimated to be connected to the internet by 2025, redundant data becomes a serious burden; if it is not filtered right at the edge, it floods the cloud with unnecessary data and drives up internet costs.
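One common way to filter redundancy at the edge is a dead-band filter: a reading is forwarded to the cloud only when it differs meaningfully from the last forwarded value. The threshold below is an assumed example value, not a recommendation.

```python
def filter_readings(readings, threshold=0.5):
    """Forward a reading only when it differs from the last
    forwarded value by more than `threshold` (dead-band filter)."""
    forwarded = []
    last = None
    for r in readings:
        if last is None or abs(r - last) > threshold:
            forwarded.append(r)
            last = r
    return forwarded

# Six raw sensor readings; only the significant changes are uploaded.
readings = [20.0, 20.1, 20.2, 23.0, 23.1, 19.0]
print(filter_readings(readings))
```

Here six raw readings shrink to three uploads, cutting bandwidth while preserving the signal the cloud actually needs.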

Intermittent Network Connectivity: in remote places the internet is hardly available, or, where available, it is intermittent, which makes devices function improperly.

The existence of these three factors paved the way for the concept called "Computing at the Edge" or "Edge Computing". This brings us to two questions:
1) How can one perform ML computing at the edge?
2) Does any cloud provider offer this service for trying out projects?



To answer the first question, ML computing at the edge can be performed using "model artifacts", which we will explore through a real-world example. For the second question, there are currently three cloud providers offering this service: AWS IoT Greengrass, Google Cloud IoT, and Microsoft Azure IoT Edge. One can try any of these services to get their hands dirty! Now, let us understand ML computing at the edge and model artifacts by considering a real-life scenario, "monitoring of defense equipment", which involves latency, redundancy, and intermittent-connectivity issues, using AWS IoT Greengrass and Amazon SageMaker.
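A SageMaker model artifact is simply a `model.tar.gz` archive containing the serialized model files produced by a training job; the core downloads and unpacks it for local inference. The sketch below packs and loads a toy artifact with only the standard library. The file name `model.json` and the weights inside it are illustrative; real contents depend on the framework used for training.

```python
import io
import json
import tarfile

def pack_artifact(path="model.tar.gz"):
    # Simulate what a training job produces: serialized model
    # parameters bundled into a gzipped tar archive.
    weights = json.dumps({"w": [0.1, 0.2], "b": 0.3}).encode()
    with tarfile.open(path, "w:gz") as tar:
        info = tarfile.TarInfo("model.json")
        info.size = len(weights)
        tar.addfile(info, io.BytesIO(weights))

def load_artifact(path="model.tar.gz"):
    # What the edge device does: unpack the artifact and
    # deserialize the model for local inference.
    with tarfile.open(path, "r:gz") as tar:
        data = tar.extractfile("model.json").read()
    return json.loads(data)

pack_artifact()
model = load_artifact()
print(model)
```

In a real deployment the artifact would be fetched from the S3 output location of the training job (for example with `boto3`'s `s3.download_file`) rather than created locally, but the unpack-and-load step on the device is the same.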

AWS IoT Greengrass requires two kinds of devices:
1) A Greengrass core, which runs on Raspbian OS or Ubuntu and supports ARM and x86 processors.
2) Greengrass-aware devices, such as micro-controllers, which run the Amazon FreeRTOS SDK.
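On the core, inference typically runs inside a handler that evaluates the local model and publishes results over MQTT. The sketch below is a minimal, hypothetical version for the defense-equipment scenario: the topic name, the 80.0 alert threshold, and the thresholding "model" are all assumptions. On a real core the payload would be published with the Greengrass SDK, e.g. `greengrasssdk.client("iot-data").publish(topic=TOPIC, payload=...)`; here we only build the payload so the logic is testable anywhere.

```python
import json

# Hypothetical MQTT topic and alert threshold for the example.
TOPIC = "defense/equipment/status"
THRESHOLD = 80.0  # assumed limit, e.g. equipment temperature in Celsius

def build_payload(reading):
    """Run the (stubbed) local model on one sensor reading and
    build the JSON payload that would be published over MQTT."""
    status = "ALERT" if reading > THRESHOLD else "OK"
    return json.dumps({"topic": TOPIC,
                       "reading": reading,
                       "status": status})

print(build_payload(85.5))
```

Because the decision is made on the core itself, an alert can be raised even when the uplink to the cloud is down, which is exactly the intermittent-connectivity case described above.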

Flow Diagram

This architecture outlines the background flow from one service to the next.

In summary, the applications are virtually unlimited once one intends to compute at the edge. This article outlines the fundamental requirements for getting a taste of edge computing; relate the defense-monitoring architecture above to your own intended work.
