
Machine Learning Computing at the Edge Using Model Artifacts


Running compute-intensive tasks on resource-constrained, battery-dependent devices such as mobile phones, personal computers, and embedded processors is challenging, especially when the goal is to run machine learning models: these models demand large amounts of memory and GPU capacity that such devices simply do not have. For these compute-intensive tasks, enterprises and developers largely depend on cloud providers like Amazon Web Services, Microsoft Azure, and Google Cloud, redirecting the data generated in the field to models hosted in the cloud, where it is filtered and processed before the results are delivered back to the devices. But this approach comes at the cost of several challenging issues:

  • Latency
  • Redundancy
  • Intermittent network connectivity

Latency: The time it takes for data to reach the cloud, be processed, and return to where it was generated. Latency is a critical issue wherever quick decisions and responses are required; scenarios like monitoring of defense equipment, connected vehicles, and cyclone tracking are all prone to latency problems.

Redundancy: With an estimated 75 billion devices connected to the internet by 2025, redundant data becomes a serious burden; if it is not filtered right at the edge, it floods the cloud with unnecessary data and drives up internet costs.

Intermittent Network Connectivity: In remote places, internet access is hardly available and, even where it is, often intermittent, which keeps devices from functioning properly. The existence of these three factors paved the way for the concept called “Computing at the Edge” or “Edge Computing”. This brings us to two questions:
1) How can one perform ML computing at the edge?
2) Does any cloud provider offer this service for trying out projects?

To answer the first question, ML computing at the edge can be performed using “model artifacts”, which we will explore through a real-world example. As for the second question, three cloud providers currently offer this service: AWS IoT Greengrass, Google Cloud IoT, and Microsoft Azure IoT Edge. You can try any of these services to get your hands dirty! Now, let us understand ML computing at the edge and model artifacts by considering a real-life scenario, “monitoring of defense equipment”, which involves latency, redundancy, and intermittent connectivity issues, using AWS IoT Greengrass and Amazon SageMaker.
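
To make the idea of model artifacts concrete, here is a minimal sketch of producing one with the SageMaker Python SDK, assuming a scikit-learn model; the training script name, IAM role ARN, and S3 paths are hypothetical placeholders:

```python
# Minimal sketch: training a scikit-learn model in SageMaker and locating
# the resulting model artifact. All names and paths below are hypothetical.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role

estimator = SKLearn(
    entry_point="train_equipment_model.py",  # hypothetical training script
    framework_version="1.2-1",
    instance_type="ml.m5.large",
    role=role,
)

# Train on labelled equipment telemetry stored in S3 (hypothetical bucket).
estimator.fit({"train": "s3://my-defense-datasets/equipment-telemetry/train/"})

# The model artifact: a model.tar.gz in S3 holding the trained weights,
# ready to be deployed to the Greengrass Core as a local ML resource.
print(estimator.model_data)
```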

AWS IoT Greengrass requires two kinds of devices:
1) The Greengrass Core, which runs on Raspbian OS or Ubuntu and supports ARM and x86 processors.
2) Greengrass-aware devices, such as microcontrollers running the AWS FreeRTOS SDK.
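
As a rough illustration of the device side, here is a minimal Python sketch of a Greengrass-aware node publishing sensor readings over MQTT with X.509 certificates, assuming the AWS IoT Device SDK for Python; the endpoint, certificate paths, topic name, and sensor stub are all hypothetical (in practice the core's address is obtained through the Greengrass Discovery API):

```python
# Hypothetical Greengrass-aware sensor node publishing equipment status
# to the Greengrass Core over MQTT, secured with X.509 certificates.
import json
import time
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

def read_vibration():
    return 0.42  # stand-in for a real sensor driver

client = AWSIoTMQTTClient("sensor-node-A")                # hypothetical client id
client.configureEndpoint("greengrass-core.local", 8883)   # hypothetical core address
client.configureCredentials("rootCA.pem",                 # hypothetical cert paths
                            "node-A.private.key",
                            "node-A.cert.pem")
client.connect()

while True:
    reading = {"equipment_id": "A", "vibration": read_vibration()}
    client.publish("defense/equipment/status", json.dumps(reading), 1)  # QoS 1
    time.sleep(5)
```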

 Flow Diagram

  • The image shows the general architecture of implementing AWS IoT Greengrass, with nodes ‘A’ and ‘B’ acting as monitoring equipment (Greengrass-aware devices) and node ‘C’ as the Greengrass Core.
  • The hardware at the monitoring site would be a microcontroller/microprocessor fitted with sensors that capture real-time data about the status of the equipment. The microcontroller/microprocessor also requires communication hardware that forms a local network with the Greengrass Core and relays the measured information to it.
  • Apart from the hardware, the pre-trained model artifacts and local Lambda functions run on the Greengrass Core device, so that decisions are taken at the edge whenever data from the monitoring equipment arrives, without depending on the cloud (a minimal sketch of such a Lambda follows this list).
  • The model artifacts are generated by training the chosen algorithm on the datasets in an Amazon SageMaker notebook instance. Generally speaking, model artifacts consist of the weights of the model trained on the given datasets and range from a few megabytes to gigabytes in size. Once deployed to the Greengrass Core (a resource-constrained device), these artifacts let it act as if it were processing the information inside the cloud.
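
As a rough sketch of such a local Lambda, assuming the model artifact was deployed to the core as a local ML resource and that it is a scikit-learn model saved with joblib; the file path, topic names, feature layout, and label encoding are hypothetical:

```python
# Minimal sketch of a long-lived Lambda running on the Greengrass Core.
# It loads the deployed model artifact once, scores each incoming reading,
# and publishes only anomalies onward, filtering redundancy at the edge.
import json
import joblib              # assuming a scikit-learn artifact saved with joblib
import greengrasssdk

iot_client = greengrasssdk.client("iot-data")
model = joblib.load("/ml/model/equipment_model.joblib")  # hypothetical local path

def function_handler(event, context):
    # 'event' is the JSON reading published by a Greengrass-aware device.
    features = [[event["vibration"]]]          # hypothetical feature layout
    status = model.predict(features)[0]
    if status != 0:  # assuming 0 means "normal"; only anomalies leave the edge
        iot_client.publish(
            topic="defense/equipment/alerts",
            payload=json.dumps({"equipment_id": event["equipment_id"],
                                "status": int(status)}),
        )
```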

The picture below shows the architecture of the whole system, outlining the background flow from one service to the other.

  • In the defense environment, the local devices act as Greengrass-aware devices, forming a local network with the Greengrass Core device, which holds the local Lambda and the model artifacts.
  • On the right panel of the picture, we can see all the AWS services involved. The datasets are stored in S3 buckets and then transferred to SageMaker for training the model; once training is done, the resulting model artifacts are stored back in an S3 bucket.
  • These artifacts, along with the Lambdas, are attached to the Greengrass Core, which is then deployed in the defense environment. The data from the environment is stored in DynamoDB periodically, and defense users/technicians can access it through the web application to check for any errors (a minimal sketch of this storage and retrieval path follows the list).
  • This web application is accessed by providing credentials, which are redirected to Amazon Cognito. Amazon Cognito checks the credentials and decides whether to grant access to the data in DynamoDB. Communication between the defense environment and AWS IoT Core is done via the MQTT protocol with appropriate X.509 certificates, as shown in the figure.
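
To illustrate the storage and retrieval path, here is a minimal boto3 sketch of writing an alert record to DynamoDB and of the web application querying it back; the table name, key schema, and attribute names are hypothetical, and the Cognito-issued credentials are assumed to already be configured:

```python
# Hypothetical persistence path: a cloud-side consumer writes alerts to
# DynamoDB, and the technicians' web application queries them back.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("EquipmentStatus")  # hypothetical table name

# Store one alert record (e.g., from a rule/Lambda subscribed to the alerts topic).
table.put_item(Item={
    "equipment_id": "A",                   # hypothetical partition key
    "timestamp": "2023-02-09T10:00:00Z",   # hypothetical sort key
    "status": "anomalous",
})

# Web application: fetch the ten most recent records for one piece of equipment.
response = table.query(
    KeyConditionExpression=Key("equipment_id").eq("A"),
    ScanIndexForward=False,  # newest first
    Limit=10,
)
for item in response["Items"]:
    print(item["timestamp"], item["status"])
```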

In summary, the applications are not limited when one intends to compute at the edge. This article has outlined the fundamental requirements for getting a first taste of edge computing; relate the defense architecture described here to your own intended work.

