Getting started with the Google Coral USB accelerator

In this article, you’ll be guided through setting up the Google Coral USB Accelerator with a Raspberry Pi and running your first machine learning model on it.

What is it?

The Coral USB Accelerator is a USB device that provides an Edge TPU as a coprocessor for your computer. It lets you run inference with pre-trained TensorFlow Lite models on your own hardware running Linux.

Hardware prerequisites

As mentioned earlier, the Accelerator requires a computer running Debian 6.0 or higher, or a derivative of it (such as Ubuntu 10.0 or higher). In this guide, we will be using a Raspberry Pi as that computer. In addition to the Raspberry Pi, you’ll also need a USB-A to micro-USB cable, a power supply for the Raspberry Pi, a microSD card, and a USB-A to USB-C cable to connect the Accelerator.

Setting up your Raspberry Pi

Install the latest version of Raspbian on your microSD card and insert it into the Raspberry Pi. If you have a dedicated keyboard, mouse, and display for your Raspberry Pi, you can skip the next steps. If you don’t, connect your Raspberry Pi to your personal network and enable SSH, as shown below. After that, power on the Raspberry Pi and SSH into it from your laptop.
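
If you’re going headless, one simple way to enable SSH on a freshly flashed Raspbian card is to create an empty file named ssh in the boot partition of the microSD card before first boot; alternatively, you can enable it from the Pi itself with raspi-config. The mount point below is only an example, so adjust it to wherever your card is mounted.

$ # On the machine used to flash the card (mount point is an example):
$ touch /media/<user>/boot/ssh
$ # Or, on the Pi itself:
$ sudo raspi-config    # Interfacing Options -> SSH -> Enable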

Setting up the software

You can download the software development kit using wget. To install wget, type the command below in the terminal.

$ sudo apt-get install wget

Then type the following commands to download and extract the SDK.

$ wget http://storage.googleapis.com/cloud-iot-edge-pretrained-models/edgetpu_api.tar.gz
$ tar -xvzf edgetpu_api.tar.gz

Then run the installation script:

$ cd python-tflite-source
$ bash ./install.sh

You’ll see the following message:

“During normal operation, the Edge TPU Accelerator may heat up, depending on the computation workloads and operating frequency. Touching the metal part of the device after it has been operating for an extended period may lead to discomfort and/or skin burns. As such, when running at the default operating frequency, the device is intended to safely operate at an ambient temperature of 35C or less. Or when running at the maximum operating frequency, it should be operated at an ambient temperature of 25C or less.”

This is something to bear in mind while working with the Accelerator. If you’re planning to use it for extended periods, consider adding active or passive cooling.
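
If you want a quick look at thermal conditions while experimenting, you can read the Raspberry Pi’s own SoC temperature (note that this reports the Pi’s temperature, not the Accelerator’s):

$ vcgencmd measure_temp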

Running your first ML model

The software we downloaded in the previous step contains the Edge TPU Python module, which provides simple APIs for image classification and object detection. We’ll be using the object detection API for this example. The demo code can be found at /home/pi/python-tflite-source/edgetpu/demo.
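
For reference, here is roughly how this module is used directly. The sketch below uses the legacy DetectionEngine API from the edgetpu package installed above; the model and label paths match the test_data directory used later in this guide, and the input image name is a placeholder.

from edgetpu.detection.engine import DetectionEngine
from PIL import Image

MODEL = 'python-tflite-source/edgetpu/test_data/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite'
LABELS = 'python-tflite-source/edgetpu/test_data/coco_labels.txt'

# Build an {id: name} lookup from the COCO label file (one "id name" pair per line).
labels = {}
with open(LABELS) as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        idx, name = line.split(maxsplit=1)
        labels[int(idx)] = name

engine = DetectionEngine(MODEL)      # loads the model onto the Edge TPU
image = Image.open('input.jpg')      # placeholder input image

# Each result carries a label id, a confidence score, and a bounding box.
for obj in engine.DetectWithImage(image, threshold=0.4, top_k=10):
    print(labels.get(obj.label_id, 'unknown'), obj.score, obj.bounding_box)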

The example script performs object detection on an image. You can take standard images containing everyday objects and run them through the model using the command below.

$ python3 ./object_detection.py \
--model python-tflite-source/edgetpu/test_data/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
--label python-tflite-source/edgetpu/test_data/coco_labels.txt \
--input "file path of input image" \
--output "file name and path of output image"

If you’re connected to the Raspberry Pi over SSH, you can copy the output image to your laptop using the scp command.
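
For example, assuming the Pi is reachable at the default raspberrypi.local hostname and the output image was written to /home/pi/output.jpg:

$ scp pi@raspberrypi.local:/home/pi/output.jpg .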


Moving ahead

Now that you’ve run your first program, you can tinker with it and try out different things. Start by turning down the confidence threshold of the detection model; the model will then also report objects that it recognizes but isn’t confident about. Doing this can help you understand how the inferencing works and which factors affect the model’s confidence. Once you’re comfortable with the code and with how TensorFlow Lite works, you can add a camera to the Raspberry Pi (a minimal capture sketch follows below); this removes the need to transfer images for inference. Later on, you can also add a small LCD to the Raspberry Pi, which makes a laptop almost redundant. Once you’re confident with the Edge TPU, you can move on to running inference on live images and tracking objects with a camera connected to the Raspberry Pi.
The possibilities are endless, and the scope for corrections and improvements is limitless.
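
As a starting point for the camera idea, here is a minimal capture sketch, assuming a Pi camera module and the picamera package that ships with Raspbian; the capture path is a placeholder you would then pass to the demo’s --input flag.

from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.start_preview()
sleep(2)                                 # give the sensor time to adjust exposure
camera.capture('/home/pi/capture.jpg')   # placeholder path; pass it to --input
camera.stop_preview()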

