Object detection example on Coral with TensorFlow Lite

This example uses TensorFlow Lite with Python to run an object detection model with acceleration on the Edge TPU, using a Coral device such as the USB Accelerator or Dev Board.

The Python script takes arguments for the model, the labels file, and the image you want to process. It prints each detected object and its bounding-box coordinates, and saves (or displays) the original image with the boxes and labels drawn on top.
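For context, here is a minimal sketch of how such a detection script can be put together with the tflite_runtime API directly. It is not the repo's detect_image.py: the model/image paths, the delegate library name (libedgetpu.so.1 is the Linux name; it differs on macOS and Windows), and the output tensor ordering are assumptions.

# Illustrative sketch only; the real logic lives in detect_image.py.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter, load_delegate

MODEL = 'models/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite'  # assumed path
IMAGE = 'images/grace_hopper.bmp'                                        # assumed path

# Create an interpreter that delegates supported ops to the Edge TPU.
interpreter = Interpreter(
    model_path=MODEL,
    experimental_delegates=[load_delegate('libedgetpu.so.1')])  # Linux delegate name
interpreter.allocate_tensors()

# Resize the image to the model's expected input size and set the input tensor.
input_details = interpreter.get_input_details()[0]
_, height, width, _ = input_details['shape']
image = Image.open(IMAGE).convert('RGB').resize((width, height))
interpreter.set_tensor(input_details['index'], np.expand_dims(image, axis=0))

interpreter.invoke()

# SSD "postprocess" models typically emit boxes, class ids, scores, and a count;
# the output ordering below is an assumption.
outputs = interpreter.get_output_details()
boxes = interpreter.get_tensor(outputs[0]['index'])[0]
classes = interpreter.get_tensor(outputs[1]['index'])[0]
scores = interpreter.get_tensor(outputs[2]['index'])[0]

for box, cls, score in zip(boxes, classes, scores):
    if score >= 0.4:
        # box is [ymin, xmin, ymax, xmax] in normalized coordinates.
        print(int(cls), float(score), box)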

Set up your device

  1. First, be sure you have completed the setup instructions for your Coral device.

    Importantly, you should have the latest TensorFlow Lite runtime installed, as per the Python quickstart (a quick import check is shown at the end of this section).

  2. Clone this Git repo onto your computer:

    mkdir google-coral && cd google-coral
    
    git clone https://github.com/google-coral/tflite --depth 1
    
  3. Install this example's dependencies:

    cd tflite/python/examples/detection
    
    ./install_requirements.sh
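Once the dependencies are installed, you can optionally check that the TensorFlow Lite runtime from step 1 is importable. This one-liner only verifies the Python import, not the Edge TPU itself:

python3 -c "from tflite_runtime.interpreter import Interpreter; print('tflite_runtime OK')"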
    

Run the code

Use this command to run object detection with the model and photo downloaded by install_requirements.sh (photo shown in Figure 1):

python3 detect_image.py \
  --model models/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite \
  --labels models/coco_labels.txt \
  --input images/grace_hopper.bmp \
  --output images/grace_hopper_processed.bmp


Figure 1. grace_hopper.bmp

You should see results like this:

INFO: Initialized TensorFlow Lite runtime.
----INFERENCE TIME----
Note: The first inference is slow because it includes loading the model into Edge TPU memory.
33.92 ms
19.71 ms
19.91 ms
19.91 ms
19.90 ms
-------RESULTS--------
tie
  id:     31
  score:  0.83984375
  bbox:   BBox(xmin=228, ymin=421, xmax=293, ymax=545)
person
  id:     0
  score:  0.83984375
  bbox:   BBox(xmin=2, ymin=5, xmax=513, ymax=596)

To demonstrate varying inference speeds, the example repeats the same inference five times. Your results will vary depending on your host platform and on whether the USB Accelerator is connected over USB 2.0 or 3.0.
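If you want to reproduce a similar timing loop yourself, here is a minimal sketch. It assumes an interpreter object created as in the sketch above; the repo's script does its own timing internally:

import time

# Run the same inference several times; the first run is typically slower
# because the model is loaded into Edge TPU memory on demand.
for _ in range(5):
    start = time.perf_counter()
    interpreter.invoke()
    print('%.2f ms' % ((time.perf_counter() - start) * 1000))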

To compare performance without the Edge TPU, run the example again using the version of the model that is not compiled for the Edge TPU:

python3 detect_image.py \
  --model models/ssd_mobilenet_v2_coco_quant_postprocess.tflite \
  --labels models/coco_labels.txt \
  --input images/grace_hopper.bmp
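
On the script's side, the difference comes down to whether the interpreter is created with the Edge TPU delegate. A sketch of the two cases (libedgetpu.so.1 is the Linux delegate name; the model paths follow the commands above):

from tflite_runtime.interpreter import Interpreter, load_delegate

# CPU only: the model without the _edgetpu suffix runs entirely on the host CPU.
cpu_interpreter = Interpreter(
    model_path='models/ssd_mobilenet_v2_coco_quant_postprocess.tflite')

# Edge TPU: the _edgetpu model plus the libedgetpu delegate offloads
# supported ops to the accelerator.
tpu_interpreter = Interpreter(
    model_path='models/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite',
    experimental_delegates=[load_delegate('libedgetpu.so.1')])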