This is a simple C++ demo application that uses the ExecuTorch library for MobileNetV2 model inference.
To set up, build, and run the demo:

- Export the model. See mv2/python/README.md.
- The ExecuTorch repository is already configured as a git submodule at `~/executorch-examples/mv2/cpp/executorch/`. To initialize it:
  ```bash
  cd ~/executorch-examples/
  git submodule sync
  git submodule update --init --recursive
  ```
- Install the dev requirements for ExecuTorch:
  ```bash
  cd ~/executorch-examples/mv2/cpp/executorch
  pip install -r requirements-dev.txt
  ```
- Build the project:
  ```bash
  cd ~/executorch-examples/mv2/cpp
  chmod +x build.sh
  ./build.sh
  ```
- Run the demo application:
  ```bash
  ./build/bin/executorch_mv2_demo_app
  ```
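For orientation, the core of the demo looks roughly like the sketch below. This is a minimal, hedged reconstruction using the ExecuTorch Module extension API (`executorch::extension::Module` and `from_blob` from the tensor extension) as documented for release/0.6; the model filename is a placeholder and the exact contents of the shipped `main.cpp` may differ.

```cpp
#include <executorch/extension/module/module.h>
#include <executorch/extension/tensor/tensor.h>

#include <cstdlib>
#include <iostream>
#include <vector>

using namespace ::executorch::extension;

int main() {
  // Load the exported MobileNetV2 program. The filename is a placeholder;
  // use whatever the export step under mv2/python produced.
  Module module("mv2.pte");

  // MobileNetV2 takes a 1x3x224x224 float input. The demo fills it with
  // random data; a real application would use a preprocessed image.
  std::vector<float> input(1 * 3 * 224 * 224);
  for (auto& v : input) {
    v = static_cast<float>(std::rand()) / RAND_MAX;
  }
  auto input_tensor = from_blob(input.data(), {1, 3, 224, 224});

  // Run the program's "forward" method.
  const auto result = module.forward(input_tensor);
  if (!result.ok()) {
    std::cerr << "Inference failed" << std::endl;
    return 1;
  }

  // The first output is the logits tensor; report the argmax class index.
  const auto output = result->at(0).toTensor();
  const float* scores = output.const_data_ptr<float>();
  int best = 0;
  for (int i = 1; i < output.numel(); ++i) {
    if (scores[i] > scores[best]) {
      best = i;
    }
  }
  std::cout << "Predicted class index: " << best << std::endl;
  return 0;
}
```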
Dependencies:

- CMake 3.18 or higher
- C++17 compatible compiler
- ExecuTorch library (release/0.6)
Notes:

- Make sure you have the correct model file (`.pte`) compatible with ExecuTorch.
- This demo currently initializes the input tensor with random data. In a real application, you would replace this with actual input data.
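If you want to feed real data instead of random values, one approach (an illustrative sketch, not part of the demo) is to convert an interleaved 224x224 RGB image into the planar, normalized NCHW float layout that MobileNetV2 exports typically expect, then wrap the buffer with `from_blob` exactly as in the sketch above. The ImageNet mean/std values below are an assumption; match whatever preprocessing the model was exported with.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Convert an interleaved 224x224 RGB image (HWC, uint8) into planar NCHW
// floats with ImageNet-style normalization (assumed values; adjust to match
// the preprocessing used when the model was exported).
std::vector<float> preprocess(const std::vector<uint8_t>& rgb_hwc) {
  constexpr int kH = 224, kW = 224, kC = 3;
  const std::array<float, 3> mean{0.485f, 0.456f, 0.406f};
  const std::array<float, 3> stddev{0.229f, 0.224f, 0.225f};
  std::vector<float> chw(kC * kH * kW);
  for (int c = 0; c < kC; ++c) {
    for (int y = 0; y < kH; ++y) {
      for (int x = 0; x < kW; ++x) {
        const float v = rgb_hwc[(y * kW + x) * kC + c] / 255.0f;
        chw[c * kH * kW + y * kW + x] = (v - mean[c]) / stddev[c];
      }
    }
  }
  return chw;
}

// Usage with the earlier sketch:
//   auto input = preprocess(image_bytes);  // image_bytes: 224*224*3 uint8
//   auto tensor = from_blob(input.data(), {1, 3, 224, 224});
//   auto result = module.forward(tensor);
```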