After training and testing a neural network, export it to an ONNX file; the model can then be deployed.
Before exporting, it helps to understand what the ONNX export parameters do, as well as the structure and principles of the ONNX file format.
Model Deployment Tutorial (5): Modifying and Debugging ONNX Models - Zhihu
A Survey of AI Model Deployment (ONNX/NCNN/TensorRT, etc.) - Zhihu
TFLite, ONNX, CoreML, TensorRT Export - Ultralytics YOLOv8 Docs
Choose an inference acceleration framework to match the deployment platform: for NVIDIA GPUs use TensorRT; for Intel CPUs or GPUs use OpenVINO. ONNX Runtime works on either.
OpenVINO
Setting up the OpenVINO environment:
Decide for yourself whether to write the inference code in Python or C++.
Install OpenVINO following the official instructions.
C++
Installation commands:
sudo apt-get install gnupg
wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
sudo apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
echo "deb https://apt.repos.intel.com/openvino/2023 ubuntu22 main" | sudo tee /etc/apt/sources.list.d/intel-openvino-2023.list
sudo apt update
apt-cache search openvino
sudo apt install openvino
# check whether it was installed
apt list --installed | grep openvino
# build the bundled C++ samples
/usr/share/openvino/samples/cpp/build_samples.sh
The shared libraries are installed at: /usr/lib/openvino-2023.3.0
VS Code test code:
#include <iostream>
#include <string>
#include <vector>
#include <openvino/openvino.hpp>

int main(int, char**) {
    // -------- Get OpenVINO runtime version --------
    std::cout << ov::get_openvino_version().description << ':'
              << ov::get_openvino_version().buildNumber << std::endl;

    // -------- Step 1. Initialize OpenVINO Runtime Core --------
    ov::Core core;

    // -------- Step 2. Get list of available devices --------
    std::vector<std::string> availableDevices = core.get_available_devices();

    // -------- Step 3. Print the available devices --------
    std::cout << "Available devices:" << std::endl;
    for (auto&& device : availableDevices) {
        std::cout << device << std::endl;
    }
    return 0;
}
CMakeLists.txt
cmake_minimum_required(VERSION 3.15)
project(COCO_test)

# OpenVINO: if CMake cannot find the package, point OpenVINO_DIR at the runtime/cmake directory
# set(OpenVINO_DIR "openvino_2023.3.0/runtime/cmake/")
set(LINK_DIR "/usr/lib/openvino-2023.3.0")
find_package(OpenVINO REQUIRED)
find_package(OpenCV REQUIRED)

include_directories(${OpenCV_INCLUDE_DIRS})
link_directories(${LINK_DIR})

add_executable(test main.cpp)
# openvino::runtime is an imported target that already carries its include paths and libraries
target_link_libraries(test
    PRIVATE openvino::runtime
            ${OpenCV_LIBS})
Python
Make sure you have created and activated a virtual environment, then install OpenVINO™ Runtime with:
conda update --all
conda install -c conda-forge openvino
TensorRT
Deploying YOLO-series models, accuracy alignment, and INT8 quantization acceleration - bilibili
OpenCV
Running YOLOv5 ONNX model inference with the OpenCV DNN module - CSDN blog
Running ONNX model inference in C++ with onnxruntime/opencv (with code) - Zhihu
To be continued...