Deploying deep learning models on multiple platforms (OpenVINO/ONNX Runtime, etc.)

guojin-yan/DeploySharp
DeploySharp

简体中文 | English

📚 Introduction

DeploySharp is a cross-platform model deployment framework designed for C# developers, offering end-to-end solutions from model loading and configuration management to inference execution. Its modular namespace architecture significantly reduces the complexity of integrating deep learning models into the C# ecosystem.

1. Architecture & Layered Design

  • Root namespace DeploySharp serves as a unified entry point for core features (model loading, inference, etc.).
  • Modular sub-namespaces (e.g., DeploySharp.Engine) enable clear functional layers.
  • Generic class designs support standard data interfaces for tasks like image processing/classification/detection.

2. Multi-Engine Support

  • Native integration with OpenVINO (OpenVinoSharp) and ONNX Runtime.
  • Compatibility with YOLOv5-v12 models, Anomalib, and other mainstream architectures.

3. Cross-Platform Runtime

  • Supports .NET Framework 4.8+ and .NET 6/7/8/9.
  • Deep integration with .NET NuGet ecosystem.

4. High-Performance Inference

  • Asynchronous operations (System.Threading.Tasks).
  • Batch/single-image inference modes.
  • Rich pre-/post-processing (ImageSharp/OpenCvSharp).
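
The asynchronous mode mentioned above can be combined with the synchronous `Predict` call shown in the Quick Start section. Below is a minimal, illustrative sketch (the `AsyncInferenceDemo` class and `RunAsync` method are hypothetical names; it assumes the same `Yolov5DetModel`/`Predict` API used in Quick Start and simply offloads the blocking call with `Task.Run`):

```csharp
using System.Threading.Tasks;
using DeploySharp.Model;
using OpenCvSharp;

public static class AsyncInferenceDemo
{
    // Hypothetical helper: offloads the blocking Predict call to a worker
    // thread so a UI or service thread stays responsive.
    public static async Task RunAsync(string modelPath, string imagePath)
    {
        var config = new Yolov5DetConfig(modelPath);
        var model = new Yolov5DetModel(config);
        Mat img = Cv2.ImRead(imagePath);
        // Task.Run moves the CPU-bound inference off the calling thread
        var result = await Task.Run(() => model.Predict(img));
    }
}
```

Whether DeploySharp also exposes a native awaitable prediction API is not shown in this README; check the API documentation before relying on this pattern.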

5. Developer Support

  • Bilingual (EN/CN) code comments and documentation.
  • log4net logging (error/warning/debug levels).
  • Visualization tools and comprehensive code samples.

Licensed under Apache License 2.0. Future updates will expand TensorRT support and optimize heterogeneous computing.

📄Technical Blog

🎨Supported Models

| Model Name | Model Type |
| --- | --- |
| YOLOv5 | Detection |
| YOLOv5 | Segmentation |
| YOLOv6 | Detection |
| YOLOv7 | Detection |
| YOLOv8 | Detection |
| YOLOv8 | Segmentation |
| YOLOv8 | Pose |
| YOLOv8 | Oriented Bounding Boxes |
| YOLOv9 | Detection |
| YOLOv9 | Segmentation |
| YOLOv10 | Detection |
| YOLOv11 | Detection |
| YOLOv11 | Segmentation |
| YOLOv11 | Pose |
| YOLOv11 | Oriented Bounding Boxes |
| YOLOv12 | Detection |
| Anomalib | Segmentation |
| PP-YOLOE | Detection |
| DEIMv2 | Detection |
| RFDETR | Detection |
| RFDETR | Segmentation |
| RTDETR | Detection |
| YOLO26 | Detection |
| YOLO26 | Segmentation |
| YOLO26 | Pose |
| YOLO26 | Oriented Bounding Boxes |
| PP-OCR v5 | Detection |
| PP-OCR v5 | Classification |
| PP-OCR v5 | Recognize |
| PP-OCR v5 | Det+Cls+Rec |
| PP-OCR v4 | Detection |
| PP-OCR v4 | Classification |
| PP-OCR v4 | Recognize |
| PP-OCR v4 | Det+Cls+Rec |

NuGet Package

Core Managed Libraries

| Package | Description | Link |
| --- | --- | --- |
| JYPPX.DeploySharp | DeploySharp API core libraries | NuGet Gallery |

Native Runtime Libraries

| Package | Description | Link |
| --- | --- | --- |
| JYPPX.DeploySharp.ImageSharp | An assembly that uses ImageSharp as an image processing tool. | NuGet Gallery |
| JYPPX.DeploySharp.OpenCvSharp | An assembly that uses OpenCvSharp as an image processing tool. | NuGet Gallery |

⚙ Installation

DeploySharp supports multiple image processing libraries (OpenCvSharp, ImageSharp) as well as the OpenVINO and ONNX Runtime model deployment engines. Users can therefore combine them according to their own needs and install the corresponding NuGet packages to get started out of the box. The following summarizes the NuGet packages to install for some common scenarios:

  • OpenVINO inference + OpenCvSharp image processing
    JYPPX.DeploySharp
    JYPPX.DeploySharp.OpenCvSharp
    OpenVINO.runtime.win
    OpenCvSharp4.runtime.win
  • OpenVINO inference + ImageSharp image processing
    JYPPX.DeploySharp
    JYPPX.DeploySharp.ImageSharp
    OpenVINO.runtime.win
  • ONNX Runtime inference + OpenCvSharp image processing
    JYPPX.DeploySharp
    JYPPX.DeploySharp.OpenCvSharp
    OpenCvSharp4.runtime.win
  • ONNX Runtime inference + ImageSharp image processing
    JYPPX.DeploySharp
    JYPPX.DeploySharp.ImageSharp
  • ONNX Runtime (OpenVINO) inference + ImageSharp image processing
    JYPPX.DeploySharp
    JYPPX.DeploySharp.ImageSharp
    Intel.ML.OnnxRuntime.OpenVino
  • ONNX Runtime (DirectML) inference + ImageSharp image processing
    JYPPX.DeploySharp
    JYPPX.DeploySharp.ImageSharp
    Microsoft.ML.OnnxRuntime.DirectML
  • ONNX Runtime (CUDA) inference + ImageSharp image processing
    JYPPX.DeploySharp
    JYPPX.DeploySharp.ImageSharp
    Microsoft.ML.OnnxRuntime.Gpu
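
As a concrete example, the packages for the first scenario above (OpenVINO inference with OpenCvSharp image processing) can be added from the command line (this assumes the `dotnet` CLI and an existing project; the package names are those listed above):

```shell
# Core DeploySharp managed API
dotnet add package JYPPX.DeploySharp
# OpenCvSharp-based image processing assembly
dotnet add package JYPPX.DeploySharp.OpenCvSharp
# Native runtimes for Windows
dotnet add package OpenVINO.runtime.win
dotnet add package OpenCvSharp4.runtime.win
```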

  Because CUDA acceleration for ONNX Runtime depends on the GPU model and installed software versions, packages must be chosen according to the official compatibility matrix published by ONNX Runtime. Please refer to the following link for the correspondence between ONNX Runtime, CUDA, and cuDNN versions:

https://runtime.onnx.org.cn/docs/execution-providers/CUDA-ExecutionProvider.html#requirements

  All of the scenarios listed above can be set up with one-click NuGet package installation. ONNX Runtime also supports additional acceleration methods (execution providers), but these require users to build the code themselves. For the build process and method, please refer to the official tutorial:

https://runtime.onnx.org.cn/docs/execution-providers/

🏷 Quick Start

  The following examples briefly demonstrate how to use DeploySharp.

ImageSharp

using DeploySharp.Data;
using DeploySharp.Engine;
using DeploySharp.Model;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using System;

namespace DeploySharp.ImageSharp.Demo
{
    public class YOLOv5DetDemo
    {
        public static void Run()
        {
            // The model and test images can be downloaded from the QQ group (945057948)
            // Replace the following model path with your own model path
            string modelPath = @"E:\Model\Yolo\yolov5s.onnx";
            // Replace the image path below with your own image path
            string imagePath = @"E:\Data\image\bus.jpg";
            Yolov5DetConfig config = new Yolov5DetConfig(modelPath);
            //config.SetTargetInferenceBackend(InferenceBackend.OnnxRuntime);
            Yolov5DetModel model = new Yolov5DetModel(config);
            var img = Image.Load<Rgb24>(imagePath);
            var result = model.Predict(img);
            model.ModelInferenceProfiler.PrintAllRecords();
            var resultImg = Visualize.DrawDetResult(result, img, new VisualizeOptions(1.0f));
            resultImg.Save($"./result_{ModelType.YOLOv5Det}.jpg");
        }
    }
}

OpenCvSharp

using OpenCvSharp;
using DeploySharp;
using DeploySharp.Data;
using DeploySharp.Engine;
using DeploySharp.Model;

namespace DeploySharp.OpenCvSharp.Demo
{
    public class YOLOv5DetDemo
    {
        public static void Run()
        {
            // The model and test images can be downloaded from the QQ group (945057948)
            // Replace the following model path with your own model path
            string modelPath = @"E:\Model\Yolo\yolov5s.onnx";
            // Replace the image path below with your own image path
            string imagePath = @"E:\Data\image\bus.jpg";
            Yolov5DetConfig config = new Yolov5DetConfig(modelPath);
            config.SetTargetInferenceBackend(InferenceBackend.OnnxRuntime);
            Yolov5DetModel model = new Yolov5DetModel(config);
            Mat img = Cv2.ImRead(imagePath);
            var result = model.Predict(img);
            model.ModelInferenceProfiler.PrintAllRecords();
            var resultImg = Visualize.DrawDetResult(result, img, new VisualizeOptions(1.0f));
            Cv2.ImShow("image", resultImg);
            Cv2.WaitKey();
        }
    }
}

💻 Use Cases

  For more application cases, please refer to:

| Type | Framework | Link |
| --- | --- | --- |
| Desktop App | .NET Framework 4.8 | DeploySharp.ImageSharp-ApplicationPlatform |
| Desktop App | .NET 6.0 | DeploySharp.OpenCvSharp-ApplicationPlatform |
| Console App | .NET Framework 4.8, .NET 6.0-9.0 | DeploySharp.samples |
| Desktop App | .NET 8.0 | [JYPPX.DeploySharp.OpenCvSharp.PaddleOcr](https://github.com/guojin-yan/DeploySharp/tree/DeploySharpV1.0/applications/.NET%208.0/JYPPX.DeploySharp.OpenCvSharp.PaddleOcr) |

🗂Documentation

  Explore the full API: DeploySharp API Documentation

🎖 Contribution

  If you are interested in using DeploySharp in C# and in contributing to the open source community, please join us to develop DeploySharp together.

  If you have any ideas or suggestions for improving this project, please feel free to contact us.

License

  This project is released under the Apache License 2.0.

🧑‍🔧 Technical Support

 If you have any questions or suggestions, feel free to reach out via the following channels:

  • 📧 GitHub Issues: Submit an Issue or Pull Request in the project repository.
  • 💬 QQ Group: Join 945057948 for faster and more convenient responses.

📢Software Notice

1. Open Source License Statement

All of the author's open source project code is released under the Apache License 2.0.

Special Note: This project integrates several third-party libraries. If the license terms of any third-party library conflict with or are inconsistent with the Apache License 2.0, the original license terms of the specific third-party library shall prevail. This project does not include nor represent the authorization declarations of these third-party libraries. Please be sure to read and comply with the relevant licenses of the third-party libraries before use.

2. Code Development and Quality Description

  • AI-Assisted Development: Artificial Intelligence (AI) was used to assist in the generation and optimization of this code; it was not written entirely line-by-line by a human.
  • Safety Commitment: The author solemnly declares that there are absolutely no intentional backdoors, viruses, trojans, or malicious code designed to damage user equipment or steal data in this code.
  • Technical Limitations: Due to the author's personal technical level and experience limitations, the code may contain rudimentary issues caused by loose logic or insufficient optimization (including but not limited to memory leaks, occasional crashes, and unreleased resources). These issues stem purely from limited ability, not from intent.
  • Testing Scope: Due to the author's limited energy, comprehensive testing covering all edge scenarios has not been performed on this software.

3. Disclaimer (Important)

Please perform detailed and rigorous self-testing and verification before applying this code to any actual project (especially commercial, industrial, or critical mission environments). In view of the potential code defects and insufficient test coverage mentioned above, the author assumes no responsibility for any direct or indirect losses caused by the use of this code (including but not limited to equipment failure, data loss, system paralysis, or loss of profits). Once you start using this code, it indicates that you are aware of the above risks and agree to bear all consequences yourself; related issues have nothing to do with the author.

4. Open Source Scope

This project commits to fully open-sourcing the core logic code. However, the binary files, source code, or related resources of the "third-party libraries" mentioned above are not within the scope of this project's open-source obligation; please obtain them according to their respective guidelines.

5. Community and Feedback

Despite the aforementioned shortcomings, we still welcome everyone to download, use, submit Issues, or participate in testing to improve the project together. If you discover bugs, memory overflows, or have suggestions for improvement during use, please contact the author via the contact information provided on the project homepage, and we will do our best to assist within our limited time.

