Abstract
The Inference Server User Guide provides a detailed overview of the Inference Server. This guide also provides documentation on the Inference Server model store and the Inference Server API. This is a Beta release for early testing and feedback.
1. Overview Of The Inference Server
- Multiple model support
- The server can manage any number and mix of models (limited by system disk and memory resources). The server supports TensorRT and TensorFlow GraphDef model formats.
- Multi-GPU support
- The server can distribute inferencing across all system GPUs.
- Multi-tenancy support
- Multiple models (or multiple instances of the same model) can run simultaneously on the same GPU.
- Batching support
- The server can accept requests for a batch of inputs and respond with the corresponding batched outputs.
The Inference Server itself is provided as a pre-built container. External to the server, the API schemas, C++ and Python client libraries, and related documentation are provided as source at GitHub: Inference Server.
Contents Of The Inference Server Container
This image contains the inference server in /opt/inference_server. The executable is /opt/inference_server/bin/inference_server.
2. Pulling The Inference Server Container
You can pull (download) an NVIDIA container that is already built, tested, tuned, and ready to run. Each NVIDIA deep learning container includes the code required to build the framework so that you can make changes to the internals. The containers do not contain sample data-sets or sample model definitions unless they are included with the source for the framework.
Currently, you can access NVIDIA GPU accelerated containers in one of two ways depending on where you are doing your training. If you own a DGX-1™ or a DGX Station™, then you should use the NVIDIA® DGX™ container registry located at https://compute.nvidia.com. You can pull the containers from there, and you can also push containers into your own account in the nvidia-docker repository, nvcr.io.
Before you can pull a container, you must have Docker and nvidia-docker installed as explained in the Preparing To Use NVIDIA Containers Getting Started Guide. You must also have access to, and be logged into, the NGC container registry as explained in the NGC Getting Started Guide.
For step-by-step instructions, see Container User Guide.
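As a rough sketch (the repository path nvcr.io/nvidia/inferenceserver and the tag are assumptions; use the repository and tag shown for the Inference Server in your registry account), pulling the container typically looks like the following:
$ docker login nvcr.io                                  # authenticate with your NGC or DGX registry credentials
$ docker pull nvcr.io/nvidia/inferenceserver:<xx.yy>    # substitute the release tag, for example 18.04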
3. Running The Inference Server Container
$ nvidia-docker run --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8000:8000 --mount type=bind,source=/path/to/model/store,target=/tmp/models <container> /opt/inference_server/bin/inference_server --model_base_path=/tmp/models
Where <container> is the name of the docker container that was pulled from the NVIDIA DGX or NGC container registry as described in Pulling The Inference Server Container.
The nvidia-docker --mount option maps /path/to/model/store on the host into the container at /tmp/models, and the --model_base_path option to the Inference Server is used to point to /tmp/models as the model store.
The Inference Server listens on port 8000 and the above command uses the -p flag to map container port 8000 to host port 8000. A different host port can be used by modifying the -p flag, for example -p9000:8000 will cause the Inference Server to be available on host port 9000.
The --shm-size and --ulimit flags are recommended to improve Inference Server performance. For --shm-size the minimum recommended size is 1g, but larger sizes may be necessary depending on the number and size of models being served.
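For example, the following variant of the command above (paths and sizes are illustrative; adjust them for your system) makes the server available on host port 9000 and allocates a larger shared memory segment:
$ nvidia-docker run --rm --shm-size=2g --ulimit memlock=-1 --ulimit stack=67108864 -p9000:8000 --mount type=bind,source=/path/to/model/store,target=/tmp/models <container> /opt/inference_server/bin/inference_server --model_base_path=/tmp/models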
If the server starts correctly, it logs output to the console similar to the following:
Starting server listening on :8000
4. Verifying The Inference Server
Use the Server Status API to verify that the server is running and that the models are available. For example, performing an HTTP GET to the /api/status endpoint returns the status of the server and of each model being served:
$ curl localhost:8000/api/status
version: "18.04"
model_status {
  key: "resnet50"
  value {
    config {
      name: "resnet50"
      model_platform: "tensorflow_graphdef"
      max_batch_size: 128
      input {
        name: "input"
        data_type: TYPE_FP32
        format: FORMAT_NHWC
        dims: 224
        dims: 224
        dims: 3
      }
      output {
        name: "output"
        data_type: TYPE_FP32
        dims: 1000
        label_filename: "resnet50_labels.txt"
      }
      ...
5. Model Store
The model store must be laid out on disk with the following structure:
<model_base_path>/
  model_0/
    config.pbtxt
    output0_labels.txt
    1/
      model.plan
    2/
      model.plan
  model_1/
    config.pbtxt
    output0_labels.txt
    output1_labels.txt
    3/
      model.graphdef
  model_2/
  …
  model_n/
Any number of models may be specified. The name of the model directory (for example, model_0, model_1) must match the name of the model specified in the required configuration file, config.pbtxt. This model name is used in the client and server APIs to identify the model. Each model directory must have at least one numeric subdirectory (for example, model_0/1). Each of these subdirectories holds a version of the model with the version number corresponding to the directory name. Within the version directory is the model definition file. The name must be model.plan for TensorRT models, and model.graphdef for TensorFlow GraphDef models.
The configuration file, config.pbtxt, for each model must be protobuf text adhering to the ModelConfig schema defined and explained below. The *_labels.txt files are optional and are used to provide labels for outputs that represent classifications.
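As an illustrative sketch (the model name my_plan_model and the source file names are hypothetical), creating a model store entry for a single-version TensorRT model might look like the following; the config.pbtxt must name the model my_plan_model and, if classification labels are used, reference the labels file via label_filename:
$ mkdir -p /path/to/model/store/my_plan_model/1
$ cp my_model.plan /path/to/model/store/my_plan_model/1/model.plan    # version 1 of the model
$ cp my_labels.txt /path/to/model/store/my_plan_model/my_labels.txt   # optional classification labels
$ $EDITOR /path/to/model/store/my_plan_model/config.pbtxt             # ModelConfig protobuf text for the model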
5.1. Model Configuration Schema
Each model in the model store must include a file called config.pbtxt that contains the configuration information for the model. The model configuration must be specified as protobuf text using the ModelConfig schema described at GitHub: Inference Server model_config.proto.
name: "trt_mnist" model_platform: "tensorrt_plan" max_batch_size: 8 input [ { name: "data" data_type: TYPE_FP32 format: FORMAT_NCHW dims: [ 1, 28, 28 ] } ] output [ { name: "prob" data_type: TYPE_FP32 dims: [ 10, 1, 1 ] label_filename: "mnist_labels.txt" } ] instance [ { gpus: [ 0 ] }, { gpus: [ 0 ] } ]
name: "resnet50" model_platform: "tensorflow_graphdef" max_batch_size: 128 input [ { name: "input" data_type: TYPE_FP32 format: FORMAT_NHWC dims: [ 224, 224, 3 ] } ] output [ { name: "output" data_type: TYPE_FP32 dims: [ 1000 ] } ] instance [ { gpus: [ 0 ] }, { gpus: [ 1 ] } ]
6. Inference Server API
The Inference Server exposes two HTTP endpoints:
- /api/status
- The server status API for getting information about the server and about the models being served.
- /api/infer
- The inference API that accepts model inputs, runs inference and returns the requested outputs.
The HTTP endpoints can be used directly as described in this section, but for most use cases, the preferred way to access the Inference Server is via the C++ and Python client API libraries. The libraries are available at GitHub: Inference Server.
6.1. Server Status API
Performing an HTTP GET to /api/status returns status information about the server and all the models being served. Performing an HTTP GET to /api/status/<model name> returns information about the server and the single model specified by <model name>. An example is shown in Verifying The Inference Server.
The server status is returned in the HTTP response body in either text format (the default) or in binary format if the query parameter format=binary is specified (for example, /api/status?format=binary). The status schema is defined by the protobuf schema in server_status.proto, available at GitHub: Inference Server server_status.proto.
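For example (assuming the server is running on the default port and serving the resnet50 model shown in Verifying The Inference Server), the status of a single model can be requested in text or binary format:
$ curl localhost:8000/api/status/resnet50                                  # text format (the default)
$ curl -o status.bin "localhost:8000/api/status/resnet50?format=binary"   # binary-encoded status written to a file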
The success or failure of the request is indicated by the NV-Status response header, for example:
NV-Status: code: SUCCESS
NV-Status: code: NOT_FOUND msg: "no status available for unknown model \'x\'"
6.2. Inference API
An inference request is made by performing an HTTP POST to /api/infer/<model name>. The request must include an NV-InferRequest header that describes, in protobuf text format, the batch size, the input tensors being provided, and the output tensors being requested. For example, the following header describes a request with a batch size of one, a single input tensor named input, and a single output tensor named output requested as a classification (cls) result with three values:
NV-InferRequest: batch_size: 1 input { name: "input" byte_size: 602112 } output { name: "output" byte_size: 4000 cls { count: 3 } }
The input tensor values are communicated in the body of the HTTP POST request as raw binary, in the same order as the inputs are listed in the request header.
The HTTP response body contains the requested outputs, with the raw output tensors (if any) followed by a binary-encoded InferResponseHeader protobuf that describes the results:
<raw binary tensor values for output0, if raw output was requested for output0>
<raw binary tensor values for output1, if raw output was requested for output1>
...
<raw binary tensor values for outputn, if raw output was requested for outputn>
<binary encoded InferResponseHeader proto>
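As a rough sketch (the model name and byte sizes are taken from the resnet50 example above, and input_tensor.bin is a hypothetical file containing the raw FP32 input tensor), an inference request can be issued directly with curl by supplying the NV-InferRequest header and the raw input data as the POST body:
$ curl -X POST localhost:8000/api/infer/resnet50 \
    -H 'NV-InferRequest: batch_size: 1 input { name: "input" byte_size: 602112 } output { name: "output" byte_size: 4000 cls { count: 3 } }' \
    --data-binary @input_tensor.bin \
    -o response.out   # raw outputs (if requested) followed by the binary-encoded InferResponseHeader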
As with the status API, the success or failure of the inference request is indicated by the NV-Status response header, for example:
NV-Status: code: SUCCESS
NV-Status: code: NOT_FOUND msg: "no status available for unknown model \'x\'"
7. Support
For questions, bug reports, and feature requests related to the Inference Server, create an issue in the Inference Server issue tracker at GitHub: Inference Server Issues.
Notices
Notice
THE INFORMATION IN THIS GUIDE AND ALL OTHER INFORMATION CONTAINED IN NVIDIA DOCUMENTATION REFERENCED IN THIS GUIDE IS PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE INFORMATION FOR THE PRODUCT, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the product described in this guide shall be limited in accordance with the NVIDIA terms and conditions of sale for the product.
THE NVIDIA PRODUCT DESCRIBED IN THIS GUIDE IS NOT FAULT TOLERANT AND IS NOT DESIGNED, MANUFACTURED OR INTENDED FOR USE IN CONNECTION WITH THE DESIGN, CONSTRUCTION, MAINTENANCE, AND/OR OPERATION OF ANY SYSTEM WHERE THE USE OR A FAILURE OF SUCH SYSTEM COULD RESULT IN A SITUATION THAT THREATENS THE SAFETY OF HUMAN LIFE OR SEVERE PHYSICAL HARM OR PROPERTY DAMAGE (INCLUDING, FOR EXAMPLE, USE IN CONNECTION WITH ANY NUCLEAR, AVIONICS, LIFE SUPPORT OR OTHER LIFE CRITICAL APPLICATION). NVIDIA EXPRESSLY DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY OF FITNESS FOR SUCH HIGH RISK USES. NVIDIA SHALL NOT BE LIABLE TO CUSTOMER OR ANY THIRD PARTY, IN WHOLE OR IN PART, FOR ANY CLAIMS OR DAMAGES ARISING FROM SUCH HIGH RISK USES.
NVIDIA makes no representation or warranty that the product described in this guide will be suitable for any specified use without further testing or modification. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to ensure the product is suitable and fit for the application planned by customer and to do the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this guide. NVIDIA does not accept any liability related to any default, damage, costs or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this guide, or (ii) customer product designs.
Other than the right for customer to use the information in this guide with the product, no other license, either expressed or implied, is hereby granted by NVIDIA under this guide. Reproduction of information in this guide is permissible only if reproduction is approved by NVIDIA in writing, is reproduced without alteration, and is accompanied by all associated conditions, limitations, and notices.
Trademarks
NVIDIA, the NVIDIA logo, and cuBLAS, CUDA, cuDNN, cuFFT, cuSPARSE, DIGITS, DGX, DGX-1, DGX Station, GRID, Jetson, Kepler, NVIDIA GPU Cloud, Maxwell, NCCL, NVLink, Pascal, Tegra, TensorRT, Tesla and Volta are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.