AI Software Development Kit User Manual v1.4.1
Introduction 1
Safety notes 2
Installing AI Software Development Kit 3
Using AI Software Development Kit 4
Guideline for writing pipeline components 5
Operating Manual
04/2023
A5E52031285-AE
Legal information
Warning notice system
This manual contains notices you have to observe in order to ensure your personal safety, as well as to prevent
damage to property. The notices referring to your personal safety are highlighted in the manual by a safety alert
symbol; notices referring only to property damage have no safety alert symbol. The notices shown below are
graded according to the degree of danger.
DANGER
indicates that death or severe personal injury will result if proper precautions are not taken.
WARNING
indicates that death or severe personal injury may result if proper precautions are not taken.
CAUTION
indicates that minor personal injury can result if proper precautions are not taken.
NOTICE
indicates that property damage can result if proper precautions are not taken.
If more than one degree of danger is present, the warning notice representing the highest degree of danger will
be used. A notice warning of injury to persons with a safety alert symbol may also include a warning relating to
property damage.
Qualified Personnel
The product/system described in this documentation may be operated only by personnel qualified for the specific
task in accordance with the relevant documentation, in particular its warning notices and safety instructions.
Qualified personnel are those who, based on their training and experience, are capable of identifying risks and
avoiding potential hazards when working with these products/systems.
Proper use of Siemens products
Note the following:
WARNING
Siemens products may only be used for the applications described in the catalog and in the relevant technical
documentation. If products and components from other manufacturers are used, these must be recommended
or approved by Siemens. Proper transport, storage, installation, assembly, commissioning, operation and
maintenance are required to ensure that the products operate safely and without any problems. The permissible
ambient conditions must be complied with. The information in the relevant documentation must be observed.
Trademarks
All names identified by ® are registered trademarks of Siemens AG. The remaining trademarks in this publication
may be trademarks whose use by third parties for their own purposes could violate the rights of the owner.
Disclaimer of Liability
We have reviewed the contents of this publication to ensure consistency with the hardware and software
described. Since variance cannot be precluded entirely, we cannot guarantee full consistency. However, the
information in this publication is reviewed regularly and any necessary corrections are included in subsequent
editions.
1 Introduction ........................................................................................................................................... 5
1.1 Overview of Industrial Edge ................................................................................................. 5
1.2 AI@Edge .............................................................................................................................. 7
1.3 AI Software Development Kit functionalities ......................................................................... 8
1.4 Information about the software license ................................................................................ 8
2 Safety notes ........................................................................................................................................... 9
2.1 Security information ............................................................................................................ 9
2.2 Note on use ......................................................................................................................... 9
2.3 Note regarding the general data protection regulation ....................................................... 10
3 Installing AI Software Development Kit .............................................................................................. 11
3.1 Install and run.................................................................................................................... 11
4 Using AI Software Development Kit .................................................................................................... 14
4.1 Preparing data for training ................................................................................................. 14
4.2 Training models ................................................................................................................. 15
4.3 Packaging models into an inference pipeline ...................................................................... 16
4.4 Test the pipeline configuration package locally................................................................... 20
4.5 Mocking the logger of AI Inference Server .......................................................................... 24
4.6 Deploy the packaged inference pipeline for AI@Edge.......................................................... 24
4.7 Create delta package and deploy to AI@Edge ..................................................................... 24
5 Guideline for writing pipeline components ........................................................................................ 25
5.1 Component definition ........................................................................................................ 25
5.2 The entrypoint ................................................................................................................... 26
5.3 Input data .......................................................................................................................... 27
5.3.1 Variable types .................................................................................................................... 27
5.3.2 Restrictions on type Object................................................................................................. 28
5.3.3 Custom data formats ......................................................................................................... 28
5.4 Processing data.................................................................................................................. 30
5.5 Python dependencies ......................................................................................................... 30
5.6 File resources..................................................................................................................... 31
5.7 Returning the result ........................................................................................................... 32
5.8 Adding custom metrics ...................................................................................................... 33
5.9 Pipeline parameters ........................................................................................................... 34
1.2 AI@Edge
The Siemens Industrial Edge ecosystem is extended by the SIMATIC AI Launcher products for
industrial AI. With SIMATIC AI Launcher, the scalable Industrial Edge ecosystem gains AI
capabilities that facilitate the provisioning of AI models in the production environment on the
shop floor.
See also
Industrial Edge Homepage (https://new.siemens.com/global/en/products/automation/topic-
areas/industrial-edge.html)
AI@Edge Homepage (https://new.siemens.com/global/en/products/automation/topic-
areas/industrial-edge/production-machines.html)
Prerequisites
Before you begin, make sure you have internet access. If you reach the internet through a
proxy, for example because you are working in a corporate network, either directly or via VPN,
make sure that you have configured the following tools to use the correct proxy:
• pip
• conda (If you also use conda)
Setting the environment variables http_proxy and https_proxy covers both tools. A detailed
explanation of alternative solutions is provided in:
• Using a proxy server (https://pip.pypa.io/en/stable/user_guide/#using-a-proxy-server)
• Using Anaconda behind a company proxy (https://docs.anaconda.com/anaconda/user-
guide/tasks/proxy/)
# install packages required for the template, including the AI SDK and ipykernel
pip install ipykernel -r requirements.txt -f <directory path containing simaticai wheel>
Please note that you have to specify a path to the directory containing the AI SDK wheel, not
a path to the wheel itself.
Once the required packages are installed, you can explore and execute the notebooks in your
notebook editor.
Please make sure that you select the appropriate interactive Python kernel to execute the
notebooks, which is in this example: Python (state_identifier).
Note that by default, pip installs the newest available versions of the required packages that
are compatible with the AI SDK and the project template. If you want to make sure that you
use the versions listed in Readme_OSS, you can apply the appropriate constraints during
installation as follows:
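A possible sketch, assuming the pinned versions are shipped in a constraints file (the file name constraints.txt is an assumption, not part of the template):

pip install ipykernel -r requirements.txt -c constraints.txt -f <directory path containing simaticai wheel>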
The building blocks help you create a time series pipeline that processes a stream of such
rows according to the following pattern:
To train the classifier in such a pipeline, the input data must be routed through the
preprocessing steps during the training process.
Therefore, this processing pipeline must be defined as part of data preparation before
training. This is where the building blocks in the State Identifier project template come into
play.
These building blocks are based on the widely used machine learning Python package
scikit-learn. Scikit-learn provides a framework for defining pipelines that allows you to
combine data transformers with classifiers or other kinds of estimators. The building blocks
are located in the src/pipeline.py file in the State Identifier project template. The main
ones are:
• WindowTransformer, which transforms a series of input rows into a series of windows
of rows
• FeatureTransformer, which transforms a window of rows into the feature values
according to user-defined functions.
In addition to these transformers, there is a transformer called FillMissingValues, which
performs input data correction for simple cases. For more advanced cases, you should use a
more sophisticated imputer to correct your input.
For more details and concrete examples, please refer to the training notebooks in the State
Identifier project template. We also recommend studying the scikit-learn documentation if
you would like to understand in depth how scikit-learn works or if you want to implement
your own transformers.
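For illustration, such a pipeline might be assembled as follows. This is a minimal sketch: the constructor parameters of the transformers and the choice of estimator are assumptions, not the exact interface of the template; see the training notebooks for the exact usage.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.pipeline import Pipeline

from pipeline import WindowTransformer, FeatureTransformer  # from src/pipeline.py

model = Pipeline([
    ('windowing', WindowTransformer(window_size=300, step_size=300)),  # rows -> windows (parameter names assumed)
    ('features', FeatureTransformer(functions=[np.mean, np.std])),     # windows -> feature values (parameter names assumed)
    ('classifier', KMeans(n_clusters=3)),                              # illustrative estimator choice
])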
Alternatively, you can deploy the same pipeline split into two components:
To keep things simple and less error-prone, you should usually deploy your inference pipeline
with as few components as possible.
In many cases, a single component is sufficient. However, there may be reasons why you
should consider using separate components, such as:
• You need a different Python environment for different parts of your processing, e.g., you
have components that require conflicting package versions.
• You want to exploit parallelism between components without implementing
multithreading.
• You want to modularize your pipeline and build it from a pool of component variants that
you can flexibly combine.
We recommend that you mirror the directory layout used on AI Inference Server, which allows
you to use the same relative references from the source code to the stored models or other files.
Put together all the files for the component. Usually, these are at least a Python script for
the entry point, the inference wrapper, and a saved model. Create the pipeline component by
running a Python script or notebook that provides the following functionality:
• creates a PythonComponent object with a specific name, component version, and
required Python version
• defines required Python packages
• defines input and output variables
• defines custom metrics
• defines the number of parallel executors
• adds Python scripts and saved models
• defines the entry point under the Python scripts
All of this is done with the corresponding functionality of the simaticai.deployment
module, as sketched below. For concrete examples, see the packaging notebooks in the project
templates. For more information and advanced options, see the AI SDK API reference manual.
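A condensed sketch of such a script, using only the API calls named above (the component name, file paths, and variable definitions are illustrative):

from simaticai import deployment

component = deployment.PythonComponent(name='my_component', version='1.0.0', python_version='3.8')
component.set_requirements('../requirements.txt')   # required Python packages
component.add_input('input_1', 'Double')            # input and output variables
component.add_output('class_label', 'Integer')
component.add_resources('..', 'entrypoint.py')      # Python scripts and saved models
component.add_resources('..', 'models/model.joblib')
component.set_entrypoint('entrypoint.py')           # entry point among the Python scripts
component.set_parallel_steps(2)                     # number of parallel executors

pipeline = deployment.Pipeline.from_components([component], name='My_package', version='1.0.0')
pipeline.save('../packages')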
Consider the following limitations:
• The AI SDK allows you to select a required Python version that is supported by different
versions of AI Inference Server.
• Make sure you select a Python version that is supported by the version installed on your
Industrial Edge target device. At the time of writing, this is Python version 3.8.
• The required Python packages must either be added as wheels to the pipeline component
or be available for download via pip for the target Inference Server.
• At the time of writing, there is a limitation that the entry point script must be in the root
folder of the package. This requirement may be relaxed in later versions of AI Inference
Server.
• AI Inference Server supports a maximum of 8 parallel executors.
In general, we recommend that you pass data from one component to another in a single
variable of type String and serialize and deserialize any data you have through a string.
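For example, a producer component can pack its structured result into the single String output, and the consumer can unpack it again (the variable name payload and the payload structure are illustrative):

import json

# producer component entrypoint: serialize the structured result into one String output
def process_input(data: dict):
    result = {'class_label': 3, 'confidence': 0.97}  # stands for your own processing result
    return {'payload': json.dumps(result)}

# consumer component entrypoint: deserialize the String input back into a dictionary
def process_input(data: dict):
    result = json.loads(data['payload'])
    # ... process 'result' further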
The low-level methods of the Pipeline class allow you to define arbitrary wiring between
components and pipeline inputs and outputs, but the AI SDK cannot guarantee that the result
will behave on AI Inference Server as intended.
For information about pipeline input and output with different data types and about defining
custom metrics, see the Guideline for writing pipeline components (Page 25). It explains how
input and output data is passed between AI Inference Server and your entry point, and covers
special considerations that apply to a continuous stream of time series data or to bulk data.
Whether you created the pipeline with a single constructor call or with low-level methods,
you must save it as a final step. This creates the pipeline configuration package as a .zip file
and leaves the contents of the .zip file in the file system, which you can explore to
troubleshoot or to see how your package creation calls are reflected in the contents of files
and directories.
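For example (the target directory is illustrative):

pipeline_package_path = pipeline.save('../packages')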
Pipeline parameters
Advanced use cases might require that the behavior of the pipeline is modified after
deployment, for example by changing the parameters of the AI model. The AI SDK therefore
allows you to define pipeline parameters.
In many respects, pipeline parameters are similar to pipeline inputs. But pipeline parameters
are handled separately and treated specially. Unlike input variables, pipeline parameters must
have a default value, which the parameter takes initially after deployment. Therefore, a
pipeline parameter's value is always defined.
Depending on the configuration, a pipeline parameter might be changed interactively
through the user interface of AI Inference Server or also be connected to an MQTT topic like
an input variable. In the latter case, the pipeline can receive parameter updates from other
system components through the External Databus.
The parameters defined for a pipeline apply to all components. This means that in a pipeline,
all components with parameters must be ready to receive parameter updates. A pipeline
component only needs to respect updates concerning parameters relevant for the given
component.
For details on how to define pipeline parameters and how to handle parameter updates in
the pipeline components, refer to Guideline for writing pipeline components (Page 25). For a
complete code example that shows how to define and use pipeline parameters, see the State
Identifier project template.
Parallel execution
By default, pipeline components process the inputs sequentially, within the same Python
interpreter context. To increase the throughput of a component, you can instruct AI
Inference Server to run multiple instances of a pipeline component and distribute the inputs
among them. This way you can exploit the parallelism available in most multi-core CPUs.
If you specify parallel component execution, every instance is initialized separately and
receives only a fraction of the inputs. Therefore not all components are suitable for parallel
execution.
For example, the single component of the pipeline given in project template State Identifier
cannot be executed by parallel instances because the component must process inputs
sequentially, one by one in order to form windows from the data.
Theoretically, you could separate State Identifier into two components, the first component
forming the windows and the second component calculating the features and the prediction.
Then, the second component could be run in multiple parallel instances, as it does not have
to remember previous inputs to calculate the output. (It is another matter if this complexity is
worth the effort in a given use case.)
In contrast, the component given in the Image Classification project template can be executed
by parallel instances out of the box. It is practically stateless: the only global state the
component uses is the model loaded during initialization. Otherwise, the component only
needs the current input to calculate the output.
Please note that with parallel component execution, there is no guarantee that the outputs
are produced in the same order as the corresponding inputs arrive. It might happen that one
instance overtakes another even if the raw CPU time required is roughly the same for all
inputs, as the component instances are competing for CPU cores with other applications
running on the Industrial Edge device.
You can predefine the number of parallel component instances using the AI SDK function
PythonComponent.set_parallel_steps(). This setting can be overridden on the user
interface of AI Inference Server.
The AI SDK package simaticai.testing provides two tools for local testing:
• A pipeline validator that performs static validation of the package for the availability of
required Python packages.
• A pipeline runner that allows you to simulate the execution of your pipeline in your
Python environment.
Note that all of these testing features apply to pipeline configuration packages, not Edge
configuration packages. You must use them before you convert your pipeline configuration
package to an Edge configuration package using the AI SDK.
Since the conversion itself is done automatically by AI Model Deployer, most of the potential
issues are already present in the package before the conversion, so a post-conversion
verification would only delay the identification of these issues.
You can also use the local pipeline runner to run your pipeline component by component.
You can feed individual components with inputs and verify the output produced.
If the pipeline contains parameters, the pipeline uses the default values for the parameters.
You can also change the parameter values using the update_parameters() method. In
this way you can test your pipeline with different parameters.
Note: You can only use the update_parameters() method before calling run_component() or
run_pipeline(); you cannot change pipeline parameters while these methods are running.
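A minimal sketch of such a local test (the LocalPipelineRunner name and constructor are assumptions here; see the AI SDK API reference for the exact interface):

import json
from simaticai.testing import LocalPipelineRunner  # class name assumed

runner = LocalPipelineRunner(pipeline_package_path)
runner.update_parameters({'windowing': json.dumps({'windowSize': 300, 'stepSize': 150})})  # values illustrative
pipeline_output = runner.run_pipeline([
    {'ph1': 4732.89, 'ph2': 4654.44, 'ph3': 4835.02},
])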
From a testing strategy and risk-based testing perspective, we recommend that you validate
the business logic within the pipeline components in unit tests as you would with any
ordinary Python program and use the local pipeline runner to cover test risks such as the
following:
• Mismatch between pipeline and component input and output variable names
• Required Python packages not covered by requirements.txt
• Source or other files are missing from the package
• Interface mismatch between subsequent pipeline components
• The entry point cannot process input data due to a mismatch in the data format
• Entry point generates output data in the wrong format
• For some reason, the pipeline does not work consistently as intended
A crucial point for making the local test faithful concerning data input and output formats is
to understand how data connections work in AI Inference Server. The following data
connection types are straightforward:
• Databus
• External Databus
• IE Vision Connector
For these data connection types, AI Inference Server passes the MQTT payload string directly
as the value of the connected pipeline input variable. In many use cases with these data
connection types, your pipeline has a single input variable of type String. This means that
you must pass the local pipeline runner a Python dictionary with a single element.
For example, if you take the pipeline from the image classification project template, you have
a single input variable vision_payload. To run your pipeline on two consecutive input
images, you must call the pipeline runner as follows:
pipeline_input1 = { 'vision_payload': mqtt_payload1 }
pipeline_input2 = { 'vision_payload': mqtt_payload2 }
pipeline_output = runner.run_pipeline([pipeline_input1, pipeline_input2])
For a complete code example that shows how to feed a pipeline with a single string input
variable in a local test, see the Local Pipeline Test Notebook in the Image Classification project
template.
The SIMATIC S7 Connector data connection type requires more attention. This connector is
typically used in time series use cases. With this connection, AI Inference Server processes
the MQTT payload produced by the S7 Connector and passes on only the values of the
PLC variables, not the metadata. So, if you intend to use your pipeline with the S7
Connector, you need to feed it with dictionaries holding the PLC tag values.
Taking the pipeline from the State Identifier project template as an example, you have the
input variables ph1, ph2 and ph3, which are meant to be used with the SIMATIC S7 Connector
data connection type. To mimic how AI Inference Server feeds the pipeline, you must call the
pipeline runner like this:
pipeline_input1 = {'ph1': 4732.89, 'ph2': 4654.44, 'ph3': 4835.02}
pipeline_input2 = {'ph1': 4909.13, 'ph2': 4775.16, 'ph3': 4996.67}
pipeline_output = runner.run_pipeline([pipeline_input1, pipeline_input2])
For a complete code example that shows how to feed a pipeline with an input line of PLC tag
values in a local test, see the Local Pipeline Test Notebook in the State Identifier project
template.
The reason for the required conversion is that while a pipeline configuration package fully
defines the inputs, outputs, and inner workings of an inference pipeline, it does not contain
all the components required to run in AI Inference Server. To make it complete for
deployment on AI Inference Server, the pipeline configuration package must be converted to
an edge configuration package.
Among other things, the conversion ensures that all the necessary Python packages for the
target platform are included. If any of the required packages, including transitive
dependencies, are not included in the pipeline configuration package for the target platform,
they will be downloaded from pypi.org.
The conversion function is available in the AI SDK both as a Python function and as a CLI
command. Please refer to the details of the function convert_package in the module
simaticai.deployment in the AI SDK API reference manual.
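A minimal sketch of the Python variant (assuming the package path returned by pipeline.save()):

from simaticai import deployment

deployment.convert_package(pipeline_package_path)  # produces the edge configuration package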
Example
The following code shows how to define component settings. The created configuration can
be checked in pipeline-config.yml, which is shown in the Examples (Page 39) section. Please
note that this code is only used to create the pipeline configuration package; it is not
contained in the package itself.
# create_pipeline_config_package.py
from simaticai import deployment

component = deployment.PythonComponent(name='classifier', version='1.0.0', python_version='3.8')
component.add_resources("../src", "entrypoint.py")
component.set_entrypoint('entrypoint.py')
In this example, the code uses a pre-trained scikit-learn model that is stored in a joblib file.
The file acts as a resource file of the component. For more details about resource files, see the
File resources section. The code also uses external Python modules (Page 30) that must be
deployed and installed on AI Inference Server.
# adding stored scikit-learn model as a resource file
component.add_resources('..','models/classifier-model.joblib')
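The input and output variables referenced below are defined in the same way (reconstructed here from the resulting pipeline-config.yml shown in the Examples section):

component.add_input('input_1', 'Double')
component.add_input('input_2', 'Double')
component.add_output('class_label', 'Integer')
component.add_output('confidence', 'Double')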
With this configuration, AI Inference Server collects data for input_1 and input_2. When
the data is ready, the server wraps it into a data payload and calls the process_input()
function from entrypoint.py. Once the data is processed and the class_label and
confidence results are calculated, the function generates a return value.
Example
# entrypoint.py
import sys
from pathlib import Path

# when you import from source, the parent folder of the module ('./src')
# must be added to the system path
sys.path.insert(0, str(Path('./src').resolve()))

from my_module import data_processor  # should be adapted to your code
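A sketch of the corresponding processing function (the model variable and the exact feature handling are assumptions):

def process_input(data: dict):
    features = [[data['input_1'], data['input_2']]]       # one row of input values
    class_label = int(model.predict(features)[0])         # model loaded at initialization
    confidence = float(model.predict_proba(features).max())
    return {'class_label': class_label, 'confidence': confidence}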
In general, you need to define the types of input and output variables as AI Inference Server
types, but the Python script should use the appropriate Python type. The correspondence
between AI Inference Server data types and Python data types is displayed in the following
table. The table below also shows which data type is supported by which data connection in
AI Inference Server version 1.4.
Note
The structure of dictionaries received as pipeline input differs from the dictionary structure
required as component output. See details in section Processing images (Page 35).
Adding resources
For the configuration package to transport these files to the server environment, you must
specify them using the add_resources(base_dir, resources) method, as shown
below:
# the method adds 'prediction_model.joblib' from the '../models' directory to the component
# and the file will be extracted on the server into the component folder under the 'models' directory
component.add_resources(base_dir="..", resources="models/prediction_model.joblib")

# in the same way, we define a file 'model-config.yml' to bring into the 'config' directory
component.add_resources(base_dir="..", resources="config/model-config.yml")
Once the pipeline is imported into AI Inference Server and the component is installed, the
files in the server's file system are available in the component directory and can be accessed
by the Python scripts:
# data_processor.py
import yaml
import joblib
from pathlib import Path

# Our goal is to have an identical relative path to the resources
# in the source repository and on the server.
base_dir = Path(__file__).parents[1]

# file 'model-config.yml' is extracted into the 'config' directory
config_path = base_dir / "config/model-config.yml"
with open(config_path) as config_file:
    model_config = yaml.safe_load(config_file)

# file 'prediction_model.joblib' is extracted into the 'models' directory
model_path = base_dir / "models/prediction_model.joblib"
with open(model_path, "rb") as model_file:
    model = joblib.load(model_file)
As loading files can be time-consuming, it is recommended to load files and ML models into
memory at initialization time of your Python code and not during the call to process_input().
Note
You cannot pass on a dictionary received as pipeline input as a component output because
the structure of these dictionaries is different.
In the receiver pipeline component, you can decode the image as follows:
# define input
component.add_input("processed_image", "Object")

# construct a PIL image from the metadata and the binary data
import json
from PIL import Image

def process_input(data: dict):
    metadata = json.loads(data['processed_image']['metadata'])
    image_data = data['processed_image']['bytes']
    mode = metadata['mode']
    width = metadata['width']
    height = metadata['height']
    image = Image.frombytes(mode, (width, height), image_data)
    # ... run inference on 'image' and return the result

A component reports a custom metric by returning it as an additional output alongside the
regular results:

return {
    "prediction": prediction,
    "metric_name": json.dumps({"values": metric_value}),
}
Once the pipeline is created, it collects the metrics from all components and provides them as
pipeline outputs, so AI Inference Server can treat them as outputs. Each custom metric is
visible on AI Inference Server as an output with a preconfigured topic, which needs to be
connected to the Databus.
Please note that you can also add custom metrics to a monitoring component provided by the
AI SDK Monitoring Extension. Refer to the AI SDK Monitoring Extension User Manual for more
details.
As a result, AI Inference Server allows you to map this parameter to an MQTT topic, similar to
the way input and output variables are mapped. In the handler function, the parameters can
be retrieved by first unfolding the JSON in the single formal pipeline parameter into a
dictionary, after which they can be accessed individually.
# entrypoint.py
import json

def update_parameters(params: dict):
    windowing = json.loads(params['windowing'])
    windowSize = windowing['windowSize']
    windowStepSize = windowing['stepSize']
The following fragment from Processing images (Page 35) shows a similar decoding step,
converting an image payload received with 'BGR' byte order into a PIL image:

width = image_data['resolutionWidth']
height = image_data['resolutionHeight']
# image received with 'BGR' byte order
return Image.frombytes('RGB', (width, height), image_data['image'], 'raw', 'BGR')
Ideally, each signal is a variable that is read by the Industrial Edge data bus, as configured in
AI Inference Server. Often, the signals from PLCs are captured using the Industrial Edge S7
Connector, which samples these signals from given PLC tags and makes them available on
the data bus.
Unfortunately, data points for the signals do not necessarily arrive at a regular rate or
synchronously. By default, your Python script is therefore usually called with a single variable
filled in while the others are None, which is not suitable for an ML model that expects an
entire matrix of multiple variables over an entire time window.
To ensure a regular rate and the synchronicity of inputs, AI Inference Server supports inter
signal alignment. You can specify a time interval and receive the inputs for all variables
sampled in that interval. In our example, you would specify 5 seconds as the time interval and
receive inputs like the following:
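For example, with the input variables ph1, ph2 and ph3 (values illustrative):

{'ph1': 4732.89, 'ph2': 4654.44, 'ph3': 4835.02}
{'ph1': 4909.13, 'ph2': 4775.16, 'ph3': 4996.67}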
As there is no guarantee that the data source delivers a data point in each interval, there is
still a possibility that some values are missing and set to None in an input row. However, if
the sample rate of inter signal alignment does not exceed the data rate of the sources, your
Python script will mostly be passed complete rows of data.
For details on inter signal alignment, refer to the AI Inference Server user manual.
To learn how to specify inter signal alignment to be applied to the input of an ML pipeline
when it is packaged for deployment to AI Inference Server, see the packaging notebook in the
State Identifier project template and the AI SDK API reference.
However, rows are not yet enough to feed time series ML models if they need data
windows consisting of multiple rows. Since AI Inference Server does not support
accumulating windows, the Python script must take over this task. The key point of the
server's script interface here is that not every input to your Python script results in an output,
because the ML model can only produce output if the received input has just completed a
data window that can be passed to the model to calculate an output.
As described earlier in Returning the result (Page 32), the script can return None while it
accumulates input, and the model cannot calculate a value for the output. For reasons of
compactness, the following diagram shows this for a window size of two.
In real life, this can be even more complex, depending on whether the windows are
overlapping or disjoint. For a concrete implementation of such accumulation logic, see the
Python script provided in the State Identifier project template.
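A simplified sketch of such accumulation logic for disjoint windows of size two (the variable names and the model are illustrative):

WINDOW_SIZE = 2
buffer = []

def process_input(data: dict):
    buffer.append([data['ph1'], data['ph2'], data['ph3']])
    if len(buffer) < WINDOW_SIZE:
        return None                                # window not complete yet: no output
    window = buffer[:WINDOW_SIZE]
    del buffer[:WINDOW_SIZE]                       # disjoint windows; keep a tail for overlapping ones
    prediction = model.predict([sum(window, [])])  # model loaded at initialization
    return {'prediction': str(prediction[0])}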
Please note that you cannot use parallel component execution if your component relies on
building up windows from subsequent data points, as the data points would be distributed to
different instances of the component. You can, however, separate the aggregation of data
windows and the CPU-intensive processing of the windows into their own components, and
enable parallel execution for the latter only.
Please also note, however, that even in this case there is no guarantee that the processing of
the windows finishes in the original sequence of the data. If that is essential, you should
supply the windows with a sequence id in the aggregating component, which you pass on to
the output of the processing component. That way the consumer of the pipeline can recreate
the original sequence.
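A sketch of this pattern (the variable names, the accumulate() helper, and the model are illustrative):

import json

# aggregating component (single instance): attach a sequence id to each window
sequence_id = 0

def process_input(data: dict):
    global sequence_id
    window = accumulate(data)  # accumulation logic as sketched above
    if window is None:
        return None
    sequence_id += 1
    return {'window': json.dumps({'seq': sequence_id, 'rows': window})}

# processing component (parallel instances): pass the sequence id through to the output
def process_input(data: dict):
    payload = json.loads(data['window'])
    prediction = model.predict(payload['rows'])
    return {'prediction': json.dumps({'seq': payload['seq'], 'value': str(prediction[0])})}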
5.11 Examples
The above code generates a pipeline-config.yml that contains, among other things, the
following:
# pipeline-config.yml
components:
  - name: classifier
    entrypoint: entrypoint.py
    version: 1.0.0
    runtime:
      type: python
      version: 3.8
    inputType:
      - name: input_1
        type: Double
      - name: input_2
        type: Double
    outputType:
      - name: class_label
        type: Integer
      - name: confidence
        type: Double
Image Classification
The following script creates an image classification pipeline that consists of a single
component. The pipeline processes images embedded in JSON strings and produces a
classification result as a string. This example is detailed in the Image Classification project
template.
from simaticai import deployment

# create pipeline component and define basic properties
component = deployment.PythonComponent(name='inference', version='1.0.0', python_version='3.8')

component.add_input('vision_payload', 'String')  # define single input variable
component.add_output('prediction', 'String')     # define single output variable

component.add_resources('..', 'entrypoint.py')   # add Python script
component.set_entrypoint('entrypoint.py')        # define above script as entrypoint

component.add_resources('..', 'src/vision_classifier_tflite.py')  # add classifier script used by entrypoint
component.set_requirements("../runtime_requirements_tflite.txt")  # define required Python packages
component.add_resources('..', 'models/classification_mobilnet.tflite')  # add saved model used in classifier

component.set_parallel_steps(2)  # set the number of parallel executors

# create and save pipeline consisting of a single component
pipeline = deployment.Pipeline.from_components([component], name='Image_TFLite_package', version='1.0.0')
pipeline_package_path = pipeline.save('../packages')

# convert pipeline configuration package to edge configuration package
deployment.convert_package(pipeline_package_path)
The above code generates a pipeline_config.yml that contains, among other things:
dataFlowPipeline:
  components:
    - entrypoint: ./entrypoint.py
      inputType:
        - name: vision_payload
          type: String
      name: inference
      outputType:
        - name: prediction
          type: String
      runtime:
        type: python
        version: '3.8'
      version: 1.0.0
  pipelineDag:
    - source: Databus.vision_payload
      target: inference.vision_payload
    - source: inference.prediction
      target: Databus.prediction
  pipelineInputs:
    - name: vision_payload
      type: String
  pipelineOutputs:
    - name: prediction
      type: String
dataFlowPipelineInfo:
  dataFlowPipelineVersion: 1.0.0
  projectName: Image_TFLite_package
The saved pipeline configuration package contains the files listed below. The main folder
contains the YAML files that describe the pipeline. The inference subfolder contains the
files that belong to this component.
Image_TFLite_package_1.0.0/pipeline_config.yml
Image_TFLite_package_1.0.0/datalink_metadata.yml
Image_TFLite_package_1.0.0/inference/entrypoint.py
Image_TFLite_package_1.0.0/inference/requirements.txt
Image_TFLite_package_1.0.0/inference/src/vision_classifier_tflite.py
Image_TFLite_package_1.0.0/inference/models/classification_mobilnet.tflite
result = process_input(input_data)
if result is None:
    answer = {"ready": False, "output": None}
else:
    answer = {"ready": True, "output": json.dumps(result)}
return answer