Observability
Spring AI builds upon the observability features in the Spring ecosystem to provide insights into AI-related operations.
Spring AI provides metrics and tracing capabilities for its core components: ChatClient (including Advisor), ChatModel, EmbeddingModel, ImageModel, and VectorStore.
Low-cardinality keys are added to both metrics and traces, while high-cardinality keys are added only to traces. |
1.0.0-RC1 Breaking Changes: the following configuration properties have been renamed to better reflect their purpose.
|
Chat Client
The spring.ai.chat.client observations are recorded when the ChatClient call() or stream() operations are invoked.
They measure the time spent performing the invocation and propagate the related tracing information.
Name | Description |
---|---|
|
Always |
|
Always |
|
Is the chat model response a stream - |
|
The kind of framework API in Spring AI: |
Name | Description |
---|---|
|
The content of the prompt sent via the chat client. Optional. |
|
Map of advisor parameters. The conversation ID is now included in |
|
List of configured chat client advisors. |
|
Identifier of the conversation when using the chat memory. |
|
Chat client system parameters. Optional. Superseded by |
|
Chat client system text. Optional. Superseded by |
|
Enabled tool function names. Superseded by |
|
List of configured chat client function callbacks. Superseded by |
|
Names of the tools passed to the chat client. |
|
Chat client user parameters. Optional. Superseded by |
|
Chat client user text. Optional. Superseded by |
Prompt Content
The ChatClient prompt content is typically large and may contain sensitive information. For those reasons, it is not exported by default.
Spring AI supports logging the prompt content to help with debugging and troubleshooting.
Property | Description | Default |
---|---|---|
|
Whether to log the chat client prompt content. |
|
If you enable logging of the chat client prompt content, there’s a risk of exposing sensitive or private information. Please be careful! |
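As a sketch, prompt-content logging can be switched on in application.properties. The spring.ai.chat.client.observations.log-prompt key is the one named in the deprecation note in this section; verify it against your Spring AI version:

```properties
# Enable logging of the ChatClient prompt content (disabled by default).
# Caution: logged prompts may contain sensitive information.
spring.ai.chat.client.observations.log-prompt=true
```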
Input Data (Deprecated)
The spring.ai.chat.client.observations.include-input property is deprecated and replaced by spring.ai.chat.client.observations.log-prompt. See Prompt Content.
|
The ChatClient input data is typically large and may contain sensitive information. For those reasons, it is not exported by default.
Spring AI supports logging input data to help with debugging and troubleshooting.
Property | Description | Default |
---|---|---|
|
Whether to include the input content in the observations. |
|
If you enable the inclusion of the input content in the observations, there’s a risk of exposing sensitive or private information. Please be careful! |
Chat Client Advisors
The spring.ai.advisor observations are recorded when an advisor is executed. They measure the time spent in the advisor (including the time spent in the inner advisors) and propagate the related tracing information.
Name | Description |
---|---|
|
Always |
|
Always |
|
Where the advisor applies its logic in the request processing, one of |
|
The kind of framework API in Spring AI: |
Name | Description |
---|---|
|
Name of the advisor. |
|
Advisor order in the advisor chain. |
Chat Model
Observability features are currently supported only for ChatModel implementations from the following AI model
providers: Anthropic, Azure OpenAI, Mistral AI, Ollama, OpenAI, Vertex AI, MiniMax, Moonshot, QianFan, and ZhiPu AI.
Additional AI model providers will be supported in a future release.
|
The gen_ai.client.operation observations are recorded when calling the ChatModel call or stream methods.
They measure the time spent on method completion and propagate the related tracing information.
The gen_ai.client.token.usage metric measures the number of input and output tokens used by a single model call.
|
Name | Description |
---|---|
|
The name of the operation being performed. |
|
The model provider as identified by the client instrumentation. |
|
The name of the model a request is being made to. |
|
The name of the model that generated the response. |
Name | Description |
---|---|
|
The frequency penalty setting for the model request. |
|
The maximum number of tokens the model generates for a request. |
|
The presence penalty setting for the model request. |
|
List of sequences that the model will use to stop generating further tokens. |
|
The temperature setting for the model request. |
|
The top_k sampling setting for the model request. |
|
The top_p sampling setting for the model request. |
|
Reasons the model stopped generating tokens, corresponding to each generation received. |
|
The unique identifier for the AI response. |
|
The number of tokens used in the model input (prompt). |
|
The number of tokens used in the model output (completion). |
|
The total number of tokens used in the model exchange. |
|
The full prompt sent to the model. Optional. |
|
The full response received from the model. Optional. |
|
List of tool definitions provided to the model in the request. |
For measuring token usage, the previous table lists the values present in an observation trace.
Use the gen_ai.client.token.usage metric provided by the ChatModel.
|
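To inspect this metric at runtime, one option is to expose the Micrometer metrics endpoint through Spring Boot Actuator. A minimal sketch, assuming the standard spring-boot-starter-actuator dependency is on the classpath:

```properties
# Expose the Actuator metrics endpoint over HTTP.
management.endpoints.web.exposure.include=health,metrics
```

The token counts should then be visible at /actuator/metrics/gen_ai.client.token.usage.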
Chat Prompt and Completion Data
The chat prompt and completion data is typically large and may contain sensitive information. For those reasons, it is not exported by default.
Spring AI supports logging chat prompt and completion data, which is useful for troubleshooting scenarios. When tracing is available, the logs include trace information for better correlation.
Property | Description | Default |
---|---|---|
|
Log the prompt content. |
|
|
Log the completion content. |
|
|
Include error logging in observations. |
|
If you enable logging of the chat prompt and completion data, there’s a risk of exposing sensitive or private information. Please be careful! |
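As a sketch, the three logging switches might look like this in application.properties; the exact property keys are assumptions based on the naming pattern used elsewhere in Spring AI, so verify them against your version:

```properties
# Assumed keys -- verify against your Spring AI version.
# Log the full prompt sent to the chat model (default: false).
spring.ai.chat.observations.log-prompt=true
# Log the completion returned by the chat model (default: false).
spring.ai.chat.observations.log-completion=true
# Include error logging in observations (default: false).
spring.ai.chat.observations.include-error-logging=true
```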
Tool Calling
The spring.ai.tool observations are recorded when performing tool calling in the context of a chat model interaction. They measure the time spent on tool call completion and propagate the related tracing information.
Name | Description |
---|---|
|
The name of the operation being performed. It’s always |
|
The provider responsible for the operation. It’s always |
|
The kind of operation performed by Spring AI. It’s always |
|
The name of the tool. |
Name | Description |
---|---|
|
Description of the tool. |
|
Schema of the parameters used to call the tool. |
|
The input arguments to the tool call. (Only when enabled) |
|
The result of the tool call. (Only when enabled) |
Tool Call Arguments and Result Data
The input arguments and result from the tool call are not exported by default, as they can be potentially sensitive.
Spring AI supports exporting tool call arguments and result data as span attributes.
Property | Description | Default |
---|---|---|
|
Include the tool call content in observations. |
|
If you enable the inclusion of the tool call arguments and result in the observations, there’s a risk of exposing sensitive or private information. Please be careful! |
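A minimal application.properties sketch; the property key is an assumption based on the naming pattern used elsewhere in Spring AI, so verify it against your version:

```properties
# Assumed key -- verify against your Spring AI version.
# Export tool call arguments and result as span attributes (default: false).
spring.ai.tools.observations.include-content=true
```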
EmbeddingModel
Observability features are currently supported only for EmbeddingModel implementations from the following
AI model providers: Azure OpenAI, Mistral AI, Ollama, and OpenAI.
Additional AI model providers will be supported in a future release.
|
The gen_ai.client.operation observations are recorded on embedding model method calls.
They measure the time spent on method completion and propagate the related tracing information.
The gen_ai.client.token.usage metric measures the number of input and output tokens used by a single model call.
|
Name | Description |
---|---|
|
The name of the operation being performed. |
|
The model provider as identified by the client instrumentation. |
|
The name of the model a request is being made to. |
|
The name of the model that generated the response. |
Name | Description |
---|---|
|
The number of dimensions the resulting output embeddings have. |
|
The number of tokens used in the model input. |
|
The total number of tokens used in the model exchange. |
For measuring token usage, the previous table lists the values present in an observation trace.
Use the gen_ai.client.token.usage metric provided by the EmbeddingModel.
|
Image Model
Observability features are currently supported only for ImageModel implementations from the following AI model
provider: OpenAI.
Additional AI model providers will be supported in a future release.
|
The gen_ai.client.operation observations are recorded on image model method calls.
They measure the time spent on method completion and propagate the related tracing information.
The gen_ai.client.token.usage metric measures the number of input and output tokens used by a single model call.
|
Name | Description |
---|---|
|
The name of the operation being performed. |
|
The model provider as identified by the client instrumentation. |
|
The name of the model a request is being made to. |
Name | Description |
---|---|
|
The format in which the generated image is returned. |
|
The size of the image to generate. |
|
The style of the image to generate. |
|
The unique identifier for the AI response. |
|
The name of the model that generated the response. |
|
The number of tokens used in the model input (prompt). |
|
The number of tokens used in the model output (generation). |
|
The total number of tokens used in the model exchange. |
|
The full prompt sent to the model. Optional. |
For measuring token usage, the previous table lists the values present in an observation trace.
Use the gen_ai.client.token.usage metric provided by the ImageModel.
|
Image Prompt Data
The image prompt data is typically large and may contain sensitive information. For those reasons, it is not exported by default.
Spring AI supports logging image prompt data, which is useful for troubleshooting scenarios. When tracing is available, the logs include trace information for better correlation.
Property | Description | Default |
---|---|---|
|
Log the image prompt content. |
|
If you enable logging of the image prompt data, there’s a risk of exposing sensitive or private information. Please be careful! |
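A minimal application.properties sketch; the property key is an assumption based on the naming pattern used elsewhere in Spring AI, so verify it against your version:

```properties
# Assumed key -- verify against your Spring AI version.
# Log the image prompt content (default: false).
spring.ai.image.observations.log-prompt=true
```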
Vector Stores
All vector store implementations in Spring AI are instrumented to provide metrics and distributed tracing data through Micrometer.
The db.vector.client.operation observations are recorded when interacting with the Vector Store.
They measure the time spent on the query, add, and remove operations and propagate the related tracing information.
Name | Description |
---|---|
|
The name of the operation or command being executed. One of |
|
The database management system (DBMS) product as identified by the client instrumentation. One of |
|
The kind of framework API in Spring AI: |
Name | Description |
---|---|
|
The name of a collection (table, container) within the database. |
|
The name of the database, fully qualified within the server address and port. |
|
The record identifier if present. |
|
The metric used in similarity search. |
|
The dimension of the vector. |
|
The name of the vector field (e.g. a field name). |
|
The content of the search query being executed. |
|
The metadata filters used in the search query. |
|
Returned documents from a similarity search query. Optional. |
|
Similarity threshold that accepts all search scores. A threshold value of 0.0 means any similarity is accepted (effectively disabling similarity-threshold filtering); a value of 1.0 means an exact match is required. |
|
The top-k most similar vectors returned by a query. |
Response Data
The vector search response data is typically large and may contain sensitive information. For those reasons, it is not exported by default.
Spring AI supports logging vector search response data, which is useful for troubleshooting scenarios. When tracing is available, the logs include trace information for better correlation.
Property | Description | Default |
---|---|---|
|
Log the vector store query response content. |
|
If you enable logging of the vector search response data, there’s a risk of exposing sensitive or private information. Please be careful! |
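A minimal application.properties sketch; the property key is an assumption based on the naming pattern used elsewhere in Spring AI, so verify it against your version:

```properties
# Assumed key -- verify against your Spring AI version.
# Log the vector store query response content (default: false).
spring.ai.vectorstore.observations.log-query-response=true
```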