Elasticsearch API

Base URL
http://api.example.com

Elasticsearch provides REST APIs that are used by the UI components and can be called directly to configure and access Elasticsearch features.

Documentation source and versions

This documentation is derived from the main branch of the elasticsearch-specification repository. It is provided under the Attribution-NonCommercial-NoDerivatives 4.0 International license. This documentation contains work-in-progress information for future Elastic Stack releases.

Last updated on Aug 20, 2025.

This API is provided under the Apache 2.0 license.

Authentication

The API accepts three different authentication methods:

API key auth (http_api_key)

Elasticsearch APIs support key-based authentication. You must create an API key and use the encoded value in the request header. For example:

curl -X GET "${ES_URL}/_cat/indices?v=true" \
  -H "Authorization: ApiKey ${API_KEY}"

To get API keys, use the /_security/api_key APIs.
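
For example, you might create a key with the create API key endpoint and use the `encoded` value from its response as the header value. The key name `my-example-key` and the ELASTIC_PASSWORD variable below are illustrative placeholders:

curl -X POST "${ES_URL}/_security/api_key" \
  -u "elastic:${ELASTIC_PASSWORD}" \
  -H "Content-Type: application/json" \
  -d '{"name": "my-example-key"}'

The response includes an `encoded` field; pass that value after the ApiKey keyword in the Authorization header, as shown above.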

Basic auth (http)

Basic auth tokens are constructed with the Basic keyword, followed by a space, followed by a base64-encoded username:password string (the username and password separated by a colon).

Example: send an Authorization: Basic aGVsbG86aGVsbG8= HTTP header with your requests to authenticate with the API.
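
As a quick sketch, you can build the token in a shell and send it yourself, or let curl construct the header with its -u option. The hello:hello credentials are only an example and match the encoded value above:

echo -n 'hello:hello' | base64
curl -X GET "${ES_URL}/_cat/indices?v=true" \
  -H "Authorization: Basic aGVsbG86aGVsbG8="
curl -X GET "${ES_URL}/_cat/indices?v=true" -u 'hello:hello'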

Bearer auth (http)

Elasticsearch APIs support the use of bearer tokens in the Authorization HTTP header to authenticate with the API. For examples, refer to Token-based authentication services.
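
As a minimal sketch, you could request a token from the token API (one of the token-based authentication services) and pass its access_token value in the header; the username and password values are placeholders:

curl -X POST "${ES_URL}/_security/oauth2/token" \
  -u "elastic:${ELASTIC_PASSWORD}" \
  -H "Content-Type: application/json" \
  -d '{"grant_type": "password", "username": "<username>", "password": "<password>"}'

curl -X GET "${ES_URL}/_cat/indices?v=true" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}"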

Autoscaling

The autoscaling APIs enable you to create and manage autoscaling policies and retrieve information about autoscaling capacity. Autoscaling adjusts resources based on demand. A deployment can use autoscaling to scale resources as needed, ensuring sufficient capacity to meet workload requirements.

Get an autoscaling policy Generally available; Added in 7.11.0

GET /_autoscaling/policy/{name}

NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.

External documentation

Path parameters

  • name string Required

    The name of the autoscaling policy.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    • roles array[string] Required
    • deciders object Required

      Decider settings.

      External documentation
      • * object Additional properties
GET /_autoscaling/policy/my_autoscaling_policy
resp = client.autoscaling.get_autoscaling_policy(
    name="my_autoscaling_policy",
)
const response = await client.autoscaling.getAutoscalingPolicy({
  name: "my_autoscaling_policy",
});
response = client.autoscaling.get_autoscaling_policy(
  name: "my_autoscaling_policy"
)
$resp = $client->autoscaling()->getAutoscalingPolicy([
    "name" => "my_autoscaling_policy",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_autoscaling/policy/my_autoscaling_policy"
client.autoscaling().getAutoscalingPolicy(g -> g
    .name("my_autoscaling_policy")
);
Response examples (200)
This may be a response to `GET /_autoscaling/policy/my_autoscaling_policy`.
{
   "roles": <roles>,
   "deciders": <deciders>
}

Create or update an autoscaling policy Generally available; Added in 7.11.0

PUT /_autoscaling/policy/{name}

NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.

External documentation

Path parameters

  • name string Required

    The name of the autoscaling policy.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

application/json

Body Required

  • roles array[string] Required
  • deciders object Required

    Decider settings.

    External documentation
    • * object Additional properties

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /_autoscaling/policy/<name>
{
  "roles": [],
  "deciders": {
    "fixed": {
    }
  }
}
resp = client.autoscaling.put_autoscaling_policy(
    name="<name>",
    policy={
        "roles": [],
        "deciders": {
            "fixed": {}
        }
    },
)
const response = await client.autoscaling.putAutoscalingPolicy({
  name: "<name>",
  policy: {
    roles: [],
    deciders: {
      fixed: {},
    },
  },
});
response = client.autoscaling.put_autoscaling_policy(
  name: "<name>",
  body: {
    "roles": [],
    "deciders": {
      "fixed": {}
    }
  }
)
$resp = $client->autoscaling()->putAutoscalingPolicy([
    "name" => "<name>",
    "body" => [
        "roles" => array(
        ),
        "deciders" => [
            "fixed" => new ArrayObject([]),
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"roles":[],"deciders":{"fixed":{}}}' "$ELASTICSEARCH_URL/_autoscaling/policy/<name>"
client.autoscaling().putAutoscalingPolicy(p -> p
    .name("<name>")
    .policy(po -> po
        .deciders("fixed", JsonData.fromJson("{}"))
    )
);
Request examples
{
  "roles": [],
  "deciders": {
    "fixed": {
    }
  }
}
This example, sent as `PUT /_autoscaling/policy/my_autoscaling_policy`, creates `my_autoscaling_policy` using the fixed autoscaling decider, applying to the set of nodes having (only) the `data_hot` role.
{
  "roles" : [ "data_hot" ],
  "deciders": {
    "fixed": {
    }
  }
}
Response examples (200)
{
  "acknowledged": true
}

Delete an autoscaling policy Generally available; Added in 7.11.0

DELETE /_autoscaling/policy/{name}

NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.

External documentation

Path parameters

  • name string Required

    The name of the autoscaling policy.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_autoscaling/policy/{name}
DELETE /_autoscaling/policy/*
resp = client.autoscaling.delete_autoscaling_policy(
    name="*",
)
const response = await client.autoscaling.deleteAutoscalingPolicy({
  name: "*",
});
response = client.autoscaling.delete_autoscaling_policy(
  name: "*"
)
$resp = $client->autoscaling()->deleteAutoscalingPolicy([
    "name" => "*",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_autoscaling/policy/*"
client.autoscaling().deleteAutoscalingPolicy(d -> d
    .name("*")
);
Response examples (200)
This may be a response to either `DELETE /_autoscaling/policy/my_autoscaling_policy` or `DELETE /_autoscaling/policy/*`.
{
  "acknowledged": true
}

Get the autoscaling capacity Generally available; Added in 7.11.0

GET /_autoscaling/capacity

NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.

This API gets the current autoscaling capacity based on the configured autoscaling policy. It will return information to size the cluster appropriately to the current workload.

The required_capacity is calculated as the maximum of the required_capacity result of all individual deciders that are enabled for the policy.

The operator should verify that the current_nodes match the operator’s knowledge of the cluster to avoid making autoscaling decisions based on stale or incomplete information.

The response contains decider-specific information you can use to diagnose how and why autoscaling determined a certain capacity was required. This information is provided for diagnosis only. Do not use this information to make autoscaling decisions.

External documentation

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    • policies object Required
      • * object Additional properties
        • required_capacity object Required
          • node object Required
            • storage number Required
            • memory number Required
          • total object Required
            • storage number Required
            • memory number Required
        • current_capacity object Required
          • node object Required
            • storage number Required
            • memory number Required
          • total object Required
            • storage number Required
            • memory number Required
        • current_nodes array[object] Required
          • name string Required
        • deciders object Required
          • * object Additional properties
              • node object Required
              • total object Required
            • reason_summary string
            • reason_details object
GET /_autoscaling/capacity
resp = client.autoscaling.get_autoscaling_capacity()
const response = await client.autoscaling.getAutoscalingCapacity();
response = client.autoscaling.get_autoscaling_capacity
$resp = $client->autoscaling()->getAutoscalingCapacity();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_autoscaling/capacity"
client.autoscaling().getAutoscalingCapacity(g -> g);
Response examples (200)
This may be a response to `GET /_autoscaling/capacity`.
{
  "policies": {}
}

Behavioral analytics

The behavioral analytics APIs let you create and manage analytics collections and view their data. Use them to analyze users’ search and click behavior, improve result relevance, and identify content gaps.

Get behavioral analytics collections Deprecated Technical preview; Added in 8.8.0

GET /_application/analytics/{name}

All methods and paths for this operation:

GET /_application/analytics

GET /_application/analytics/{name}

Path parameters

  • name array[string] Required

    A list of analytics collections to limit the returned information

Responses

  • 200 application/json
    • * object Additional properties
      • event_data_stream object Required
        • name string Required
GET /_application/analytics/{name}
GET _application/analytics/my*
resp = client.search_application.get_behavioral_analytics(
    name="my*",
)
const response = await client.searchApplication.getBehavioralAnalytics({
  name: "my*",
});
response = client.search_application.get_behavioral_analytics(
  name: "my*"
)
$resp = $client->searchApplication()->getBehavioralAnalytics([
    "name" => "my*",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_application/analytics/my*"
client.searchApplication().getBehavioralAnalytics(g -> g
    .name("my*")
);
Response examples (200)
A successful response from `GET _application/analytics/my*`
{
  "my_analytics_collection": {
      "event_data_stream": {
          "name": "behavioral_analytics-events-my_analytics_collection"
      }
  },
  "my_analytics_collection2": {
      "event_data_stream": {
          "name": "behavioral_analytics-events-my_analytics_collection2"
      }
  }
}

Create a behavioral analytics collection Deprecated Technical preview; Added in 8.8.0

PUT /_application/analytics/{name}

Path parameters

  • name string Required

    The name of the analytics collection to be created or updated.

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

    • name string Required
PUT /_application/analytics/{name}
PUT _application/analytics/my_analytics_collection
resp = client.search_application.put_behavioral_analytics(
    name="my_analytics_collection",
)
const response = await client.searchApplication.putBehavioralAnalytics({
  name: "my_analytics_collection",
});
response = client.search_application.put_behavioral_analytics(
  name: "my_analytics_collection"
)
$resp = $client->searchApplication()->putBehavioralAnalytics([
    "name" => "my_analytics_collection",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_application/analytics/my_analytics_collection"
client.searchApplication().putBehavioralAnalytics(p -> p
    .name("my_analytics_collection")
);

Delete a behavioral analytics collection Deprecated Technical preview; Added in 8.8.0

DELETE /_application/analytics/{name}

The associated data stream is also deleted.

Path parameters

  • name string Required

    The name of the analytics collection to be deleted

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_application/analytics/{name}
DELETE _application/analytics/my_analytics_collection/
resp = client.search_application.delete_behavioral_analytics(
    name="my_analytics_collection",
)
const response = await client.searchApplication.deleteBehavioralAnalytics({
  name: "my_analytics_collection",
});
response = client.search_application.delete_behavioral_analytics(
  name: "my_analytics_collection"
)
$resp = $client->searchApplication()->deleteBehavioralAnalytics([
    "name" => "my_analytics_collection",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_application/analytics/my_analytics_collection/"
client.searchApplication().deleteBehavioralAnalytics(d -> d
    .name("my_analytics_collection")
);

Create a behavioral analytics collection event Deprecated Technical preview

POST /_application/analytics/{collection_name}/event/{event_type}

External documentation

Path parameters

  • collection_name string Required

    The name of the behavioral analytics collection.

  • event_type string

    The analytics event type.

    Values are page_view, search, or search_click.

Query parameters

  • debug boolean

    Whether the response should include more details.

application/json

Body Required

object object

Responses

  • 200 application/json
    • accepted boolean Required
    • event object
POST /_application/analytics/{collection_name}/event/{event_type}
POST _application/analytics/my_analytics_collection/event/search_click
{
  "session": {
    "id": "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"
  },
  "user": {
    "id": "5f26f01a-bbee-4202-9298-81261067abbd"
  },
  "search":{
    "query": "search term",
    "results": {
      "items": [
        {
          "document": {
            "id": "123",
            "index": "products"
          }
        }
      ],
      "total_results": 10
    },
    "sort": {
      "name": "relevance"
    },
    "search_application": "website"
  },
  "document":{
    "id": "123",
    "index": "products"
  }
}
resp = client.search_application.post_behavioral_analytics_event(
    collection_name="my_analytics_collection",
    event_type="search_click",
    payload={
        "session": {
            "id": "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"
        },
        "user": {
            "id": "5f26f01a-bbee-4202-9298-81261067abbd"
        },
        "search": {
            "query": "search term",
            "results": {
                "items": [
                    {
                        "document": {
                            "id": "123",
                            "index": "products"
                        }
                    }
                ],
                "total_results": 10
            },
            "sort": {
                "name": "relevance"
            },
            "search_application": "website"
        },
        "document": {
            "id": "123",
            "index": "products"
        }
    },
)
const response = await client.searchApplication.postBehavioralAnalyticsEvent({
  collection_name: "my_analytics_collection",
  event_type: "search_click",
  payload: {
    session: {
      id: "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9",
    },
    user: {
      id: "5f26f01a-bbee-4202-9298-81261067abbd",
    },
    search: {
      query: "search term",
      results: {
        items: [
          {
            document: {
              id: "123",
              index: "products",
            },
          },
        ],
        total_results: 10,
      },
      sort: {
        name: "relevance",
      },
      search_application: "website",
    },
    document: {
      id: "123",
      index: "products",
    },
  },
});
response = client.search_application.post_behavioral_analytics_event(
  collection_name: "my_analytics_collection",
  event_type: "search_click",
  body: {
    "session": {
      "id": "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"
    },
    "user": {
      "id": "5f26f01a-bbee-4202-9298-81261067abbd"
    },
    "search": {
      "query": "search term",
      "results": {
        "items": [
          {
            "document": {
              "id": "123",
              "index": "products"
            }
          }
        ],
        "total_results": 10
      },
      "sort": {
        "name": "relevance"
      },
      "search_application": "website"
    },
    "document": {
      "id": "123",
      "index": "products"
    }
  }
)
$resp = $client->searchApplication()->postBehavioralAnalyticsEvent([
    "collection_name" => "my_analytics_collection",
    "event_type" => "search_click",
    "body" => [
        "session" => [
            "id" => "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9",
        ],
        "user" => [
            "id" => "5f26f01a-bbee-4202-9298-81261067abbd",
        ],
        "search" => [
            "query" => "search term",
            "results" => [
                "items" => array(
                    [
                        "document" => [
                            "id" => "123",
                            "index" => "products",
                        ],
                    ],
                ),
                "total_results" => 10,
            ],
            "sort" => [
                "name" => "relevance",
            ],
            "search_application" => "website",
        ],
        "document" => [
            "id" => "123",
            "index" => "products",
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"session":{"id":"1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"},"user":{"id":"5f26f01a-bbee-4202-9298-81261067abbd"},"search":{"query":"search term","results":{"items":[{"document":{"id":"123","index":"products"}}],"total_results":10},"sort":{"name":"relevance"},"search_application":"website"},"document":{"id":"123","index":"products"}}' "$ELASTICSEARCH_URL/_application/analytics/my_analytics_collection/event/search_click"
client.searchApplication().postBehavioralAnalyticsEvent(p -> p
    .collectionName("my_analytics_collection")
    .eventType(EventType.SearchClick)
    .payload(JsonData.fromJson("{\"session\":{\"id\":\"1797ca95-91c9-4e2e-b1bd-9c38e6f386a9\"},\"user\":{\"id\":\"5f26f01a-bbee-4202-9298-81261067abbd\"},\"search\":{\"query\":\"search term\",\"results\":{\"items\":[{\"document\":{\"id\":\"123\",\"index\":\"products\"}}],\"total_results\":10},\"sort\":{\"name\":\"relevance\"},\"search_application\":\"website\"},\"document\":{\"id\":\"123\",\"index\":\"products\"}}"))
);
Request example
Run `POST _application/analytics/my_analytics_collection/event/search_click` to send a `search_click` event to an analytics collection called `my_analytics_collection`.
{
  "session": {
    "id": "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"
  },
  "user": {
    "id": "5f26f01a-bbee-4202-9298-81261067abbd"
  },
  "search":{
    "query": "search term",
    "results": {
      "items": [
        {
          "document": {
            "id": "123",
            "index": "products"
          }
        }
      ],
      "total_results": 10
    },
    "sort": {
      "name": "relevance"
    },
    "search_application": "website"
  },
  "document":{
    "id": "123",
    "index": "products"
  }
}

Compact and aligned text (CAT)

The compact and aligned text (CAT) APIs are intended only for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, it's recommended to use a corresponding JSON API. All the cat commands accept a query string parameter help to see all the headers and info they provide, and the /_cat command alone lists all the available commands.
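
For instance, a sketch of that workflow with curl, using the cat aliases API as the example command and the same placeholders as the examples below: list the available commands, ask one command for its available columns with help, then pick columns with h and sort them with s.

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat"
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/aliases?help"
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/aliases?v=true&h=alias,index&s=alias:desc"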

Get aliases Generally available

GET /_cat/aliases/{name}

All methods and paths for this operation:

GET /_cat/aliases

GET /_cat/aliases/{name}

Get the cluster's index aliases, including filter and routing information. This API does not return data stream aliases.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use the aliases API.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • name string | array[string]

    A comma-separated list of aliases to retrieve. Supports wildcards (*). To retrieve all aliases, omit this parameter or use * or _all.

Query parameters

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • alias (or a): The name of the alias.
    • index (or i, idx): The name of the index the alias points to.
    • filter (or f, fi): The filter applied to the alias.
    • routing.index (or ri, routingIndex): Index routing value for the alias.
    • routing.search (or rs, routingSearch): Search routing value for the alias.
    • is_write_index (or w, isWriteIndex): Indicates if the index is the write index for the alias.

    Values are alias, a, index, i, idx, filter, f, fi, routing.index, ri, routingIndex, routing.search, rs, routingSearch, is_write_index, w, or isWriteIndex.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never time out, you can set it to -1.

    Values are -1 or 0.

Responses

  • 200 application/json
    • alias string

      alias name

    • index string
    • filter string

      filter

    • routing.index string

      index routing

    • routing.search string

      search routing

    • is_write_index string

      write index

GET _cat/aliases?format=json&v=true
resp = client.cat.aliases(
    format="json",
    v=True,
)
const response = await client.cat.aliases({
  format: "json",
  v: "true",
});
response = client.cat.aliases(
  format: "json",
  v: "true"
)
$resp = $client->cat()->aliases([
    "format" => "json",
    "v" => "true",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/aliases?format=json&v=true"
client.cat().aliases();
Response examples (200)
A successful response from `GET _cat/aliases?format=json&v=true`. This response shows that `alias2` has configured a filter and `alias3` and `alias4` have routing configurations.
[
  {
    "alias": "alias1",
    "index": "test1",
    "filter": "-",
    "routing.index": "-",
    "routing.search": "-",
    "is_write_index": "true"
  },
  {
    "alias": "alias1",
    "index": "test1",
    "filter": "*",
    "routing.index": "-",
    "routing.search": "-",
    "is_write_index": "true"
  },
  {
    "alias": "alias3",
    "index": "test1",
    "filter": "-",
    "routing.index": "1",
    "routing.search": "1",
    "is_write_index": "true"
  },
  {
    "alias": "alias4",
    "index": "test1",
    "filter": "-",
    "routing.index": "2",
    "routing.search": "1,2",
    "is_write_index": "true"
  }
]

Get shard allocation information Generally available

GET /_cat/allocation/{node_id}

All methods and paths for this operation:

GET /_cat/allocation

GET /_cat/allocation/{node_id}

Get a snapshot of the number of shards allocated to each data node and their disk space.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.

Required authorization

  • Cluster privileges: monitor

Path parameters

  • node_id string | array[string]

    A comma-separated list of node identifiers or names used to limit the returned information.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • shards (or s): The number of shards on the node.
    • shards.undesired: The number of shards scheduled to be moved elsewhere in the cluster.
    • write_load.forecast (or wlf, writeLoadForecast): The sum of index write load forecasts.
    • disk.indices.forecast (or dif, diskIndicesForecast): The sum of shard size forecasts.
    • disk.indices (or di, diskIndices): The disk space used by Elasticsearch indices.
    • disk.used (or du, diskUsed): The total disk space used on the node.
    • disk.avail (or da, diskAvail): The available disk space on the node.
    • disk.total (or dt, diskTotal): The total disk capacity of all volumes on the node.
    • disk.percent (or dp, diskPercent): The percentage of disk space used on the node.
    • host (or h): The host of the node.
    • ip: The IP address of the node.
    • node (or n): The name of the node.
    • node.role (or r, role, nodeRole): The roles assigned to the node.

    Values are shards, s, shards.undesired, write_load.forecast, wlf, writeLoadForecast, disk.indices.forecast, dif, diskIndicesForecast, disk.indices, di, diskIndices, disk.used, du, diskUsed, disk.avail, da, diskAvail, disk.total, dt, diskTotal, disk.percent, dp, diskPercent, host, h, ip, node, n, node.role, r, role, or nodeRole.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

GET /_cat/allocation?v=true&format=json
resp = client.cat.allocation(
    v=True,
    format="json",
)
const response = await client.cat.allocation({
  v: "true",
  format: "json",
});
response = client.cat.allocation(
  v: "true",
  format: "json"
)
$resp = $client->cat()->allocation([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/allocation?v=true&format=json"
client.cat().allocation();
Response examples (200)
A successful response from `GET /_cat/allocation?v=true&format=json`. It shows a single shard is allocated to the one node available.
[
  {
    "shards": "1",
    "shards.undesired": "0",
    "write_load.forecast": "0.0",
    "disk.indices.forecast": "260b",
    "disk.indices": "260b",
    "disk.used": "47.3gb",
    "disk.avail": "43.4gb",
    "disk.total": "100.7gb",
    "disk.percent": "46",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "CSUXak2",
    "node.role": "himrst"
  }
]

Get component templates Generally available; Added in 5.1.0

GET /_cat/component_templates/{name}

All methods and paths for this operation:

GET /_cat/component_templates

GET /_cat/component_templates/{name}

Get information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.

Required authorization

  • Cluster privileges: monitor

Path parameters

  • name string Required

    The name of the component template. It accepts wildcard expressions. If it is omitted, all component templates are returned.

Query parameters

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • name (or n): The name of the component template.
    • version (or v): The version number of the component template.
    • alias_count (or a): The number of aliases in the component template.
    • mapping_count (or m): The number of mappings in the component template.
    • settings_count (or s): The number of settings in the component template.
    • metadata_count (or me): The number of metadata entries in the component template.
    • included_in (or i): The index templates that include this component template.

    Values are name, n, version, v, alias_count, a, mapping_count, m, settings_count, s, metadata_count, me, included_in, or i.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    The period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    • name string Required
    • version string | null Required

    • alias_count string Required
    • mapping_count string Required
    • settings_count string Required
    • metadata_count string Required
    • included_in string Required
GET /_cat/component_templates/{name}
GET _cat/component_templates/my-template-*?v=true&s=name&format=json
resp = client.cat.component_templates(
    name="my-template-*",
    v=True,
    s="name",
    format="json",
)
const response = await client.cat.componentTemplates({
  name: "my-template-*",
  v: "true",
  s: "name",
  format: "json",
});
response = client.cat.component_templates(
  name: "my-template-*",
  v: "true",
  s: "name",
  format: "json"
)
$resp = $client->cat()->componentTemplates([
    "name" => "my-template-*",
    "v" => "true",
    "s" => "name",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/component_templates/my-template-*?v=true&s=name&format=json"
client.cat().componentTemplates();
Response examples (200)
A successful response from `GET _cat/component_templates/my-template-*?v=true&s=name&format=json`.
[
  {
    "name": "my-template-1",
    "version": "null",
    "alias_count": "0",
    "mapping_count": "0",
    "settings_count": "1",
    "metadata_count": "0",
    "included_in": "[my-index-template]"
  },
  {
    "name": "my-template-2",
    "version": null,
    "alias_count": "0",
    "mapping_count": "3",
    "settings_count": "0",
    "metadata_count": "0",
    "included_in": "[my-index-template]"
  }
]

Get a document count Generally available

GET /_cat/count/{index}

All methods and paths for this operation:

GET /_cat/count

GET /_cat/count/{index}

Get quick access to a document count for a data stream, an index, or an entire cluster. The document count only includes live documents, not deleted documents which have not yet been removed by the merge process.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the count API.

Required authorization

  • Index privileges: read

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. It supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • epoch (or t, time): The Unix epoch time in seconds since 1970-01-01 00:00:00.
    • timestamp (or ts, hms, hhmmss): The current time in HH:MM:SS format.
    • count (or dc, docs.count, docsCount): The document count in the cluster or index.

    Values are epoch, t, time, timestamp, ts, hms, hhmmss, count, dc, docs.count, or docsCount.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
    • epoch number | string

      Some APIs will return values such as numbers also as a string (notably epoch timestamps). This type is used to capture this behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      One of:

      Time unit for seconds

    • timestamp string

      Time of day, expressed as HH:MM:SS

    • count string

      the document count

GET /_cat/count/my-index-000001?v=true&format=json
resp = client.cat.count(
    index="my-index-000001",
    v=True,
    format="json",
)
const response = await client.cat.count({
  index: "my-index-000001",
  v: "true",
  format: "json",
});
response = client.cat.count(
  index: "my-index-000001",
  v: "true",
  format: "json"
)
$resp = $client->cat()->count([
    "index" => "my-index-000001",
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/count/my-index-000001?v=true&format=json"
client.cat().count();
Response examples (200)
A successful response from `GET /_cat/count/my-index-000001?v=true&format=json`. It retrieves the document count for the `my-index-000001` data stream or index.
[
  {
    "epoch": "1475868259",
    "timestamp": "15:24:20",
    "count": "120"
  }
]
A successful response from `GET /_cat/count?v=true&format=json`. It retrieves the document count for all data streams and indices in the cluster.
[
  {
    "epoch": "1475868259",
    "timestamp": "15:24:20",
    "count": "121"
  }
]

Get field data cache information Generally available

GET /_cat/fielddata/{fields}

All methods and paths for this operation:

GET /_cat/fielddata

GET /_cat/fielddata/{fields}

Get the amount of heap memory currently used by the field data cache on every data node in the cluster.

IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes stats API.

Required authorization

  • Cluster privileges: monitor

Path parameters

  • fields string | array[string] Required

    Comma-separated list of fields used to limit returned information. To retrieve all fields, omit this parameter.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • fields string | array[string]

    Comma-separated list of fields used to limit returned information.

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • id: The node ID.
    • host (or h): The host name of the node.
    • ip: The IP address of the node.
    • node (or n): The node name.
    • field (or f): The field name.
    • size (or s): The field data usage.

    Values are id, host, h, ip, node, n, field, f, size, or s.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
    • id string

      node id

    • host string

      host name

    • ip string

      ip address

    • node string

      node name

    • field string

      field name

    • size string

      field data usage

GET /_cat/fielddata?v=true&fields=body&format=json
resp = client.cat.fielddata(
    v=True,
    fields="body",
    format="json",
)
const response = await client.cat.fielddata({
  v: "true",
  fields: "body",
  format: "json",
});
response = client.cat.fielddata(
  v: "true",
  fields: "body",
  format: "json"
)
$resp = $client->cat()->fielddata([
    "v" => "true",
    "fields" => "body",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/fielddata?v=true&fields=body&format=json"
client.cat().fielddata();
Response examples (200)
A successful response from `GET /_cat/fielddata?v=true&fields=body&format=json`. You can specify an individual field in the request body or URL path. This example retrieves heap memory size information for the `body` field.
[
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "body",
    "size": "544b"
  }
]
A successful response from `GET /_cat/fielddata/body,soul?v=true&format=json`. You can specify a comma-separated list of fields in the request body or URL path. This example retrieves heap memory size information for the `body` and `soul` fields. To get information for all fields, run `GET /_cat/fielddata?v=true`.
[
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "1127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "body",
    "size": "544b"
  },
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "soul",
    "size": "480b"
  }
]

Get the cluster health status Generally available

GET /_cat/health

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the cluster health API. This API is often used to check malfunctioning clusters. To help you track cluster health alongside log files and alerting systems, the API returns timestamps in two formats: HH:MM:SS, which is human-readable but includes no date information, and Unix epoch time, which is machine-sortable and includes date information. The latter format is useful for cluster recoveries that take multiple days. You can use the cat health API to verify cluster health across multiple nodes. You can also use the API to track the recovery of a large cluster over a longer period of time.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • ts boolean

    If true, returns HH:MM:SS and Unix epoch timestamps.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
    • epoch number | string

      Some APIs will return values such as numbers also as a string (notably epoch timestamps). This type is used to capture this behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      One of:

      Time unit for seconds

    • timestamp string

      Time of day, expressed as HH:MM:SS

    • cluster string

      cluster name

    • status string

      health status

    • node.total string

      total number of nodes

    • node.data string

      number of nodes that can store data

    • shards string

      total number of shards

    • pri string

      number of primary shards

    • relo string

      number of relocating nodes

    • init string

      number of initializing nodes

    • unassign.pri string

      number of unassigned primary shards

    • unassign string

      number of unassigned shards

    • pending_tasks string

      number of pending tasks

    • max_task_wait_time string

      wait time of longest task pending

    • active_shards_percent string

      active number of shards in percent

GET /_cat/health?v=true&format=json
resp = client.cat.health(
    v=True,
    format="json",
)
const response = await client.cat.health({
  v: "true",
  format: "json",
});
response = client.cat.health(
  v: "true",
  format: "json"
)
$resp = $client->cat()->health([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/health?v=true&format=json"
client.cat().health();
Response examples (200)
A successful response from `GET /_cat/health?v=true&format=json`. By default, it returns `HH:MM:SS` and Unix epoch timestamps.
[
  {
    "epoch": "1475871424",
    "timestamp": "16:17:04",
    "cluster": "elasticsearch",
    "status": "green",
    "node.total": "1",
    "node.data": "1",
    "shards": "1",
    "pri": "1",
    "relo": "0",
    "init": "0",
    "unassign": "0",
    "unassign.pri": "0",
    "pending_tasks": "0",
    "max_task_wait_time": "-",
    "active_shards_percent": "100.0%"
  }
]
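
If you do not need the timestamp columns, a variant of this request can drop them with the ts flag described above (a sketch using the same placeholders as the other examples):

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/health?v=true&ts=false&format=json"

The response then omits the epoch and timestamp fields.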

Get CAT help Generally available

GET /_cat

Get help for the CAT APIs.

Responses

  • 200 application/json
GET /_cat
curl \
 --request GET 'http://api.example.com/_cat' \
 --header "Authorization: $API_KEY"

Get index information Generally available

GET /_cat/indices/{index}

All methods and paths for this operation:

GET /_cat/indices

GET /_cat/indices/{index}

Get high-level information about indices in a cluster, including backing indices for data streams.

Use this request to get the following information for each index in a cluster:

  • shard count
  • document count
  • deleted document count
  • primary store size
  • total store size of all shards, including shard replicas

These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the cat count or count APIs.

CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint.

Required authorization

  • Index privileges: monitor
  • Cluster privileges: monitor

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • health string

    The health status used to limit returned indices. By default, the response includes indices of any health status.

    Supported values include:

    • green (or GREEN): All shards are assigned.
    • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
    • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
    • unknown
    • unavailable

    Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

  • include_unloaded_segments boolean

    If true, the response includes information from segments that are not loaded into memory.

  • pri boolean

    If true, the response only includes information from primary shards.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
    • health string

      current health status

    • status string

      open/close status

    • index string

      index name

    • uuid string

      index uuid

    • pri string

      number of primary shards

    • rep string

      number of replica shards

    • docs.count string | null

      available docs

    • docs.deleted string | null

      deleted docs

    • creation.date string

      index creation date (millisecond value)

    • creation.date.string string

      index creation date (as string)

    • store.size string | null

      store size of primaries & replicas

    • pri.store.size string | null

      store size of primaries

    • dataset.size string | null

      total size of dataset (including the cache for partially mounted indices)

    • completion.size string

      size of completion

    • pri.completion.size string

      size of completion

    • fielddata.memory_size string

      used fielddata cache

    • pri.fielddata.memory_size string

      used fielddata cache

    • fielddata.evictions string

      fielddata evictions

    • pri.fielddata.evictions string

      fielddata evictions

    • query_cache.memory_size string

      used query cache

    • pri.query_cache.memory_size string

      used query cache

    • query_cache.evictions string

      query cache evictions

    • pri.query_cache.evictions string

      query cache evictions

    • request_cache.memory_size string

      used request cache

    • pri.request_cache.memory_size string

      used request cache

    • request_cache.evictions string

      request cache evictions

    • pri.request_cache.evictions string

      request cache evictions

    • request_cache.hit_count string

      request cache hit count

    • pri.request_cache.hit_count string

      request cache hit count

    • request_cache.miss_count string

      request cache miss count

    • pri.request_cache.miss_count string

      request cache miss count

    • flush.total string

      number of flushes

    • pri.flush.total string

      number of flushes

    • flush.total_time string

      time spent in flush

    • pri.flush.total_time string

      time spent in flush

    • get.current string

      number of current get ops

    • pri.get.current string

      number of current get ops

    • get.time string

      time spent in get

    • pri.get.time string

      time spent in get

    • get.total string

      number of get ops

    • pri.get.total string

      number of get ops

    • get.exists_time string

      time spent in successful gets

    • pri.get.exists_time string

      time spent in successful gets

    • get.exists_total string

      number of successful gets

    • pri.get.exists_total string

      number of successful gets

    • get.missing_time string

      time spent in failed gets

    • pri.get.missing_time string

      time spent in failed gets

    • get.missing_total string

      number of failed gets

    • pri.get.missing_total string

      number of failed gets

    • indexing.delete_current string

      number of current deletions

    • pri.indexing.delete_current string

      number of current deletions

    • indexing.delete_time string

      time spent in deletions

    • pri.indexing.delete_time string

      time spent in deletions

    • indexing.delete_total string

      number of delete ops

    • pri.indexing.delete_total string

      number of delete ops

    • indexing.index_current string

      number of current indexing ops

    • pri.indexing.index_current string

      number of current indexing ops

    • indexing.index_time string

      time spent in indexing

    • pri.indexing.index_time string

      time spent in indexing

    • indexing.index_total string

      number of indexing ops

    • pri.indexing.index_total string

      number of indexing ops

    • indexing.index_failed string

      number of failed indexing ops

    • pri.indexing.index_failed string

      number of failed indexing ops

    • merges.current string

      number of current merges

    • pri.merges.current string

      number of current merges

    • merges.current_docs string

      number of current merging docs

    • pri.merges.current_docs string

      number of current merging docs

    • merges.current_size string

      size of current merges

    • pri.merges.current_size string

      size of current merges

    • merges.total string

      number of completed merge ops

    • pri.merges.total string

      number of completed merge ops

    • merges.total_docs string

      docs merged

    • pri.merges.total_docs string

      docs merged

    • merges.total_size string

      size merged

    • pri.merges.total_size string

      size merged

    • merges.total_time string

      time spent in merges

    • pri.merges.total_time string

      time spent in merges

    • refresh.total string

      total refreshes

    • pri.refresh.total string

      total refreshes

    • refresh.time string

      time spent in refreshes

    • pri.refresh.time string

      time spent in refreshes

    • refresh.external_total string

      total external refreshes

    • pri.refresh.external_total string

      total external refreshes

    • refresh.external_time string

      time spent in external refreshes

    • pri.refresh.external_time string

      time spent in external refreshes

    • refresh.listeners string

      number of pending refresh listeners

    • pri.refresh.listeners string

      number of pending refresh listeners

    • search.fetch_current string

      current fetch phase ops

    • pri.search.fetch_current string

      current fetch phase ops

    • search.fetch_time string

      time spent in fetch phase

    • pri.search.fetch_time string

      time spent in fetch phase

    • search.fetch_total string

      total fetch ops

    • pri.search.fetch_total string

      total fetch ops

    • search.open_contexts string

      open search contexts

    • pri.search.open_contexts string

      open search contexts

    • search.query_current string

      current query phase ops

    • pri.search.query_current string

      current query phase ops

    • search.query_time string

      time spent in query phase

    • pri.search.query_time string

      time spent in query phase

    • search.query_total string

      total query phase ops

    • pri.search.query_total string

      total query phase ops

    • search.scroll_current string

      open scroll contexts

    • pri.search.scroll_current string

      open scroll contexts

    • search.scroll_time string

      time scroll contexts held open

    • pri.search.scroll_time string

      time scroll contexts held open

    • search.scroll_total string

      completed scroll contexts

    • pri.search.scroll_total string

      completed scroll contexts

    • segments.count string

      number of segments

    • pri.segments.count string

      number of segments

    • segments.memory string

      memory used by segments

    • pri.segments.memory string

      memory used by segments

    • segments.index_writer_memory string

      memory used by index writer

    • pri.segments.index_writer_memory string

      memory used by index writer

    • segments.version_map_memory string

      memory used by version map

    • pri.segments.version_map_memory string

      memory used by version map

    • segments.fixed_bitset_memory string

      memory used by fixed bit sets for nested object field types and type filters for types referred to in _parent fields

    • pri.segments.fixed_bitset_memory string

      memory used by fixed bit sets for nested object field types and type filters for types referred to in _parent fields

    • warmer.current string

      current warmer ops

    • pri.warmer.current string

      current warmer ops

    • warmer.total string

      total warmer ops

    • pri.warmer.total string

      total warmer ops

    • warmer.total_time string

      time spent in warmers

    • pri.warmer.total_time string

      time spent in warmers

    • suggest.current string

      number of current suggest ops

    • pri.suggest.current string

      number of current suggest ops

    • suggest.time string

      time spent in suggest

    • pri.suggest.time string

      time spent in suggest

    • suggest.total string

      number of suggest ops

    • pri.suggest.total string

      number of suggest ops

    • memory.total string

      total used memory

    • pri.memory.total string

      total used memory

    • search.throttled string

      indicates if the index is search throttled

    • bulk.total_operations string

      number of bulk shard ops

    • pri.bulk.total_operations string

      number of bulk shard ops

    • bulk.total_time string

      time spent in shard bulk

    • pri.bulk.total_time string

      time spent in shard bulk

    • bulk.total_size_in_bytes string

      total size in bytes of shard bulk

    • pri.bulk.total_size_in_bytes string

      total size in bytes of shard bulk

    • bulk.avg_time string

      average time spent in shard bulk

    • pri.bulk.avg_time string

      average time spent in shard bulk

    • bulk.avg_size_in_bytes string

      average size in bytes of shard bulk

    • pri.bulk.avg_size_in_bytes string

      average size in bytes of shard bulk

GET /_cat/indices/my-index-*?v=true&s=index&format=json
resp = client.cat.indices(
    index="my-index-*",
    v=True,
    s="index",
    format="json",
)
const response = await client.cat.indices({
  index: "my-index-*",
  v: "true",
  s: "index",
  format: "json",
});
response = client.cat.indices(
  index: "my-index-*",
  v: "true",
  s: "index",
  format: "json"
)
$resp = $client->cat()->indices([
    "index" => "my-index-*",
    "v" => "true",
    "s" => "index",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/indices/my-index-*?v=true&s=index&format=json"
client.cat().indices();
Response examples (200)
A successful response from `GET /_cat/indices/my-index-*?v=true&s=index&format=json`.
[
  {
    "health": "yellow",
    "status": "open",
    "index": "my-index-000001",
    "uuid": "u8FNjxh8Rfy_awN11oDKYQ",
    "pri": "1",
    "rep": "1",
    "docs.count": "1200",
    "docs.deleted": "0",
    "store.size": "88.1kb",
    "pri.store.size": "88.1kb",
    "dataset.size": "88.1kb"
  },
  {
    "health": "green",
    "status": "open",
    "index": "my-index-000002",
    "uuid": "nYFWZEO7TUiOjLQXBaYJpA ",
    "pri": "1",
    "rep": "0",
    "docs.count": "0",
    "docs.deleted": "0",
    "store.size": "260b",
    "pri.store.size": "260b",
    "dataset.size": "260b"
  }
]
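
The merge, refresh, and bulk columns listed above are not part of the default output; you can request them explicitly with the h query parameter. The following is a minimal sketch using the Python client from the examples above (the endpoint, API key, and column selection are illustrative, not prescriptive):

from elasticsearch import Elasticsearch

# Hypothetical connection details; substitute your own endpoint and API key.
client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# Request only a handful of the columns documented above and ask for JSON
# so the response can be processed as a list of dictionaries.
rows = client.cat.indices(
    index="my-index-*",
    h="index,merges.total,merges.total_time,refresh.total,bulk.avg_time",
    s="index",
    format="json",
)

for row in rows:
    # Each row is keyed by the requested column names.
    print(row["index"], row["merges.total"], row["refresh.total"])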

Get master node information Generally available

GET /_cat/master

Get information about the master node, including the ID, bound IP address, and name.

IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases, the coordinating node sends requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string

      node id

    • host string

      host name

    • ip string

      ip address

    • node string

      node name

GET /_cat/master?v=true&format=json
resp = client.cat.master(
    v=True,
    format="json",
)
const response = await client.cat.master({
  v: "true",
  format: "json",
});
response = client.cat.master(
  v: "true",
  format: "json"
)
$resp = $client->cat()->master([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/master?v=true&format=json"
client.cat().master();
Response examples (200)
A successful response from `GET /_cat/master?v=true&format=json`.
[
  {
    "id": "YzWoH_2BT-6UjVGDyPdqYg",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "YzWoH_2"
  }
]
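
If you only need a subset of these columns, the h and s parameters described above can trim and order the output. A minimal sketch, reusing the Python client from the earlier example (the chosen columns are illustrative):

# Assumes `client` is an elasticsearch.Elasticsearch instance, as constructed earlier.
resp = client.cat.master(
    h="id,node",   # return only the node ID and node name columns
    v=True,
    format="json",
)
print(resp)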

Get data frame analytics jobs Generally available; Added in 7.7.0

GET /_cat/ml/data_frame/analytics/{id}

All methods and paths for this operation:

GET /_cat/ml/data_frame/analytics

GET /_cat/ml/data_frame/analytics/{id}

Get configuration and usage information about data frame analytics jobs.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get data frame analytics jobs statistics API.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • id string Required

    The ID of the data frame analytics to fetch

Query parameters

  • allow_no_match boolean

    Whether to ignore a wildcard expression that matches no data frame analytics configurations. (This also applies when the _all string is used or when no configurations have been specified.)

  • bytes string

    The unit in which to display byte values

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • assignment_explanation (or ae): Contains messages relating to the selection of a node.
    • create_time (or ct, createTime): The time when the data frame analytics job was created.
    • description (or d): A description of a job.
    • dest_index (or di, destIndex): Name of the destination index.
    • failure_reason (or fr, failureReason): Contains messages about the reason why a data frame analytics job failed.
    • id: Identifier for the data frame analytics job.
    • model_memory_limit (or mml, modelMemoryLimit): The approximate maximum amount of memory resources that are permitted for the data frame analytics job.
    • node.address (or na, nodeAddress): The network address of the node that the data frame analytics job is assigned to.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that the data frame analytics job is assigned to.
    • node.id (or ni, nodeId): The unique identifier of the node that the data frame analytics job is assigned to.
    • node.name (or nn, nodeName): The name of the node that the data frame analytics job is assigned to.
    • progress (or p): The progress report of the data frame analytics job by phase.
    • source_index (or si, sourceIndex): Name of the source index.
    • state (or s): Current state of the data frame analytics job.
    • type (or t): The type of analysis that the data frame analytics job performs.
    • version (or v): The Elasticsearch version number in which the data frame analytics job was created.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • assignment_explanation (or ae): Contains messages relating to the selection of a node.
    • create_time (or ct, createTime): The time when the data frame analytics job was created.
    • description (or d): A description of a job.
    • dest_index (or di, destIndex): Name of the destination index.
    • failure_reason (or fr, failureReason): Contains messages about the reason why a data frame analytics job failed.
    • id: Identifier for the data frame analytics job.
    • model_memory_limit (or mml, modelMemoryLimit): The approximate maximum amount of memory resources that are permitted for the data frame analytics job.
    • node.address (or na, nodeAddress): The network address of the node that the data frame analytics job is assigned to.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that the data frame analytics job is assigned to.
    • node.id (or ni, nodeId): The unique identifier of the node that the data frame analytics job is assigned to.
    • node.name (or nn, nodeName): The name of the node that the data frame analytics job is assigned to.
    • progress (or p): The progress report of the data frame analytics job by phase.
    • source_index (or si, sourceIndex): Name of the source index.
    • state (or s): Current state of the data frame analytics job.
    • type (or t): The type of analysis that the data frame analytics job performs.
    • version (or v): The Elasticsearch version number in which the data frame analytics job was created.
  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • type string

      The type of analysis that the job performs.

    • create_time string

      The time when the job was created.

    • version string
    • source_index string
    • dest_index string
    • description string

      A description of the job.

    • model_memory_limit string

      The approximate maximum amount of memory resources that are permitted for the job.

    • state string

      The current status of the job.

    • failure_reason string

      Messages about the reason why the job failed.

    • progress string

      The progress report for the job by phase.

    • assignment_explanation string

      Messages related to the selection of a node.

    • node.id string
    • node.name string
    • node.ephemeral_id string
    • node.address string

      The network address of the assigned node.

GET _cat/ml/data_frame/analytics?v=true&format=json
resp = client.cat.ml_data_frame_analytics(
    v=True,
    format="json",
)
const response = await client.cat.mlDataFrameAnalytics({
  v: "true",
  format: "json",
});
response = client.cat.ml_data_frame_analytics(
  v: "true",
  format: "json"
)
$resp = $client->cat()->mlDataFrameAnalytics([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/data_frame/analytics?v=true&format=json"
client.cat().mlDataFrameAnalytics();
Response examples (200)
A successful response from `GET _cat/ml/data_frame/analytics?v=true&format=json`.
[
  {
    "id": "classifier_job_1",
    "type": "classification",
    "create_time": "2020-02-12T11:49:09.594Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_2",
    "type": "classification",
    "create_time": "2020-02-12T11:49:14.479Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_3",
    "type": "classification",
    "create_time": "2020-02-12T11:49:16.928Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_4",
    "type": "classification",
    "create_time": "2020-02-12T11:49:19.127Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_5",
    "type": "classification",
    "create_time": "2020-02-12T11:49:21.349Z",
    "state": "stopped"
  }
]
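
The h and s parameters accept the column names and aliases listed above. A hedged sketch with the same Python client (the columns and sort key are only examples):

# Show each job's identifier, state, and per-phase progress, sorted by creation time.
resp = client.cat.ml_data_frame_analytics(
    h="id,state,progress",   # aliases such as "id,s,p" work as well
    s="create_time",
    format="json",
)
for job in resp:
    print(job["id"], job["state"], job.get("progress"))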

Get datafeeds Generally available; Added in 7.7.0

GET /_cat/ml/datafeeds/{datafeed_id}

All methods and paths for this operation:

GET /_cat/ml/datafeeds

GET /_cat/ml/datafeeds/{datafeed_id}

Get configuration and usage information about datafeeds. This API returns a maximum of 10,000 datafeeds. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get datafeed statistics API.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • datafeed_id string Required

    A numerical character string that uniquely identifies the datafeed.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no datafeeds that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.
  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string

      The datafeed identifier.

    • state string

      Values are started, stopped, starting, or stopping.

    • assignment_explanation string

      For started datafeeds only, contains messages relating to the selection of a node.

    • buckets.count string

      The number of buckets processed.

    • search.count string

      The number of searches run by the datafeed.

    • search.time string

      The total time the datafeed spent searching, in milliseconds.

    • search.bucket_avg string

      The average search time per bucket, in milliseconds.

    • search.exp_avg_hour string

      The exponential average search time per hour, in milliseconds.

    • node.id string

      The unique identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.name string

      The name of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.ephemeral_id string

      The ephemeral identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.address string

      The network address of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

GET _cat/ml/datafeeds?v=true&format=json
resp = client.cat.ml_datafeeds(
    v=True,
    format="json",
)
const response = await client.cat.mlDatafeeds({
  v: "true",
  format: "json",
});
response = client.cat.ml_datafeeds(
  v: "true",
  format: "json"
)
$resp = $client->cat()->mlDatafeeds([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/datafeeds?v=true&format=json"
client.cat().mlDatafeeds();
Response examples (200)
A successful response from `GET _cat/ml/datafeeds?v=true&format=json`.
[
  {
    "id": "datafeed-high_sum_total_sales",
    "state": "stopped",
    "buckets.count": "743",
    "search.count": "7"
  },
  {
    "id": "datafeed-low_request_rate",
    "state": "stopped",
    "buckets.count": "1457",
    "search.count": "3"
  },
  {
    "id": "datafeed-response_code_rates",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  },
  {
    "id": "datafeed-url_scanning",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  }
]
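
The allow_no_match parameter described above controls whether a non-matching wildcard produces an empty result or an error. A minimal sketch of that behaviour with the same Python client (the datafeed pattern is hypothetical):

# With allow_no_match=True, a wildcard that matches nothing yields an empty array.
resp = client.cat.ml_datafeeds(
    datafeed_id="datafeed-does-not-exist*",
    allow_no_match=True,
    format="json",
)
print(resp)  # expected: []

# With allow_no_match=False, the same request returns a 404 instead.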

Get anomaly detection jobs Generally available; Added in 7.7.0

GET /_cat/ml/anomaly_detectors/{job_id}

All methods and paths for this operation:

GET /_cat/ml/anomaly_detectors

GET /_cat/ml/anomaly_detectors/{job_id}

Get configuration and usage information for anomaly detection jobs. This API returns a maximum of 10,000 jobs. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no jobs that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening. If closed, the job finished successfully with its model state persisted. The job must be opened before it can accept further data. If closing, the job close action is in progress and has not yet completed. A closing job cannot accept further data. If failed, the job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job had irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened. If opened, the job is available to receive and process data. If opening, the job open action is in progress and has not yet completed.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening. If closed, the job finished successfully with its model state persisted. The job must be opened before it can accept further data. If closing, the job close action is in progress and has not yet completed. A closing job cannot accept further data. If failed, the job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job had irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened. If opened, the job is available to receive and process data. If opening, the job open action is in progress and has not yet completed.
  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • state string

      Values are closing, closed, opened, failed, or opening.

    • opened_time string

      For open jobs only, the amount of time the job has been opened.

    • assignment_explanation string

      For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.

    • data.processed_records string

      The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed_record_count is the number of aggregation results processed, not the number of Elasticsearch documents.

    • data.processed_fields string

      The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.

    • data.input_bytes number | string

    • data.input_records string

      The number of input documents posted to the anomaly detection job.

    • data.input_fields string

      The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.

    • data.invalid_dates string

      The number of input documents with either a missing date field or a date that could not be parsed.

    • data.missing_fields string

      The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing. If you are using datafeeds or posting data to the job in JSON format, a high missing_field_count is often not an indication of data issues. It is not necessarily a cause for concern.

    • data.out_of_order_timestamps string

      The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.

    • data.empty_buckets string

      The number of buckets which did not contain any data. If your data contains many empty buckets, consider increasing your bucket_span or using functions that are tolerant to gaps in data such as mean, non_null_sum or non_zero_count.

    • data.sparse_buckets string

      The number of buckets that contained few data points compared to the expected number of data points. If your data contains many sparse buckets, consider using a longer bucket_span.

    • data.buckets string

      The total number of buckets processed.

    • data.earliest_record string

      The timestamp of the earliest chronologically input document.

    • data.latest_record string

      The timestamp of the latest chronologically input document.

    • data.last string

      The timestamp at which data was last analyzed, according to server time.

    • data.last_empty_bucket string

      The timestamp of the last bucket that did not contain any data.

    • data.last_sparse_bucket string

      The timestamp of the last bucket that was considered sparse.

    • model.bytes number | string

    • model.memory_status string

      Values are ok, soft_limit, or hard_limit.

    • model.bytes_exceeded number | string

    • model.memory_limit string

      The upper limit for model memory usage, checked on increasing values.

    • model.by_fields string

      The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.over_fields string

      The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.partition_fields string

      The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.bucket_allocation_failures string

      The number of buckets for which new entities in incoming data were not processed due to insufficient model memory. This situation is also signified by a hard_limit: memory_status property value.

    • model.categorization_status string

      Values are ok or warn.

    • model.categorized_doc_count string

      The number of documents that have had a field categorized.

    • model.total_category_count string

      The number of categories created by categorization.

    • model.frequent_category_count string

      The number of categories that match more than 1% of categorized documents.

    • model.rare_category_count string

      The number of categories that match just one categorized document.

    • model.dead_category_count string

      The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.

    • model.failed_category_count string

      The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model_memory_limit. This count does not track which specific categories failed to be created. Therefore you cannot use this value to determine the number of unique categories that were missed.

    • model.log_time string

      The timestamp when the model stats were gathered, according to server time.

    • model.timestamp string

      The timestamp of the last record when the model stats were gathered.

    • forecasts.total string

      The number of individual forecasts currently available for the job. A value of one or more indicates that forecasts exist.

    • forecasts.memory.min string

      The minimum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.max string

      The maximum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.avg string

      The average memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.total string

      The total memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.records.min string

      The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.max string

      The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.avg string

      The average number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.total string

      The total number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.time.min string

      The minimum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.max string

      The maximum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.avg string

      The average runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.total string

      The total runtime in milliseconds for forecasts related to the anomaly detection job.

    • node.id string
    • node.name string

      The name of the assigned node.

    • node.ephemeral_id string
    • node.address string

      The network address of the assigned node.

    • buckets.count string

      The number of bucket results produced by the job.

    • buckets.time.total string

      The sum of all bucket processing times, in milliseconds.

    • buckets.time.min string

      The minimum of all bucket processing times, in milliseconds.

    • buckets.time.max string

      The maximum of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg string

      The exponential moving average of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg_hour string

      The exponential moving average of bucket processing times calculated in a one hour time window, in milliseconds.

GET _cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json
resp = client.cat.ml_jobs(
    h="id,s,dpr,mb",
    v=True,
    format="json",
)
const response = await client.cat.mlJobs({
  h: "id,s,dpr,mb",
  v: "true",
  format: "json",
});
response = client.cat.ml_jobs(
  h: "id,s,dpr,mb",
  v: "true",
  format: "json"
)
$resp = $client->cat()->mlJobs([
    "h" => "id,s,dpr,mb",
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json"
client.cat().mlJobs();
Response examples (200)
A successful response from `GET _cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json`.
[
  {
    "id": "high_sum_total_sales",
    "s": "closed",
    "dpr": "14022",
    "mb": "1.5mb"
  },
  {
    "id": "low_request_rate",
    "s": "closed",
    "dpr": "1216",
    "mb": "40.5kb"
  },
  {
    "id": "response_code_rates",
    "s": "closed",
    "dpr": "28146",
    "mb": "132.7kb"
  },
  {
    "id": "url_scanning",
    "s": "closed",
    "dpr": "28146",
    "mb": "501.6kb"
  }
]
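
Because the s parameter accepts the :asc and :desc suffixes used throughout the cat APIs, you can, for example, sort jobs by model size. A sketch with the same Python client (column choice and sort order are illustrative):

# List job ID, state, and model bytes, largest models first.
resp = client.cat.ml_jobs(
    h="id,s,mb",
    s="mb:desc",
    v=True,
    format="json",
)
for job in resp:
    print(job["id"], job["s"], job["mb"])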

Get trained models Generally available; Added in 7.7.0

GET /_cat/ml/trained_models/{model_id}

All methods and paths for this operation:

GET /_cat/ml/trained_models

GET /_cat/ml/trained_models/{model_id}

Get configuration and usage information about inference trained models.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get trained models statistics API.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • model_id string Required

    A unique identifier for the trained model.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no models that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    A comma-separated list of column names to display.

    Supported values include:

    • create_time (or ct): The time when the trained model was created.
    • created_by (or c, createdBy): Information on the creator of the trained model.
    • data_frame_analytics_id (or df, dataFrameAnalytics, dfid): Identifier for the data frame analytics job that created the model. Only displayed if it is still available.
    • description (or d): The description of the trained model.
    • heap_size (or hs, modelHeapSize): The estimated heap size to keep the trained model in memory.
    • id: Identifier for the trained model.
    • ingest.count (or ic, ingestCount): The total number of documents that are processed by the model.
    • ingest.current (or icurr, ingestCurrent): The total number of documents that are currently being handled by the trained model.
    • ingest.failed (or if, ingestFailed): The total number of failed ingest attempts with the trained model.
    • ingest.pipelines (or ip, ingestPipelines): The total number of ingest pipelines that are referencing the trained model.
    • ingest.time (or it, ingestTime): The total time that is spent processing documents with the trained model.
    • license (or l): The license level of the trained model.
    • operations (or o, modelOperations): The estimated number of operations to use the trained model. This number helps to measure the computational complexity of the model.
    • version (or v): The Elasticsearch version number in which the trained model was created.
  • s string | array[string]

    A comma-separated list of column names or aliases used to sort the response.

    Supported values include:

    • create_time (or ct): The time when the trained model was created.
    • created_by (or c, createdBy): Information on the creator of the trained model.
    • data_frame_analytics_id (or df, dataFrameAnalytics, dfid): Identifier for the data frame analytics job that created the model. Only displayed if it is still available.
    • description (or d): The description of the trained model.
    • heap_size (or hs, modelHeapSize): The estimated heap size to keep the trained model in memory.
    • id: Identifier for the trained model.
    • ingest.count (or ic, ingestCount): The total number of documents that are processed by the model.
    • ingest.current (or icurr, ingestCurrent): The total number of documents that are currently being handled by the trained model.
    • ingest.failed (or if, ingestFailed): The total number of failed ingest attempts with the trained model.
    • ingest.pipelines (or ip, ingestPipelines): The total number of ingest pipelines that are referencing the trained model.
    • ingest.time (or it, ingestTime): The total time that is spent processing documents with the trained model.
    • license (or l): The license level of the trained model.
    • operations (or o, modelOperations): The estimated number of operations to use the trained model. This number helps to measure the computational complexity of the model.
    • version (or v): The Elasticsearch version number in which the trained model was created.
  • from number

    Skips the specified number of trained models.

  • size number

    The maximum number of trained models to display.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • created_by string

      Information about the creator of the model.

    • heap_size number | string

    • operations string

      The estimated number of operations to use the model. This number helps to measure the computational complexity of the model.

    • license string

      The license level of the model.

    • create_time string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • version string
    • description string

      A description of the model.

    • ingest.pipelines string

      The number of pipelines that are referencing the model.

    • ingest.count string

      The total number of documents that are processed by the model.

    • ingest.time string

      The total time spent processing documents with the model.

    • ingest.current string

      The total number of documents that are currently being handled by the model.

    • ingest.failed string

      The total number of failed ingest attempts with the model.

    • data_frame.id string

      The identifier for the data frame analytics job that created the model. Only displayed if the job is still available.

    • data_frame.create_time string

      The time the data frame analytics job was created.

    • data_frame.source_index string

      The source index used to train in the data frame analysis.

    • data_frame.analysis string

      The analysis used by the data frame to build the model.

    • type string Generally available; Added in 8.0.0
GET /_cat/ml/trained_models/{model_id}
GET _cat/ml/trained_models?v=true&format=json
resp = client.cat.ml_trained_models(
    v=True,
    format="json",
)
const response = await client.cat.mlTrainedModels({
  v: "true",
  format: "json",
});
response = client.cat.ml_trained_models(
  v: "true",
  format: "json"
)
$resp = $client->cat()->mlTrainedModels([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/trained_models?v=true&format=json"
client.cat().mlTrainedModels();
Response examples (200)
A successful response from `GET _cat/ml/trained_models?v=true&format=json`.
[
  {
    "id": "ddddd-1580216177138",
    "heap_size": "0b",
    "operations": "196",
    "create_time": "2025-03-25T00:01:38.662Z",
    "type": "pytorch",
    "ingest.pipelines": "0",
    "data_frame.id": "__none__"
  },
  {
    "id": "lang_ident_model_1",
    "heap_size": "1mb",
    "operations": "39629",
    "create_time": "2019-12-05T12:28:34.594Z",
    "type": "lang_ident",
    "ingest.pipelines": "0",
    "data_frame.id": "__none__"
  }
]
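
The `h` and `s` query parameters documented above accept the same column aliases, so a request can both narrow and sort the table. The snippet below is a minimal sketch, assuming the same configured Python client as the examples above; the aliases `id`, `hs`, `o`, and `ct` come from the supported-values lists for `h` and `s`, and (as in the nodes examples later in this document) the JSON keys mirror the requested column names.

# Sketch: list trained models with selected columns, newest first.
# Assumes `client` is the configured elasticsearch-py client used in the examples above.
resp = client.cat.ml_trained_models(
    h="id,hs,o,ct",  # id, heap_size, operations, create_time (aliases from the h list above)
    s="ct:desc",     # sort by create_time, descending (the :desc suffix reverses the sort)
    v=True,
    format="json",
)
for model in resp:
    print(model["id"], model["hs"], model["o"], model["ct"])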

Get node attribute information Generally available

GET /_cat/nodeattrs

Get information about custom node attributes. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • node string

      The node name.

    • id string

      The unique node identifier.

    • pid string

      The process identifier.

    • host string

      The host name.

    • ip string

      The IP address.

    • port string

      The bound transport port.

    • attr string

      The attribute name.

    • value string

      The attribute value.

GET /_cat/nodeattrs?v=true&format=json
resp = client.cat.nodeattrs(
    v=True,
    format="json",
)
const response = await client.cat.nodeattrs({
  v: "true",
  format: "json",
});
response = client.cat.nodeattrs(
  v: "true",
  format: "json"
)
$resp = $client->cat()->nodeattrs([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/nodeattrs?v=true&format=json"
client.cat().nodeattrs();
Response examples (200)
A successful response from `GET /_cat/nodeattrs?v=true&format=json`. The `node`, `host`, and `ip` columns provide basic information about each node. The `attr` and `value` columns return custom node attributes, one per line.
[
  {
    "node": "node-0",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "attr": "testattr",
    "value": "test"
  }
]
A successful response from `GET /_cat/nodeattrs?v=true&h=name,pid,attr,value`. It returns the `name`, `pid`, `attr`, and `value` columns.
[
  {
    "name": "node-0",
    "pid": "19566",
    "attr": "testattr",
    "value": "test"
  }
]
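
The second response above shows the `h=name,pid,attr,value` request but no client snippet. A minimal sketch of that request, assuming the same Python client as the earlier examples:

# Sketch: request only the name, pid, attr, and value columns (second example above).
resp = client.cat.nodeattrs(
    v=True,
    h="name,pid,attr,value",
    format="json",
)
for row in resp:
    print(f'{row["name"]} (pid {row["pid"]}): {row["attr"]}={row["value"]}')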

Get node information Generally available

GET /_cat/nodes

Get information about the nodes in a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • full_id boolean | string

    If true, return the full node ID. If false, return the shortened node ID.

  • include_unloaded_segments boolean

    If true, the response includes information from segments that are not loaded into memory.

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • build (or b): The Elasticsearch build hash. For example: 5c03844.
    • completion.size (or cs, completionSize): The size of completion. For example: 0b.
    • cpu: The percentage of recent system CPU used.
    • disk.avail (or d, disk, diskAvail): The available disk space. For example: 198.4gb.
    • disk.total (or dt, diskTotal): The total disk space. For example: 458.3gb.
    • disk.used (or du, diskUsed): The used disk space. For example: 259.8gb.
    • disk.used_percent (or dup, diskUsedPercent): The percentage of disk space used.
    • fielddata.evictions (or fe, fielddataEvictions): The number of fielddata cache evictions.
    • fielddata.memory_size (or fm, fielddataMemory): The fielddata cache memory used. For example: 0b.
    • file_desc.current (or fdc, fileDescriptorCurrent): The number of file descriptors used.
    • file_desc.max (or fdm, fileDescriptorMax): The maximum number of file descriptors.
    • file_desc.percent (or fdp, fileDescriptorPercent): The percentage of file descriptors used.
    • flush.total (or ft, flushTotal): The number of flushes.
    • flush.total_time (or ftt, flushTotalTime): The amount of time spent in flush.
    • get.current (or gc, getCurrent): The number of current get operations.
    • get.exists_time (or geti, getExistsTime): The time spent in successful get operations. For example: 14ms.
    • get.exists_total (or geto, getExistsTotal): The number of successful get operations.
    • get.missing_time (or gmti, getMissingTime): The time spent in failed get operations. For example: 0s.
    • get.missing_total (or gmto, getMissingTotal): The number of failed get operations.
    • get.time (or gti, getTime): The amount of time spent in get operations. For example: 14ms.
    • get.total (or gto, getTotal): The number of get operations.
    • heap.current (or hc, heapCurrent): The used heap size. For example: 311.2mb.
    • heap.max (or hm, heapMax): The total heap size. For example: 4gb.
    • heap.percent (or hp, heapPercent): The used percentage of total allocated Elasticsearch JVM heap. This value reflects only the Elasticsearch process running within the operating system and is the most direct indicator of its JVM, heap, or memory resource performance.
    • http_address (or http): The bound HTTP address.
    • id (or nodeId): The identifier for the node.
    • indexing.delete_current (or idc, indexingDeleteCurrent): The number of current deletion operations.
    • indexing.delete_time (or idti, indexingDeleteTime): The time spent in deletion operations. For example: 2ms.
    • indexing.delete_total (or idto, indexingDeleteTotal): The number of deletion operations.
    • indexing.index_current (or iic, indexingIndexCurrent): The number of current indexing operations.
    • indexing.index_failed (or iif, indexingIndexFailed): The number of failed indexing operations.
    • indexing.index_failed_due_to_version_conflict (or iifvc, indexingIndexFailedDueToVersionConflict): The number of indexing operations that failed due to version conflict.
    • indexing.index_time (or iiti, indexingIndexTime): The time spent in indexing operations. For example: 134ms.
    • indexing.index_total (or iito, indexingIndexTotal): The number of indexing operations.
    • ip (or i): The IP address.
    • jdk (or j): The Java version. For example: 1.8.0.
    • load_1m (or l): The most recent load average. For example: 0.22.
    • load_5m (or l): The load average for the last five minutes. For example: 0.78.
    • load_15m (or l): The load average for the last fifteen minutes. For example: 1.24.
    • mappings.total_count (or mtc, mappingsTotalCount): The number of mappings, including runtime and object fields.
    • mappings.total_estimated_overhead_in_bytes (or mteo, mappingsTotalEstimatedOverheadInBytes): The estimated heap overhead, in bytes, of mappings on this node, which allows for 1KiB of heap for every mapped field.
    • master (or m): Indicates whether the node is the elected master node. Returned values include * (elected master) and - (not elected master).
    • merges.current (or mc, mergesCurrent): The number of current merge operations.
    • merges.current_docs (or mcd, mergesCurrentDocs): The number of current merging documents.
    • merges.current_size (or mcs, mergesCurrentSize): The size of current merges. For example: 0b.
    • merges.total (or mt, mergesTotal): The number of completed merge operations.
    • merges.total_docs (or mtd, mergesTotalDocs): The number of merged documents.
    • merges.total_size (or mts, mergesTotalSize): The total size of merges. For example: 0b.
    • merges.total_time (or mtt, mergesTotalTime): The time spent merging documents. For example: 0s.
    • name (or n): The node name.
    • node.role (or r, role, nodeRole): The roles of the node. Returned values include c (cold node), d (data node), f (frozen node), h (hot node), i (ingest node), l (machine learning node), m (master-eligible node), r (remote cluster client node), s (content node), t (transform node), v (voting-only node), w (warm node), and - (coordinating node only). For example, dim indicates a master-eligible data and ingest node.
    • pid (or p): The process identifier.
    • port (or po): The bound transport port number.
    • query_cache.memory_size (or qcm, queryCacheMemory): The used query cache memory. For example: 0b.
    • query_cache.evictions (or qce, queryCacheEvictions): The number of query cache evictions.
    • query_cache.hit_count (or qchc, queryCacheHitCount): The query cache hit count.
    • query_cache.miss_count (or qcmc, queryCacheMissCount): The query cache miss count.
    • ram.current (or rc, ramCurrent): The used total memory. For example: 513.4mb.
    • ram.max (or rm, ramMax): The total memory. For example: 2.9gb.
    • ram.percent (or rp, ramPercent): The used percentage of the total operating system memory. This reflects all processes running on the operating system instead of only Elasticsearch and is not guaranteed to correlate to its performance.
    • refresh.total (or rto, refreshTotal): The number of refresh operations.
    • refresh.time (or rti, refreshTime): The time spent in refresh operations. For example: 91ms.
    • request_cache.memory_size (or rcm, requestCacheMemory): The used request cache memory. For example: 0b.
    • request_cache.evictions (or rce, requestCacheEvictions): The number of request cache evictions.
    • request_cache.hit_count (or rchc, requestCacheHitCount): The request cache hit count.
    • request_cache.miss_count (or rcmc, requestCacheMissCount): The request cache miss count.
    • script.compilations (or scrcc, scriptCompilations): The number of total script compilations.
    • script.cache_evictions (or scrce, scriptCacheEvictions): The number of total compiled scripts evicted from cache.
    • search.fetch_current (or sfc, searchFetchCurrent): The number of current fetch phase operations.
    • search.fetch_time (or sfti, searchFetchTime): The time spent in fetch phase. For example: 37ms.
    • search.fetch_total (or sfto, searchFetchTotal): The number of fetch operations.
    • search.open_contexts (or so, searchOpenContexts): The number of open search contexts.
    • search.query_current (or sqc, searchQueryCurrent): The number of current query phase operations.
    • search.query_time (or sqti, searchQueryTime): The time spent in query phase. For example: 43ms.
    • search.query_total (or sqto, searchQueryTotal): The number of query operations.
    • search.scroll_current (or scc, searchScrollCurrent): The number of open scroll contexts.
    • search.scroll_time (or scti, searchScrollTime): The amount of time scroll contexts were held open. For example: 2m.
    • search.scroll_total (or scto, searchScrollTotal): The number of completed scroll contexts.
    • segments.count (or sc, segmentsCount): The number of segments.
    • segments.fixed_bitset_memory (or sfbm, fixedBitsetMemory): The memory used by fixed bit sets for nested object field types and type filters for types referred in join fields. For example: 1.0kb.
    • segments.index_writer_memory (or siwm, segmentsIndexWriterMemory): The memory used by the index writer. For example: 18mb.
    • segments.memory (or sm, segmentsMemory): The memory used by segments. For example: 1.4kb.
    • segments.version_map_memory (or svmm, segmentsVersionMapMemory): The memory used by the version map. For example: 1.0kb.
    • shard_stats.total_count (or sstc, shards, shardStatsTotalCount): The number of shards assigned.
    • suggest.current (or suc, suggestCurrent): The number of current suggest operations.
    • suggest.time (or suti, suggestTime): The time spent in suggest operations.
    • suggest.total (or suto, suggestTotal): The number of suggest operations.
    • uptime (or u): The amount of node uptime. For example: 17.3m.
    • version (or v): The Elasticsearch version. For example: 9.0.0.
  • s string | array[string]

    A comma-separated list of column names or aliases that determines the sort order. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • master_timeout string

    The period to wait for a connection to the master node.

    Values are -1 or 0.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • pid string

      The process identifier.

    • ip string

      The IP address.

    • port string

      The bound transport port.

    • http_address string

      The bound HTTP address.

    • version string
    • flavor string

      The Elasticsearch distribution flavor.

    • type string

      The Elasticsearch distribution type.

    • build string

      The Elasticsearch build hash.

    • jdk string

      The Java version.

    • disk.total number | string

    • disk.used number | string

    • disk.avail number | string

    • disk.used_percent string | number

    • heap.current string

      The used heap.

    • heap.percent string | number

    • heap.max string

      The maximum configured heap.

    • ram.current string

      The used machine memory.

    • ram.percent string | number

    • ram.max string

      The total machine memory.

    • file_desc.current string

      The used file descriptors.

    • file_desc.percent string | number

    • file_desc.max string

      The maximum number of file descriptors.

    • cpu string

      The recent system CPU usage as a percentage.

    • load_1m string

      The load average for the most recent minute.

    • load_5m string

      The load average for the last five minutes.

    • load_15m string

      The load average for the last fifteen minutes.

    • uptime string

      The node uptime.

    • node.role string

      The roles of the node. Returned values include c (cold node), d (data node), f (frozen node), h (hot node), i (ingest node), l (machine learning node), m (master-eligible node), r (remote cluster client node), s (content node), t (transform node), v (voting-only node), w (warm node), and - (coordinating node only).

    • master string

      Indicates whether the node is the elected master node. Returned values include * (elected master) and - (not elected master).

    • name string
    • completion.size string

      The size of completion.

    • fielddata.memory_size string

      The used fielddata cache.

    • fielddata.evictions string

      The fielddata evictions.

    • query_cache.memory_size string

      The used query cache.

    • query_cache.evictions string

      The query cache evictions.

    • query_cache.hit_count string

      The query cache hit counts.

    • query_cache.miss_count string

      The query cache miss counts.

    • request_cache.memory_size string

      The used request cache.

    • request_cache.evictions string

      The request cache evictions.

    • request_cache.hit_count string

      The request cache hit counts.

    • request_cache.miss_count string

      The request cache miss counts.

    • flush.total string

      The number of flushes.

    • flush.total_time string

      The time spent in flush.

    • get.current string

      The number of current get ops.

    • get.time string

      The time spent in get.

    • get.total string

      The number of get ops.

    • get.exists_time string

      The time spent in successful gets.

    • get.exists_total string

      The number of successful get operations.

    • get.missing_time string

      The time spent in failed gets.

    • get.missing_total string

      The number of failed gets.

    • indexing.delete_current string

      The number of current deletions.

    • indexing.delete_time string

      The time spent in deletions.

    • indexing.delete_total string

      The number of delete operations.

    • indexing.index_current string

      The number of current indexing operations.

    • indexing.index_time string

      The time spent in indexing.

    • indexing.index_total string

      The number of indexing operations.

    • indexing.index_failed string

      The number of failed indexing operations.

    • merges.current string

      The number of current merges.

    • merges.current_docs string

      The number of current merging docs.

    • merges.current_size string

      The size of current merges.

    • merges.total string

      The number of completed merge operations.

    • merges.total_docs string

      The docs merged.

    • merges.total_size string

      The size merged.

    • merges.total_time string

      The time spent in merges.

    • refresh.total string

      The total refreshes.

    • refresh.time string

      The time spent in refreshes.

    • refresh.external_total string

      The total external refreshes.

    • refresh.external_time string

      The time spent in external refreshes.

    • refresh.listeners string

      The number of pending refresh listeners.

    • script.compilations string

      The total script compilations.

    • script.cache_evictions string

      The total compiled scripts evicted from the cache.

    • script.compilation_limit_triggered string

      The script cache compilation limit triggered.

    • search.fetch_current string

      The current fetch phase operations.

    • search.fetch_time string

      The time spent in fetch phase.

    • search.fetch_total string

      The total fetch operations.

    • search.open_contexts string

      The open search contexts.

    • search.query_current string

      The current query phase operations.

    • search.query_time string

      The time spent in query phase.

    • search.query_total string

      The total query phase operations.

    • search.scroll_current string

      The open scroll contexts.

    • search.scroll_time string

      The time scroll contexts held open.

    • search.scroll_total string

      The completed scroll contexts.

    • segments.count string

      The number of segments.

    • segments.memory string

      The memory used by segments.

    • segments.index_writer_memory string

      The memory used by the index writer.

    • segments.version_map_memory string

      The memory used by the version map.

    • segments.fixed_bitset_memory string

      The memory used by fixed bit sets for nested object field types and export type filters for types referred in _parent fields.

    • suggest.current string

      The number of current suggest operations.

    • suggest.time string

      The time spent in suggest.

    • suggest.total string

      The number of suggest operations.

    • bulk.total_operations string

      The number of bulk shard operations.

    • bulk.total_time string

      The time spent in shard bulk.

    • bulk.total_size_in_bytes string

      The total size in bytes of shard bulk.

    • bulk.avg_time string

      The average time spent in shard bulk.

    • bulk.avg_size_in_bytes string

      The average size in bytes of shard bulk.

GET /_cat/nodes?v=true&h=id,ip,port,v,m&format=json
resp = client.cat.nodes(
    v=True,
    h="id,ip,port,v,m",
    format="json",
)
const response = await client.cat.nodes({
  v: "true",
  h: "id,ip,port,v,m",
  format: "json",
});
response = client.cat.nodes(
  v: "true",
  h: "id,ip,port,v,m",
  format: "json"
)
$resp = $client->cat()->nodes([
    "v" => "true",
    "h" => "id,ip,port,v,m",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/nodes?v=true&h=id,ip,port,v,m&format=json"
client.cat().nodes();
Response examples (200)
A successful response from `GET /_cat/nodes?v=true&format=json`. The `ip`, `heap.percent`, `ram.percent`, `cpu`, and `load_*` columns provide the IP addresses and performance information of each node. The `node.role`, `master`, and `name` columns provide information useful for monitoring an entire cluster, particularly large ones.
[
  {
    "ip": "127.0.0.1",
    "heap.percent": "65",
    "ram.percent": "99",
    "cpu": "42",
    "load_1m": "3.07",
    "load_5m": null,
    "load_15m": null,
    "node.role": "cdfhilmrstw",
    "master": "*",
    "name": "mJw06l1"
  }
]
A successful response from `GET /_cat/nodes?v=true&h=id,ip,port,v,m&format=json`. It returns the `id`, `ip`, `port`, `v` (version), and `m` (master) columns.
[
  {
    "id": "veJR",
    "ip": "127.0.0.1",
    "port": "59938",
    "v": "9.0.0",
    "m": "*"
  }
]
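
As a further illustration of the `h` and `s` parameters, the sketch below requests a resource-oriented view of each node using columns from the list above (heap, RAM, CPU, disk, and role), sorted by node name. It is a sketch only, assuming the same Python client as the snippets above; the 80% heap threshold is purely illustrative.

# Sketch: capacity-oriented node overview built from the columns documented above.
resp = client.cat.nodes(
    h="name,node.role,master,heap.percent,ram.percent,cpu,disk.avail,uptime",
    s="name",  # sort ascending by node name
    v=True,
    format="json",
)
for node in resp:
    heap = node["heap.percent"]
    flag = " <-- high heap" if heap is not None and int(heap) >= 80 else ""
    print(node["name"], node["node.role"], f"heap={heap}%{flag}", node["disk.avail"])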

Get pending task information Generally available

GET /_cat/pending_tasks

Get information about cluster-level changes that have not yet taken effect. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the pending cluster tasks API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • insertOrder string

      The task insertion order.

    • timeInQueue string

      Indicates how long the task has been in queue.

    • priority string

      The task priority.

    • source string

      The task source.

GET /_cat/pending_tasks?v=true&h=insertOrder,timeInQueue,priority,source&format=json
resp = client.cat.pending_tasks(
    v=True,
    h="insertOrder,timeInQueue,priority,source",
    format="json",
)
const response = await client.cat.pendingTasks({
  v: "true",
  h: "insertOrder,timeInQueue,priority,source",
  format: "json",
});
response = client.cat.pending_tasks(
  v: "true",
  h: "insertOrder,timeInQueue,priority,source",
  format: "json"
)
$resp = $client->cat()->pendingTasks([
    "v" => "true",
    "h" => "insertOrder,timeInQueue,priority,source",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/pending_tasks?v=true&h=insertOrder,timeInQueue,priority,source&format=json"
client.cat().pendingTasks();
Response examples (200)
A successful response from `GET /_cat/pending_tasks?v=true&h=insertOrder,timeInQueue,priority,source&format=json`.
[
  { "insertOrder": "1685", "timeInQueue": "855ms", "priority": "HIGH", "source": "update-mapping [foo][t]" },
  { "insertOrder": "1686", "timeInQueue": "843ms", "priority": "HIGH", "source": "update-mapping [foo][t]" },
  { "insertOrder": "1693", "timeInQueue": "753ms", "priority": "HIGH", "source": "refresh-mapping [foo][[t]]" },
  { "insertOrder": "1688", "timeInQueue": "816ms", "priority": "HIGH", "source": "update-mapping [foo][t]" },
  { "insertOrder": "1689", "timeInQueue": "802ms", "priority": "HIGH", "source": "update-mapping [foo][t]" },
  { "insertOrder": "1690", "timeInQueue": "787ms", "priority": "HIGH", "source": "update-mapping [foo][t]" },
  { "insertOrder": "1691", "timeInQueue": "773ms", "priority": "HIGH", "source": "update-mapping [foo][t]" }
]
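
The `s` and `time` parameters documented above make the queue easier to scan when many tasks are pending. A minimal sketch, assuming the same Python client as the examples above, that sorts by time in queue (longest first) and displays durations in milliseconds:

# Sketch: show pending cluster-state tasks, longest-queued first.
resp = client.cat.pending_tasks(
    s="timeInQueue:desc",  # the :desc suffix is documented for the s parameter
    time="ms",             # display time values in milliseconds
    v=True,
    format="json",
)
for task in resp:
    print(task["insertOrder"], task["priority"], task["timeInQueue"], task["source"])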

Get plugin information Generally available

GET /_cat/plugins

Get a list of plugins running on each node of a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • include_bootstrap boolean

    Include bootstrap plugins in the response.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • name string
    • component string

      The component name.

    • version string
    • description string

      The plugin details.

    • type string

      The plugin type.

GET /_cat/plugins?v=true&s=component&h=name,component,version,description&format=json
resp = client.cat.plugins(
    v=True,
    s="component",
    h="name,component,version,description",
    format="json",
)
const response = await client.cat.plugins({
  v: "true",
  s: "component",
  h: "name,component,version,description",
  format: "json",
});
response = client.cat.plugins(
  v: "true",
  s: "component",
  h: "name,component,version,description",
  format: "json"
)
$resp = $client->cat()->plugins([
    "v" => "true",
    "s" => "component",
    "h" => "name,component,version,description",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/plugins?v=true&s=component&h=name,component,version,description&format=json"
client.cat().plugins();
Response examples (200)
A successful response from `GET /_cat/plugins?v=true&s=component&h=name,component,version,description&format=json`.
[
  { "name": "U7321H6", "component": "analysis-icu", "version": "8.17.0", "description": "The ICU Analysis plugin integrates the Lucene ICU module into Elasticsearch, adding ICU-related analysis components."},
  {"name": "U7321H6", "component": "analysis-kuromoji",   "verison":  "8.17.0", description: "The Japanese (kuromoji) Analysis plugin integrates Lucene kuromoji analysis module into elasticsearch."},
  {"name" "U7321H6", "component": "analysis-nori", "version":         "8.17.0", "description": "The Korean (nori) Analysis plugin integrates Lucene nori analysis module into elasticsearch."},
  {"name": "U7321H6", "component": "analysis-phonetic",   "verison":  "8.17.0", "description": "The Phonetic Analysis plugin integrates phonetic token filter analysis with elasticsearch."},
  {"name": "U7321H6", "component": "analysis-smartcn",   "verison":  "8.17.0", "description": "Smart Chinese Analysis plugin integrates Lucene Smart Chinese analysis module into elasticsearch."},
  {"name": "U7321H6", "component": "analysis-stempel",   "verison":  "8.17.0", "description": "The Stempel (Polish) Analysis plugin integrates Lucene stempel (polish) analysis module into elasticsearch."},
  {"name": "U7321H6", "component": "analysis-ukrainian",   "verison":  "8.17.0", "description": "The Ukrainian Analysis plugin integrates the Lucene UkrainianMorfologikAnalyzer into elasticsearch."},
  {"name": "U7321H6", "component": "discovery-azure-classic",   "verison":  "8.17.0", "description": "The Azure Classic Discovery plugin allows to use Azure Classic API for the unicast discovery mechanism"},
  {"name": "U7321H6", "component": "discovery-ec2",   "verison":  "8.17.0", "description": "The EC2 discovery plugin allows to use AWS API for the unicast discovery mechanism."},
  {"name": "U7321H6", "component": "discovery-gce",   "verison":  "8.17.0", "description": "The Google Compute Engine (GCE) Discovery plugin allows to use GCE API for the unicast discovery mechanism."},
  {"name": "U7321H6", "component": "mapper-annotated-text",   "verison":  "8.17.0", "description": "The Mapper Annotated_text plugin adds support for text fields with markup used to inject annotation tokens into the index."},
  {"name": "U7321H6", "component": "mapper-murmur3",   "verison":  "8.17.0", "description": "The Mapper Murmur3 plugin allows to compute hashes of a field's values at index-time and to store them in the index."},
  {"name": "U7321H6", "component": "mapper-size",   "verison":  "8.17.0", "description": "The Mapper Size plugin allows document to record their uncompressed size at index time."},
  {"name": "U7321H6", "component": "store-smb",   "verison":  "8.17.0", "description": "The Store SMB plugin adds support for SMB stores."}
]
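
To also list plugins that were loaded during bootstrap, set the `include_bootstrap` query parameter documented above. The sketch below assumes the Python client used in the examples passes that parameter through; if your client version does not, append `include_bootstrap=true` to the request URL instead.

# Sketch: include bootstrap plugins in the listing (include_bootstrap is documented above).
resp = client.cat.plugins(
    v=True,
    s="component",
    h="name,component,version,description",
    include_bootstrap=True,
    format="json",
)
for plugin in resp:
    print(plugin["component"], plugin["version"])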

Get shard recovery information Generally available

GET /_cat/recovery/{index}

All methods and paths for this operation:

GET /_cat/recovery

GET /_cat/recovery/{index}

Get information about ongoing and completed shard recoveries. Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or syncing a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing. For data streams, the API returns information about the stream’s backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index recovery API.

Required authorization

  • Index privileges: monitor
  • Cluster privileges: monitor

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • active_only boolean

    If true, the response only includes ongoing shard recoveries.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • detailed boolean

    If true, the response includes detailed information about shard recoveries.

  • index string | array[string]

    Comma-separated list or wildcard expression of index names to limit the returned information.

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • index (or i, idx): The name of the index.
    • shard (or s, sh): The name of the shard.
    • time (or t, ti, primaryOrReplica): The recovery time elapsed.
    • type: The type of recovery, from a peer or a snapshot.
    • stage (or st): The stage of the recovery. Returned values are: INIT, INDEX (recovery of Lucene files, either reusing local ones or copying new ones), VERIFY_INDEX (potentially running check index), TRANSLOG (starting up the engine and replaying the translog), FINALIZE (performing final tasks after all translog operations have been replayed), and DONE.
    • source_host (or shost): The host address the index is moving from.
    • source_node (or snode): The node name the index is moving from.
    • target_host (or thost): The host address the index is moving to.
    • target_node (or tnode): The node name the index is moving to.
    • repository (or rep): The name of the repository being used. If not relevant, 'n/a'.
    • snapshot (or snap): The name of the snapshot being used. If not relevant, 'n/a'.
    • files (or f): The total number of files to recover.
    • files_recovered (or fr): The number of files currently recovered.
    • files_percent (or fp): The percentage of files currently recovered.
    • files_total (or tf): The total number of files.
    • bytes (or b): The total number of bytes to recover.
    • bytes_recovered (or br): Total number of bytes currently recovered.
    • bytes_percent (or bp): The percentage of bytes currently recovered.
    • bytes_total (or tb): The total number of bytes.
    • translog_ops (or to): The total number of translog ops to recover.
    • translog_ops_recovered (or tor): The total number of translog ops currently recovered.
    • translog_ops_percent (or top): The percentage of translog ops currently recovered.
    • start_time (or start): The start time of the recovery operation.
    • start_time_millis (or start_millis): The start time of the recovery operation in epoch milliseconds.
    • stop_time (or stop): The end time of the recovery operation. If ongoing, '1970-01-01T00:00:00.000Z'.
    • stop_time_millis (or stop_millis): The end time of the recovery operation in epoch milliseconds. If ongoing, '0'.

    Values are index, i, idx, shard, s, sh, time, t, ti, primaryOrReplica, type, stage, st, source_host, shost, source_node, snode, target_host, thost, target_node, tnode, repository, snapshot, snap, files, f, files_recovered, fr, files_percent, fp, files_total, tf, bytes, b, bytes_recovered, br, bytes_percent, bp, bytes_total, tb, translog_ops, to, translog_ops_recovered, tor, translog_ops_percent, top, start_time, start, start_time_millis, start_millis, stop_time, stop, stop_time_millis, or stop_millis.

  • s string | array[string]

    A comma-separated list of column names or aliases that determines the sort order. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • index string
    • shard string

      The shard name.

    • start_time string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • start_time_millis number

      Time unit for milliseconds

    • stop_time string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • stop_time_millis number

      Time unit for milliseconds

    • time string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • type string

      The recovery type.

    • stage string

      The recovery stage.

    • source_host string

      The source host.

    • source_node string

      The source node name.

    • target_host string

      The target host.

    • target_node string

      The target node name.

    • repository string

      The repository name.

    • snapshot string

      The snapshot name.

    • files string

      The number of files to recover.

    • files_recovered string

      The files recovered.

    • files_percent string | number

    • files_total string

      The total number of files.

    • bytes string

      The number of bytes to recover.

    • bytes_recovered string

      The bytes recovered.

    • bytes_percent string | number

    • bytes_total string

      The total number of bytes.

    • translog_ops string

      The number of translog operations to recover.

    • translog_ops_recovered string

      The translog operations recovered.

    • translog_ops_percent string | number

GET _cat/recovery?v=true&format=json
resp = client.cat.recovery(
    v=True,
    format="json",
)
const response = await client.cat.recovery({
  v: "true",
  format: "json",
});
response = client.cat.recovery(
  v: "true",
  format: "json"
)
$resp = $client->cat()->recovery([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/recovery?v=true&format=json"
client.cat().recovery();
Response examples (200)
A successful response from `GET _cat/recovery?v=true&format=json`. In this example, the source and target nodes are the same because the recovery type is `store`, meaning the shard was recovered from local storage on node start.
[
  {
    "index": "my-index-000001 ",
    "shard": "0",
    "time": "13ms",
    "type": "store",
    "stage": "done",
    "source_host": "n/a",
    "source_node": "n/a",
    "target_host": "127.0.0.1",
    "target_node": "node-0",
    "repository": "n/a",
    "snapshot": "n/a",
    "files": "0",
    "files_recovered": "0",
    "files_percent": "100.0%",
    "files_total": "13",
    "bytes": "0b",
    "bytes_recovered": "0b",
    "bytes_percent": "100.0%",
    "bytes_total": "9928b",
    "translog_ops": "0",
    "translog_ops_recovered": "0",
    "translog_ops_percent": "100.0%"
  }
]
A successful response from `GET _cat/recovery?v=true&h=i,s,t,ty,st,shost,thost,f,fp,b,bp&format=json`. You can retrieve information about an ongoing recovery, for example, when you increase the replica count of an index and bring another node online to host the replicas. In this example, the recovery type is `peer`, meaning the shard recovered from another node. The `files` and `bytes` are real-time measurements.
[
  {
    "i": "my-index-000001",
    "s": "0",
    "t": "1252ms",
    "ty": "peer",
    "st": "done",
    "shost": "192.168.1.1",
    "thost": "192.168.1.1",
    "f": "0",
    "fp": "100.0%",
    "b": "0b",
    "bp": "100.0%",
  }
]
A successful response from `GET _cat/recovery?v=true&h=i,s,t,ty,st,rep,snap,f,fp,b,bp&format=json`. You can restore backups of an index using the snapshot and restore API. You can use the cat recovery API to get information about a snapshot recovery.
[
  {
    "i": "my-index-000001",
    "s": "0",
    "t": "1978ms",
    "ty": "snapshot",
    "st": "done",
    "rep": "my-repo",
    "snap": "snap-1",
    "f": "79",
    "fp": "8.0%",
    "b": "12086",
    "bp": "9.0%"
  }
]
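
To watch only recoveries that are still running, combine the `active_only` flag with the compact column aliases used in the second example above. A minimal sketch, assuming the same Python client as the examples above:

# Sketch: list only ongoing shard recoveries with the compact column set shown above.
resp = client.cat.recovery(
    active_only=True,                       # documented above: restrict output to ongoing recoveries
    h="i,s,t,ty,st,shost,thost,f,fp,b,bp",  # aliases taken from the h list above
    v=True,
    format="json",
)
rows = list(resp)
if not rows:
    print("no shard recoveries currently in progress")
for rec in rows:
    print(rec["i"], rec["s"], rec["st"], rec["fp"], rec["bp"])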

Get snapshot repository information Generally available; Added in 2.1.0

GET /_cat/repositories

Get a list of snapshot repositories for a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot repository API.

Required authorization

  • Cluster privileges: monitor_snapshot

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string

      The unique repository identifier.

    • type string

      The repository type.

GET /_cat/repositories?v=true&format=json
resp = client.cat.repositories(
    v=True,
    format="json",
)
const response = await client.cat.repositories({
  v: "true",
  format: "json",
});
response = client.cat.repositories(
  v: "true",
  format: "json"
)
$resp = $client->cat()->repositories([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/repositories?v=true&format=json"
client.cat().repositories();
Response examples (200)
A successful response from `GET /_cat/repositories?v=true&format=json`.
[
  {
    "id": "repo1",
    "type": "fs"
  },
  {
    "id": "repo2",
    "type": "s3"
  }
]
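
Because the body is a flat list of `id`/`type` pairs when `format=json`, it is easy to post-process. A small sketch, assuming the same Python client as the example above, that counts repositories by type:

from collections import Counter

# Sketch: count snapshot repositories by type (fs, s3, ...) from the cat output above.
resp = client.cat.repositories(v=True, format="json")
by_type = Counter(repo["type"] for repo in resp)
for repo_type, count in by_type.items():
    print(f"{repo_type}: {count} repositories")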

Get segment information Generally available

GET /_cat/segments/{index}

All methods and paths for this operation:

GET /_cat/segments

GET /_cat/segments/{index}

Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index segments API.

Required authorization

  • Index privileges: monitor
  • Cluster privileges: monitor

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • index (or i, idx): The name of the index.
    • shard (or s, sh): The name of the shard.
    • prirep (or p, pr, primaryOrReplica): The shard type. Returned values are 'primary' or 'replica'.
    • ip: IP address of the segment’s shard, such as '127.0.1.1'.
    • segment: The name of the segment, such as '_0'. The segment name is derived from the segment generation and used internally to create file names in the directory of the shard.
    • generation: Generation number, such as '0'. Elasticsearch increments this generation number for each segment written. Elasticsearch then uses this number to derive the segment name.
    • docs.count: The number of documents as reported by Lucene. This excludes deleted documents and counts any nested documents separately from their parents. It also excludes documents which were indexed recently and do not yet belong to a segment.
    • docs.deleted: The number of deleted documents as reported by Lucene, which may be higher or lower than the number of delete operations you have performed. This number excludes deletes that were performed recently and do not yet belong to a segment. Deleted documents are cleaned up by the automatic merge process if it makes sense to do so. Also, Elasticsearch creates extra deleted documents to internally track the recent history of operations on a shard.
    • size: The disk space used by the segment, such as '50kb'.
    • size.memory: The bytes of segment data stored in memory for efficient search, such as '1264'. A value of '-1' indicates Elasticsearch was unable to compute this number.
    • committed: If 'true', the segment is synced to disk. Segments that are synced can survive a hard reboot. If 'false', the data from uncommitted segments is also stored in the transaction log so that Elasticsearch is able to replay changes on the next start.
    • searchable: If 'true', the segment is searchable. If 'false', the segment has most likely been written to disk but needs a refresh to be searchable.
    • version: The version of Lucene used to write the segment.
    • compound: If 'true', the segment is stored in a compound file. This means Lucene merged all files from the segment in a single file to save file descriptors.
    • id: The ID of the node, such as 'k0zy'.

    Values are index, i, idx, shard, s, sh, prirep, p, pr, primaryOrReplica, ip, segment, generation, docs.count, docs.deleted, size, size.memory, committed, searchable, version, compound, or id.

  • s string | array[string]

    A comma-separated list of column names or aliases that determines the sort order. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • index string
    • shard string

      The shard name.

    • prirep string

      The shard type: primary or replica.

    • ip string

      The IP address of the node where it lives.

    • id string
    • segment string

      The segment name, which is derived from the segment generation and used internally to create file names in the directory of the shard.

    • generation string

      The segment generation number. Elasticsearch increments this generation number for each segment written then uses this number to derive the segment name.

    • docs.count string

      The number of documents in the segment. This excludes deleted documents and counts any nested documents separately from their parents. It also excludes documents which were indexed recently and do not yet belong to a segment.

    • docs.deleted string

      The number of deleted documents in the segment, which might be higher or lower than the number of delete operations you have performed. This number excludes deletes that were performed recently and do not yet belong to a segment. Deleted documents are cleaned up by the automatic merge process if it makes sense to do so. Also, Elasticsearch creates extra deleted documents to internally track the recent history of operations on a shard.

    • size number | string

    • size.memory number | string

    • committed string

      If true, the segment is synced to disk. Segments that are synced can survive a hard reboot. If false, the data from uncommitted segments is also stored in the transaction log so that Elasticsearch is able to replay changes on the next start.

    • searchable string

      If true, the segment is searchable. If false, the segment has most likely been written to disk but needs a refresh to be searchable.

    • version string
    • compound string

      If true, the segment is stored in a compound file. This means Lucene merged all files from the segment in a single file to save file descriptors.

GET /_cat/segments?v=true&format=json
resp = client.cat.segments(
    v=True,
    format="json",
)
const response = await client.cat.segments({
  v: "true",
  format: "json",
});
response = client.cat.segments(
  v: "true",
  format: "json"
)
$resp = $client->cat()->segments([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/segments?v=true&format=json"
client.cat().segments();
Response examples (200)
A successful response from `GET /_cat/segments?v=true&format=json`.
[
  {
    "index": "test",
    "shard": "0",
    "prirep": "p",
    "ip": "127.0.0.1",
    "segment": "_0",
    "generation": "0",
    "docs.count": "1",
    "docs.deleted": "0",
    "size": "3kb",
    "size.memory": "0",
    "committed": "false",
    "searchable": "true",
    "version": "9.12.0",
    "compound": "true"
  },
  {
    "index": "test1",
    "shard": "0",
    "prirep": "p",
    "ip": "127.0.0.1",
    "segment": "_0",
    "generation": "0",
    "docs.count": "1",
    "docs.deleted": "0",
    "size": "3kb",
    "size.memory": "0",
    "committed": "false",
    "searchable": "true",
    "version": "9.12.0",
    "compound": "true"
  }
]
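
To inspect a single index (or a data stream's backing index) rather than the whole cluster, pass the index name in the path and narrow the columns; the `bytes` parameter normalizes size units. A minimal sketch, assuming the same Python client as the examples above; `my-index-000001` is a placeholder index name:

# Sketch: segment overview for one index, with sizes reported in kilobytes.
# "my-index-000001" is a hypothetical index name; replace it with your own.
resp = client.cat.segments(
    index="my-index-000001",
    h="index,shard,prirep,segment,generation,docs.count,size,committed,searchable",
    bytes="kb",  # documented byte-unit parameter
    v=True,
    format="json",
)
for seg in resp:
    state = "committed" if seg["committed"] == "true" else "uncommitted"
    print(seg["index"], seg["shard"], seg["segment"], seg["size"], state)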

Get shard information Generally available

GET /_cat/shards/{index}

All methods and paths for this operation:

GET /_cat/shards

GET /_cat/shards/{index}

Get information about the shards in a cluster. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.

Required authorization

  • Index privileges: monitor
  • Cluster privileges: monitor

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

    Supported values include:

    • completion.size (or cs, completionSize): Size of completion. For example: 0b.
    • dataset.size: Disk space used by the shard’s dataset, which may or may not be the size on disk, but includes space used by the shard on object storage. Reported as a size value for example: 5kb.
    • dense_vector.value_count (or dvc, denseVectorCount): Number of indexed dense vectors.
    • docs (or d, dc): Number of documents in shard, for example: 25.
    • fielddata.evictions (or fe, fielddataEvictions): Fielddata cache evictions, for example: 0.
    • fielddata.memory_size (or fm, fielddataMemory): Used fielddata cache memory, for example: 0b.
    • flush.total (or ft, flushTotal): Number of flushes, for example: 1.
    • flush.total_time (or ftt, flushTotalTime): Time spent in flush, for example: 1.
    • get.current (or gc, getCurrent): Number of current get operations, for example: 0.
    • get.exists_time (or geti, getExistsTime): Time spent in successful gets, for example: 14ms.
    • get.exists_total (or geto, getExistsTotal): Number of successful get operations, for example: 2.
    • get.missing_time (or gmti, getMissingTime): Time spent in failed gets, for example: 0s.
    • get.missing_total (or gmto, getMissingTotal): Number of failed get operations, for example: 1.
    • get.time (or gti, getTime): Time spent in get, for example: 14ms.
    • get.total (or gto, getTotal): Number of get operations, for example: 2.
    • id: ID of the node, for example: k0zy.
    • index (or i, idx): Name of the index.
    • indexing.delete_current (or idc, indexingDeleteCurrent): Number of current deletion operations, for example: 0.
    • indexing.delete_time (or idti, indexingDeleteTime): Time spent in deletions, for example: 2ms.
    • indexing.delete_total (or idto, indexingDeleteTotal): Number of deletion operations, for example: 2.
    • indexing.index_current (or iic, indexingIndexCurrent): Number of current indexing operations, for example: 0.
    • indexing.index_failed_due_to_version_conflict (or iifvc, indexingIndexFailedDueToVersionConflict): Number of failed indexing operations due to version conflict, for example: 0.
    • indexing.index_failed (or iif, indexingIndexFailed): Number of failed indexing operations, for example: 0.
    • indexing.index_time (or iiti, indexingIndexTime): Time spent in indexing, for example: 134ms.
    • indexing.index_total (or iito, indexingIndexTotal): Number of indexing operations, for example: 1.
    • ip: IP address of the node, for example: 127.0.1.1.
    • merges.current (or mc, mergesCurrent): Number of current merge operations, for example: 0.
    • merges.current_docs (or mcd, mergesCurrentDocs): Number of current merging documents, for example: 0.
    • merges.current_size (or mcs, mergesCurrentSize): Size of current merges, for example: 0b.
    • merges.total (or mt, mergesTotal): Number of completed merge operations, for example: 0.
    • merges.total_docs (or mtd, mergesTotalDocs): Number of merged documents, for example: 0.
    • merges.total_size (or mts, mergesTotalSize): Total size of merges, for example: 0b.
    • merges.total_time (or mtt, mergesTotalTime): Time spent merging documents, for example: 0s.
    • node (or n): Node name, for example: I8hydUG.
    • prirep (or p, pr, primaryOrReplica): Shard type. Returned values are primary or replica.
    • query_cache.evictions (or qce, queryCacheEvictions): Query cache evictions, for example: 0.
    • query_cache.memory_size (or qcm, queryCacheMemory): Used query cache memory, for example: 0b.
    • recoverysource.type (or rs): Type of recovery source.
    • refresh.time (or rti, refreshTime): Time spent in refreshes, for example: 91ms.
    • refresh.total (or rto, refreshTotal): Number of refreshes, for example: 16.
    • search.fetch_current (or sfc, searchFetchCurrent): Current fetch phase operations, for example: 0.
    • search.fetch_time (or sfti, searchFetchTime): Time spent in fetch phase, for example: 37ms.
    • search.fetch_total (or sfto, searchFetchTotal): Number of fetch operations, for example: 7.
    • search.open_contexts (or so, searchOpenContexts): Open search contexts, for example: 0.
    • search.query_current (or sqc, searchQueryCurrent): Current query phase operations, for example: 0.
    • search.query_time (or sqti, searchQueryTime): Time spent in query phase, for example: 43ms.
    • search.query_total (or sqto, searchQueryTotal): Number of query operations, for example: 9.
    • search.scroll_current (or scc, searchScrollCurrent): Open scroll contexts, for example: 2.
    • search.scroll_time (or scti, searchScrollTime): Time scroll contexts held open, for example: 2m.
    • search.scroll_total (or scto, searchScrollTotal): Completed scroll contexts, for example: 1.
    • segments.count (or sc, segmentsCount): Number of segments, for example: 4.
    • segments.fixed_bitset_memory (or sfbm, fixedBitsetMemory): Memory used by fixed bit sets for nested object field types and type filters for types referred in join fields, for example: 1.0kb.
    • segments.index_writer_memory (or siwm, segmentsIndexWriterMemory): Memory used by index writer, for example: 18mb.
    • segments.memory (or sm, segmentsMemory): Memory used by segments, for example: 1.4kb.
    • segments.version_map_memory (or svmm, segmentsVersionMapMemory): Memory used by version map, for example: 1.0kb.
    • seq_no.global_checkpoint (or sqg, globalCheckpoint): Global checkpoint.
    • seq_no.local_checkpoint (or sql, localCheckpoint): Local checkpoint.
    • seq_no.max (or sqm, maxSeqNo): Maximum sequence number.
    • shard (or s, sh): Name of the shard.
    • sparse_vector.value_count (or svc, sparseVectorCount): Number of indexed sparse vectors.
    • state (or st): State of the shard. Returned values are:
      • INITIALIZING: The shard is recovering from a peer shard or gateway.
      • RELOCATING: The shard is relocating.
      • STARTED: The shard has started.
      • UNASSIGNED: The shard is not assigned to any node.
    • store (or sto): Disk space used by the shard, for example: 5kb.
    • suggest.current (or suc, suggestCurrent): Number of current suggest operations, for example: 0.
    • suggest.time (or suti, suggestTime): Time spent in suggest, for example: 0.
    • suggest.total (or suto, suggestTotal): Number of suggest operations, for example: 0.
    • sync_id: Sync ID of the shard.
    • unassigned.at (or ua): Time at which the shard became unassigned in Coordinated Universal Time (UTC).
    • unassigned.details (or ud): Details about why the shard became unassigned. This does not explain why the shard is currently unassigned. To understand why a shard is not assigned, use the Cluster allocation explain API.
    • unassigned.for (or uf): Time at which the shard was requested to be unassigned in Coordinated Universal Time (UTC).
    • unassigned.reason (or ur): Indicates the reason for the last change to the state of this unassigned shard. This does not explain why the shard is currently unassigned. To understand why a shard is not assigned, use the Cluster allocation explain API. Returned values include:

      • ALLOCATION_FAILED: Unassigned as a result of a failed allocation of the shard.
      • CLUSTER_RECOVERED: Unassigned as a result of a full cluster recovery.
      • DANGLING_INDEX_IMPORTED: Unassigned as a result of importing a dangling index.
      • EXISTING_INDEX_RESTORED: Unassigned as a result of restoring into a closed index.
      • FORCED_EMPTY_PRIMARY: The shard’s allocation was last modified by forcing an empty primary using the Cluster reroute API.
      • INDEX_CLOSED: Unassigned because the index was closed.
      • INDEX_CREATED: Unassigned as a result of an API creation of an index.
      • INDEX_REOPENED: Unassigned as a result of opening a closed index.
      • MANUAL_ALLOCATION: The shard’s allocation was last modified by the Cluster reroute API.
      • NEW_INDEX_RESTORED: Unassigned as a result of restoring into a new index.
      • NODE_LEFT: Unassigned as a result of the node hosting it leaving the cluster.
      • NODE_RESTARTING: Similar to NODE_LEFT, except that the node was registered as restarting using the Node shutdown API.
      • PRIMARY_FAILED: The shard was initializing as a replica, but the primary shard failed before the initialization completed.
      • REALLOCATED_REPLICA: A better replica location is identified and causes the existing replica allocation to be cancelled.
      • REINITIALIZED: When a shard moves from started back to initializing.
      • REPLICA_ADDED: Unassigned as a result of explicit addition of a replica.
      • REROUTE_CANCELLED: Unassigned as a result of explicit cancel reroute command.
  • s string | array[string]

    A comma-separated list of column names or aliases that determines the sort order. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • master_timeout string

    The period to wait for a connection to the master node.

    Values are -1 or 0.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • index string

      The index name.

    • shard string

      The shard name.

    • prirep string

      The shard type: primary or replica.

    • state string

      The shard state. Returned values include: INITIALIZING: The shard is recovering from a peer shard or gateway. RELOCATING: The shard is relocating. STARTED: The shard has started. UNASSIGNED: The shard is not assigned to any node.

    • docs string | null

      The number of documents in the shard.

    • store string | null

      The disk space used by the shard.

    • dataset string | null

      The total size of the dataset (including the cache for partially mounted indices).

    • ip string | null

      The IP address of the node.

    • id string

      The unique identifier for the node.

    • node string | null

      The name of the node.

    • sync_id string

      The sync identifier.

    • unassigned.reason string

      The reason for the last change to the state of an unassigned shard. It does not explain why the shard is currently unassigned; use the cluster allocation explain API for that information. Returned values include: ALLOCATION_FAILED: Unassigned as a result of a failed allocation of the shard. CLUSTER_RECOVERED: Unassigned as a result of a full cluster recovery. DANGLING_INDEX_IMPORTED: Unassigned as a result of importing a dangling index. EXISTING_INDEX_RESTORED: Unassigned as a result of restoring into a closed index. FORCED_EMPTY_PRIMARY: The shard’s allocation was last modified by forcing an empty primary using the cluster reroute API. INDEX_CLOSED: Unassigned because the index was closed. INDEX_CREATED: Unassigned as a result of an API creation of an index. INDEX_REOPENED: Unassigned as a result of opening a closed index. MANUAL_ALLOCATION: The shard’s allocation was last modified by the cluster reroute API. NEW_INDEX_RESTORED: Unassigned as a result of restoring into a new index. NODE_LEFT: Unassigned as a result of the node hosting it leaving the cluster. NODE_RESTARTING: Similar to NODE_LEFT, except that the node was registered as restarting using the node shutdown API. PRIMARY_FAILED: The shard was initializing as a replica, but the primary shard failed before the initialization completed. REALLOCATED_REPLICA: A better replica location is identified and causes the existing replica allocation to be cancelled. REINITIALIZED: When a shard moves from started back to initializing. REPLICA_ADDED: Unassigned as a result of explicit addition of a replica. REROUTE_CANCELLED: Unassigned as a result of explicit cancel reroute command.

    • unassigned.at string

      The time at which the shard became unassigned in Coordinated Universal Time (UTC).

    • unassigned.for string

      The time at which the shard was requested to be unassigned in Coordinated Universal Time (UTC).

    • unassigned.details string

      Additional details as to why the shard became unassigned. It does not explain why the shard is not assigned; use the cluster allocation explain API for that information.

    • recoverysource.type string

      The type of recovery source.

    • completion.size string

      The size of completion.

    • fielddata.memory_size string

      The used fielddata cache memory.

    • fielddata.evictions string

      The fielddata cache evictions.

    • query_cache.memory_size string

      The used query cache memory.

    • query_cache.evictions string

      The query cache evictions.

    • flush.total string

      The number of flushes.

    • flush.total_time string

      The time spent in flush.

    • get.current string

      The number of current get operations.

    • get.time string

      The time spent in get operations.

    • get.total string

      The number of get operations.

    • get.exists_time string

      The time spent in successful get operations.

    • get.exists_total string

      The number of successful get operations.

    • get.missing_time string

      The time spent in failed get operations.

    • get.missing_total string

      The number of failed get operations.

    • indexing.delete_current string

      The number of current deletion operations.

    • indexing.delete_time string

      The time spent in deletion operations.

    • indexing.delete_total string

      The number of delete operations.

    • indexing.index_current string

      The number of current indexing operations.

    • indexing.index_time string

      The time spent in indexing operations.

    • indexing.index_total string

      The number of indexing operations.

    • indexing.index_failed string

      The number of failed indexing operations.

    • merges.current string

      The number of current merge operations.

    • merges.current_docs string

      The number of current merging documents.

    • merges.current_size string

      The size of current merge operations.

    • merges.total string

      The number of completed merge operations.

    • merges.total_docs string

      The number of merged documents.

    • merges.total_size string

      The total size of merges.

    • merges.total_time string

      The time spent merging documents.

    • refresh.total string

      The total number of refreshes.

    • refresh.time string

      The time spent in refreshes.

    • refresh.external_total string

      The total number of external refreshes.

    • refresh.external_time string

      The time spent in external refreshes.

    • refresh.listeners string

      The number of pending refresh listeners.

    • search.fetch_current string

      The current fetch phase operations.

    • search.fetch_time string

      The time spent in fetch phase.

    • search.fetch_total string

      The total number of fetch operations.

    • search.open_contexts string

      The number of open search contexts.

    • search.query_current string

      The current query phase operations.

    • search.query_time string

      The time spent in query phase.

    • search.query_total string

      The total number of query phase operations.

    • search.scroll_current string

      The open scroll contexts.

    • search.scroll_time string

      The time scroll contexts were held open.

    • search.scroll_total string

      The number of completed scroll contexts.

    • segments.count string

      The number of segments.

    • segments.memory string

      The memory used by segments.

    • segments.index_writer_memory string

      The memory used by the index writer.

    • segments.version_map_memory string

      The memory used by the version map.

    • segments.fixed_bitset_memory string

      The memory used by fixed bit sets for nested object field types and type filters for types referred in join fields.

    • seq_no.max string

      The maximum sequence number.

    • seq_no.local_checkpoint string

      The local checkpoint.

    • seq_no.global_checkpoint string

      The global checkpoint.

    • warmer.current string

      The number of current warmer operations.

    • warmer.total string

      The total number of warmer operations.

    • warmer.total_time string

      The time spent in warmer operations.

    • path.data string

      The shard data path.

    • path.state string

      The shard state path.

    • bulk.total_operations string

      The number of bulk shard operations.

    • bulk.total_time string

      The time spent in shard bulk operations.

    • bulk.total_size_in_bytes string

      The total size in bytes of shard bulk operations.

    • bulk.avg_time string

      The average time spent in shard bulk operations.

    • bulk.avg_size_in_bytes string

      The average size in bytes of shard bulk operations.

GET _cat/shards?format=json
resp = client.cat.shards(
    format="json",
)
const response = await client.cat.shards({
  format: "json",
});
response = client.cat.shards(
  format: "json"
)
$resp = $client->cat()->shards([
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/shards?format=json"
client.cat().shards();
Response examples (200)
A successful response from `GET _cat/shards?format=json`.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA"
  }
]
A successful response from `GET _cat/shards/my-index-*?format=json`. It returns information for any data streams or indices beginning with `my-index-`.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA"
  }
]
A successful response from `GET _cat/shards?format=json`. The `RELOCATING` value in the `state` column indicates the index shard is relocating.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "RELOCATING",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA -> -> 192.168.56.30 bGG90GE"
  }
]
A successful response from `GET _cat/shards?format=json`. Before a shard is available for use, it goes through an `INITIALIZING` state. You can use the cat shards API to see which shards are initializing.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "INITIALIZING",
    "docs": "0",
    "store": "14.3mb",
    "dataset": "249b",
    "ip": "192.168.56.30",
    "node": "bGG90GE"
  }
]
A successful response from `GET _cat/shards?h=index,shard,prirep,state,unassigned.reason&format=json`. It includes the `unassigned.reason` column, which indicates why a shard is unassigned.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "unassigned.reason": "3014 31.1mb 192.168.56.10 H5dfFeA"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "STARTED",
    "unassigned.reason": "3014 31.1mb 192.168.56.30 bGG90GE"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "STARTED",
    "unassigned.reason": "3014 31.1mb 192.168.56.20 I8hydUG"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "UNASSIGNED",
    "unassigned.reason": "ALLOCATION_FAILED"
  }
]
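The examples above return the default columns. The `h`, `s`, and `bytes` query parameters described earlier can narrow, sort, and convert the output. The following is a minimal sketch with the Python client, assuming `index`, `h`, `s`, and `bytes` are passed through as keyword arguments in the same way as `format` in the snippets above; the column names come from the supported values list for `h`.
resp = client.cat.shards(
    index="my-index-*",  # limit the request to matching data streams and indices
    h="index,shard,prirep,state,docs,store",  # columns from the supported values list above
    s="store:desc",  # sort by disk usage, descending
    bytes="mb",  # display byte values in megabytes
    format="json",
)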

Get snapshot information Generally available; Added in 2.1.0

GET /_cat/snapshots/{repository}

All methods and paths for this operation:

GET /_cat/snapshots

GET /_cat/snapshots/{repository}

Get information about the snapshots stored in one or more repositories. A snapshot is a backup of an index or running Elasticsearch cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot API.

Required authorization

  • Cluster privileges: monitor_snapshot

Path parameters

  • repository string | array[string] Required

    A comma-separated list of snapshot repositories used to limit the request. Accepts wildcard expressions. _all returns all repositories. If any repository fails during the request, Elasticsearch returns an error.

Query parameters

  • ignore_unavailable boolean

    If true, the response does not include information from unavailable snapshots.

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • id (or snapshot): The ID of the snapshot, such as 'snap1'.
    • repository (or re, repo): The name of the repository, such as 'repo1'.
    • status (or s): State of the snapshot process. Returned values are: 'FAILED': The snapshot process failed. 'INCOMPATIBLE': The snapshot process is incompatible with the current cluster version. 'IN_PROGRESS': The snapshot process started but has not completed. 'PARTIAL': The snapshot process completed with a partial success. 'SUCCESS': The snapshot process completed with a full success.
    • start_epoch (or ste, startEpoch): The unix epoch time at which the snapshot process started.
    • start_time (or sti, startTime): 'HH:MM:SS' time at which the snapshot process started.
    • end_epoch (or ete, endEpoch): The unix epoch time at which the snapshot process ended.
    • end_time (or eti, endTime): 'HH:MM:SS' time at which the snapshot process ended.
    • duration (or dur): The time it took the snapshot process to complete in time units.
    • indices (or i): The number of indices in the snapshot.
    • successful_shards (or ss): The number of successful shards in the snapshot.
    • failed_shards (or fs): The number of failed shards in the snapshot.
    • total_shards (or ts): The total number of shards in the snapshot.
    • reason (or r): The reason for any snapshot failures.

    Values are id, snapshot, repository, re, repo, status, s, start_epoch, ste, startEpoch, start_time, sti, startTime, end_epoch, ete, endEpoch, end_time, eti, endTime, duration, dur, indices, i, successful_shards, ss, failed_shards, fs, total_shards, ts, reason, or r.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string

      The unique identifier for the snapshot.

    • repository string

      The repository name.

    • status string

      The state of the snapshot process. Returned values include: FAILED: The snapshot process failed. INCOMPATIBLE: The snapshot process is incompatible with the current cluster version. IN_PROGRESS: The snapshot process started but has not completed. PARTIAL: The snapshot process completed with a partial success. SUCCESS: The snapshot process completed with a full success.

    • start_epoch number | string

      Some APIs return numeric values, notably epoch timestamps, as strings. This union type captures that behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      One of:

      Time unit for seconds

    • start_time string | object

      A time of day, expressed either as hh:mm, noon, midnight, or an hour/minutes structure.

    • end_epoch number | string

      Some APIs return numeric values, notably epoch timestamps, as strings. This union type captures that behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      One of:

      Time unit for seconds

    • end_time string

      Time of day, expressed as HH:MM:SS

    • duration string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • indices string

      The number of indices in the snapshot.

    • successful_shards string

      The number of successful shards in the snapshot.

    • failed_shards string

      The number of failed shards in the snapshot.

    • total_shards string

      The total number of shards in the snapshot.

    • reason string

      The reason for any snapshot failures.

GET /_cat/snapshots/repo1?v=true&s=id&format=json
resp = client.cat.snapshots(
    repository="repo1",
    v=True,
    s="id",
    format="json",
)
const response = await client.cat.snapshots({
  repository: "repo1",
  v: "true",
  s: "id",
  format: "json",
});
response = client.cat.snapshots(
  repository: "repo1",
  v: "true",
  s: "id",
  format: "json"
)
$resp = $client->cat()->snapshots([
    "repository" => "repo1",
    "v" => "true",
    "s" => "id",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/snapshots/repo1?v=true&s=id&format=json"
client.cat().snapshots();
Response examples (200)
A successful response from `GET /_cat/snapshots/repo1?v=true&s=id&format=json`.
[
  {
    "id": "snap1",
    "repository": "repo1",
    "status": "FAILED",
    "start_epoch": "1445616705",
    "start_time": "18:11:45",
    "end_epoch": "1445616978",
    "end_time": "18:16:18",
    "duration": "4.6m",
    "indices": "1",
    "successful_shards": "4",
    "failed_shards": "1",
    "total_shards": "5"
  },
  {
    "id": "snap2",
    "repository": "repo1",
    "status": "SUCCESS",
    "start_epoch": "1445634298",
    "start_time": "23:04:58",
    "end_epoch": "1445634672",
    "end_time": "23:11:12",
    "duration": "6.2m",
    "indices": "2",
    "successful_shards": "10",
    "failed_shards": "0",
    "total_shards": "10"
  }
]
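If a repository contains snapshots that cannot be read, the `ignore_unavailable` parameter keeps the request from failing, and `h` selects a subset of the documented columns. The following is a minimal sketch with the Python client, assuming `ignore_unavailable`, `h`, and the `:desc` sort suffix are accepted in the same way as the `s` and `format` arguments shown above.
resp = client.cat.snapshots(
    repository="repo1",  # repository name from the path parameters
    ignore_unavailable=True,  # skip snapshots that cannot be read
    h="id,status,start_time,duration,successful_shards,failed_shards",  # columns from the supported values list above
    s="start_epoch:desc",  # newest snapshots first
    format="json",
)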

Get task information Technical preview; Added in 5.0.0

GET /_cat/tasks

Get information about tasks currently running in the cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the task management API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • actions array[string]

    The task action names, which are used to limit the response.

  • detailed boolean

    If true, the response includes detailed information about the running tasks.

  • nodes array[string]

    Unique node identifiers, which are used to limit the response.

  • parent_task_id string

    The parent task identifier, which is used to limit the response.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • wait_for_completion boolean

    If true, the request blocks until the task has completed.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • action string

      The task action.

    • task_id string
    • parent_task_id string

      The parent task identifier.

    • type string

      The task type.

    • start_time string

      The start time in milliseconds.

    • timestamp string

      The start time in HH:MM:SS format.

    • running_time_ns string

      The running time in nanoseconds.

    • running_time string

      The running time.

    • node_id string
    • ip string

      The IP address for the node.

    • port string

      The bound transport port for the node.

    • node string

      The node name.

    • version string
    • x_opaque_id string

      The X-Opaque-ID header.

    • description string

      The task action description.

GET _cat/tasks?v=true&format=json
resp = client.cat.tasks(
    v=True,
    format="json",
)
const response = await client.cat.tasks({
  v: "true",
  format: "json",
});
response = client.cat.tasks(
  v: "true",
  format: "json"
)
$resp = $client->cat()->tasks([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/tasks?v=true&format=json"
client.cat().tasks();
Response examples (200)
A successful response from `GET _cat/tasks?v=true&format=json`.
[
  {
    "action": "cluster:monitor/tasks/lists[n]",
    "task_id": "oTUltX4IQMOUUVeiohTt8A:124",
    "parent_task_id": "oTUltX4IQMOUUVeiohTt8A:123",
    "type": "direct",
    "start_time": "1458585884904",
    "timestamp": "01:48:24",
    "running_time": "44.1micros",
    "ip": "127.0.0.1:9300",
    "node": "oTUltX4IQMOUUVeiohTt8A"
  },
  {
    "action": "cluster:monitor/tasks/lists",
    "task_id": "oTUltX4IQMOUUVeiohTt8A:123",
    "parent_task_id": "-",
    "type": "transport",
    "start_time": "1458585884904",
    "timestamp": "01:48:24",
    "running_time": "186.2micros",
    "ip": "127.0.0.1:9300",
    "node": "oTUltX4IQMOUUVeiohTt8A"
  }
]
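The `actions`, `nodes`, and `parent_task_id` parameters narrow the listing, and `detailed` requests additional per-task information. The following is a minimal sketch with the Python client, assuming these query parameters are accepted as keyword arguments in the same way as `v` and `format` above; the action pattern is a hypothetical filter, not output from a real cluster.
resp = client.cat.tasks(
    actions="cluster:monitor/*",  # hypothetical action filter; wildcards limit the response
    detailed=True,  # request detailed information about the running tasks
    h="action,task_id,parent_task_id,type,running_time,node",  # columns from the response attributes above
    s="running_time:desc",  # longest-running tasks first
    format="json",
)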

Get index template information Generally available; Added in 5.2.0

GET /_cat/templates/{name}

All methods and paths for this operation:

GET /_cat/templates

GET /_cat/templates/{name}

Get information about the index templates in a cluster. You can use index templates to apply index settings and field mappings to new indices at creation. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get index template API.

Required authorization

  • Cluster privileges: monitor

Path parameters

  • name string Required

    The name of the template to return. Accepts wildcard expressions. If omitted, all templates are returned.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases, the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • name string
    • index_patterns string

      The template index patterns.

    • order string

      The template application order or priority number.

    • version string | null

      The template version.

    • composed_of string

      The component templates that comprise the index template.

GET _cat/templates/my-template-*?v=true&s=name&format=json
resp = client.cat.templates(
    name="my-template-*",
    v=True,
    s="name",
    format="json",
)
const response = await client.cat.templates({
  name: "my-template-*",
  v: "true",
  s: "name",
  format: "json",
});
response = client.cat.templates(
  name: "my-template-*",
  v: "true",
  s: "name",
  format: "json"
)
$resp = $client->cat()->templates([
    "name" => "my-template-*",
    "v" => "true",
    "s" => "name",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/templates/my-template-*?v=true&s=name&format=json"
client.cat().templates();
Response examples (200)
A successful response from `GET _cat/templates/my-template-*?v=true&s=name&format=json`.
[
  {
    "name": "my-template-0",
    "index_patterns": "[te*]",
    "order": "500",
    "version": null,
    "composed_of": "[]"
  },
  {
    "name": "my-template-1",
    "index_patterns": "[tea*]",
    "order": "501",
    "version": null,
    "composed_of": "[]"
  },
  {
    "name": "my-template-2",
    "index_patterns": "[teak*]",
    "order": "502",
    "version": "7",
    "composed_of": "[]"
  }
]
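The same request can also surface which component templates an index template is composed of by selecting the relevant columns. The following is a minimal sketch with the Python client, assuming `h` and `local` are accepted as keyword arguments in the same way as `s` and `format` above.
resp = client.cat.templates(
    name="my-template-*",  # template name pattern from the path parameters
    h="name,index_patterns,composed_of,version",  # columns from the response attributes above
    local=True,  # compute the node list from the local cluster state
    s="name",  # sort by template name, ascending
    format="json",
)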