
google.generativeai.protos.CountTokensResponse


A response from the CountTokens method.

It returns the model's token count for the given prompt.

Attributes

total_tokens (int)
The number of tokens that the Model tokenizes the prompt into. Always non-negative.

cached_content_token_count (int)
The number of tokens in the cached part of the prompt (the cached content).