A response from CountTokens. It returns the model's token_count for the prompt.

The number of tokens that the model tokenizes the prompt into.

Number of tokens in the cached part of the prompt (the cached content).
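A minimal sketch of what a CountTokens response might look like as a data structure, based on the two field descriptions above. The field names (`total_tokens`, `cached_content_token_count`) and the class name are assumptions for illustration, not the library's actual API.

```python
from dataclasses import dataclass

@dataclass
class CountTokensResponse:
    # Hypothetical field names mirroring the descriptions above.
    total_tokens: int                # tokens the model tokenizes the prompt into
    cached_content_token_count: int  # tokens in the cached part of the prompt

# Example: a 42-token prompt of which 10 tokens come from cached content.
resp = CountTokensResponse(total_tokens=42, cached_content_token_count=10)

# Tokens not covered by the cache must be processed fresh.
uncached = resp.total_tokens - resp.cached_content_token_count
```

Subtracting the cached count from the total gives the portion of the prompt that is billed or processed outside the cache, which is the usual reason both fields are reported.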