Request to generate a message response from the model.

| Field | Description |
|---|---|
| `model` | Required. The name of the model to use. Format: `models/{model}`. |
| `prompt` | Required. The structured textual input given to the model as a prompt. Given a prompt, the model will return what it predicts is the next message in the discussion. |
| `temperature` | Optional. Controls the randomness of the output. Values can range over `[0.0, 1.0]`, inclusive. A value closer to `1.0` produces more varied responses, while a value closer to `0.0` typically yields less surprising output from the model. |
| `candidateCount` | Optional. The number of generated response messages to return. This value must be between `[1, 8]`, inclusive. |
| `topP` | Optional. The maximum cumulative probability of tokens to consider when sampling. The model uses combined Top-k and nucleus sampling. Nucleus sampling considers the smallest set of tokens whose probability sum is at least `topP`. |
| `topK` | Optional. The maximum number of tokens to consider when sampling. The model uses combined Top-k and nucleus sampling. Top-k sampling considers the set of `topK` most probable tokens. |
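The fields above can be sketched as a request body in Python. This is an illustrative helper, not part of any SDK: the function name `build_request` is hypothetical, and the model name shown is only an example. It assembles the JSON-style dict and enforces the documented ranges for the optional sampling parameters.

```python
# Sketch: build a generateMessage-style request body and validate the
# optional parameters against the documented ranges. The helper name
# and the example model name are illustrative assumptions.

def build_request(model, messages, temperature=None, candidate_count=None,
                  top_p=None, top_k=None):
    """Assemble a request dict, checking the documented constraints."""
    if temperature is not None and not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be in [0.0, 1.0]")
    if candidate_count is not None and not 1 <= candidate_count <= 8:
        raise ValueError("candidateCount must be in [1, 8]")
    body = {
        "model": model,  # e.g. "models/chat-bison-001" (example name)
        "prompt": {"messages": [{"content": m} for m in messages]},
    }
    # Only include the optional fields the caller actually set.
    if temperature is not None:
        body["temperature"] = temperature
    if candidate_count is not None:
        body["candidateCount"] = candidate_count
    if top_p is not None:
        body["topP"] = top_p
    if top_k is not None:
        body["topK"] = top_k
    return body

req = build_request("models/chat-bison-001", ["Hello!"],
                    temperature=0.5, candidate_count=2,
                    top_p=0.95, top_k=40)
```

Omitting an optional field leaves it out of the body entirely, so the service falls back to its own defaults rather than receiving an explicit null.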