google.generativeai.protos.GenerateAnswerRequest


Request to generate a grounded answer from the Model.

This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at a time; setting any member of the oneof automatically clears all other members.
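A minimal sketch of the oneof behavior, assuming the google-generativeai SDK is installed (the corpus name is a placeholder):

```python
from google.generativeai import protos

request = protos.GenerateAnswerRequest()

# Set one member of the grounding_source oneof.
request.inline_passages = protos.GroundingPassages()

# Setting the other member automatically clears inline_passages.
request.semantic_retriever = protos.SemanticRetrieverConfig(
    source="corpora/example-corpus",  # hypothetical corpus resource name
)

# Inspect the underlying protobuf to see which member is currently set.
pb = protos.GenerateAnswerRequest.pb(request)
print(pb.WhichOneof("grounding_source"))  # semantic_retriever
```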

Attributes

inline_passages

google.ai.generativelanguage.GroundingPassages

Passages provided inline with the request.

This field is a member of the oneof grounding_source.
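A sketch of constructing inline passages; the passage id, text, and model name are illustrative, and "models/aqa" is assumed here to be a model that supports GenerateAnswer:

```python
from google.generativeai import protos

passages = protos.GroundingPassages(
    passages=[
        protos.GroundingPassage(
            id="passage-1",  # caller-chosen id, echoed back in attributions
            content=protos.Content(
                parts=[protos.Part(text="The Eiffel Tower is 330 m tall.")]
            ),
        ),
    ]
)

request = protos.GenerateAnswerRequest(
    model="models/aqa",  # assumption: AQA model name
    inline_passages=passages,
)
```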

semantic_retriever

google.ai.generativelanguage.SemanticRetrieverConfig

Content retrieved from resources created via the Semantic Retriever API.

This field is a member of the oneof grounding_source.
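A sketch of the retriever-backed alternative, assuming a corpus has already been created via the Semantic Retriever API (the corpus name is a placeholder):

```python
from google.generativeai import protos

retriever = protos.SemanticRetrieverConfig(
    source="corpora/my-corpus",  # hypothetical Semantic Retriever corpus
    query=protos.Content(
        parts=[protos.Part(text="How tall is the Eiffel Tower?")]
    ),
)

request = protos.GenerateAnswerRequest(semantic_retriever=retriever)
```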

model

str

Required. The name of the Model to use for generating the grounded response.

Format: model=models/{model}.

contents

MutableSequence[google.ai.generativelanguage.Content]

Required. The content of the current conversation with the Model. For single-turn queries, this is a single question to answer. For multi-turn queries, this is a repeated field containing the conversation history, with the last Content in the list holding the question.

Note: GenerateAnswer only supports queries in English.
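A sketch of a multi-turn contents list; the roles alternate between "user" and "model", and the text is illustrative:

```python
from google.generativeai import protos

contents = [
    protos.Content(role="user", parts=[protos.Part(text="Tell me about the Eiffel Tower.")]),
    protos.Content(role="model", parts=[protos.Part(text="It is a wrought-iron tower in Paris.")]),
    # The last Content in the list holds the question to answer.
    protos.Content(role="user", parts=[protos.Part(text="How tall is it?")]),
]
```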

answer_style

google.ai.generativelanguage.GenerateAnswerRequest.AnswerStyle

Required. Style in which answers should be returned.
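A minimal sketch of setting the style via the nested AnswerStyle enum:

```python
from google.generativeai import protos

AnswerStyle = protos.GenerateAnswerRequest.AnswerStyle

request = protos.GenerateAnswerRequest(
    answer_style=AnswerStyle.ABSTRACTIVE,  # or EXTRACTIVE / VERBOSE
)
```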

safety_settings

MutableSequence[google.ai.generativelanguage.SafetySetting]

Optional. A list of unique SafetySetting instances for blocking unsafe content.

This will be enforced on the GenerateAnswerRequest.contents and GenerateAnswerResponse.candidate. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings.

This list overrides the default settings for each SafetyCategory specified in safety_settings. If the list contains no SafetySetting for a given SafetyCategory, the API will use the default safety setting for that category. The harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, and HARM_CATEGORY_HARASSMENT are supported.

Refer to the safety settings guide (https://ai.google.dev/gemini-api/docs/safety-settings) for detailed information on the available settings, and to the safety guidance (https://ai.google.dev/gemini-api/docs/safety-guidance) to learn how to incorporate safety considerations into your AI applications.
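A sketch of a safety settings list with one setting per category; the chosen categories and thresholds are illustrative:

```python
from google.generativeai import protos

# Categories not listed here fall back to their default settings.
safety_settings = [
    protos.SafetySetting(
        category=protos.HarmCategory.HARM_CATEGORY_HARASSMENT,
        threshold=protos.SafetySetting.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    ),
    protos.SafetySetting(
        category=protos.HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        threshold=protos.SafetySetting.HarmBlockThreshold.BLOCK_ONLY_HIGH,
    ),
]
```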

temperature

float

Optional. Controls the randomness of the output.

Values range from 0.0 to 1.0, inclusive. A value closer to 1.0 produces responses that are more varied and creative, while a value closer to 0.0 typically results in more straightforward responses from the model. A low temperature (around 0.2) is usually recommended for attributed question answering use cases.
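A sketch of a complete request tying the fields together; the model name and passage text are assumptions for illustration:

```python
from google.generativeai import protos

request = protos.GenerateAnswerRequest(
    model="models/aqa",  # assumption: AQA model name
    contents=[
        protos.Content(parts=[protos.Part(text="How tall is the Eiffel Tower?")]),
    ],
    answer_style=protos.GenerateAnswerRequest.AnswerStyle.ABSTRACTIVE,
    temperature=0.2,  # low temperature, as recommended for attributed QA
    inline_passages=protos.GroundingPassages(
        passages=[
            protos.GroundingPassage(
                id="p1",
                content=protos.Content(
                    parts=[protos.Part(text="The Eiffel Tower is 330 m tall.")]
                ),
            ),
        ]
    ),
)
```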

Child Classes

class AnswerStyle
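Since AnswerStyle is a proto-plus enum, its members can be enumerated directly; the printed values below reflect my reading of the proto definition and should be verified against the installed package:

```python
from google.generativeai import protos

for style in protos.GenerateAnswerRequest.AnswerStyle:
    print(style.name, style.value)
# Expected (per the proto definition):
# ANSWER_STYLE_UNSPECIFIED 0
# ABSTRACTIVE 1
# EXTRACTIVE 2
# VERBOSE 3
```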