Gemini AI - Prompting Guide
An effective and efficient way to customize model behavior is to provide it with clear and specific instructions. Instructions can
be as simple as a question or a list of step-by-step tasks, or as complex as mapping out a user's experience and mindset.
Input
Input is the required text in the prompt that you want the model to provide a response to. Inputs can be a question that the model
answers (question input), a task the model performs (task input), an entity the model operates on (entity input), or partial input
that the model completes or continues (completion input).
| Input type | Prompt | Generated output |
| --- | --- | --- |
| Question | What's a good name for a flower shop that specializes in selling bouquets of dried flowers? Create a list of 5 options with just the names. | Here are 10 names for a flower shop specializing in dried flowers:<br>1. Everlasting Blooms<br>2. Dried & Delightful<br>3. The Petal Preserve<br>4. Whispers of Wildflowers<br>5. Forever Blooms |
| Task | Give me a simple list of just the things that I must bring on a camping trip. The list should have 5 items. | 1. **Tent:** Your shelter from the elements.<br>2. **Sleeping Bag:** Stay warm and comfortable.<br>3. **Sleeping Pad:** Insulation and cushioning for a good night's sleep.<br>4. **Headlamp or Flashlight:** Light for nighttime activities.<br>5. **First Aid Kit:** Be prepared for minor injuries. |
| Entity | Classify the following items as [large, small]:<br>Elephant<br>Mouse<br>Snail | Here's the classification:<br>* **Elephant:** Large<br>* **Mouse:** Small<br>* **Snail:** Small |
Generative language models work like an advanced autocompletion tool. When you provide partial content, the model can
provide the rest of that content, or what it considers a continuation of it, as a response. If you include examples or context in
the prompt, the model takes them into account when completing the content.
The following example provides a prompt with an instruction and an entity input:
Prompt:
For the given order, return a JSON object that has the fields cheeseburger, hamburger, fries, or
drink, with the value being the quantity.
Response:
```
{
  "cheeseburger": 0,
  "hamburger": 1,
  "fries": 0,
  "drink": 1
}
```
(gemini-2.5-flash)
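For reference, this kind of prompt can be sent to the model programmatically. The following is a minimal sketch that assumes the google-genai Python SDK; the API key placeholder and the order text appended to the instruction are illustrative, not the exact order used in the example above.

```python
# Minimal sketch, assuming the google-genai Python SDK (pip install google-genai).
# The API key and the order text are illustrative placeholders.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

prompt = (
    "For the given order, return a JSON object that has the fields "
    "cheeseburger, hamburger, fries, or drink, with the value being the quantity.\n\n"
    "Order: A burger and a drink."  # hypothetical order text
)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=prompt,
)
print(response.text)
```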
While the model did as prompted, writing out the instructions in natural language can sometimes be challenging, and it leaves a
lot to the model's interpretation. For example, a restaurant's menu might contain many items. To reduce the size of the JSON
response, you probably want to omit the items that weren't ordered. In this case, you can give an example and a response prefix
and let the model complete it:
Prompt:
Response:
```
{
"hamburger": 2,
"drink": 1,
"fries": 1
}
```
(gemini-2.5-flash)
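One way to apply this pattern in code is to build a prompt string that ends with a worked example and a trailing response prefix for the model to complete. The sketch below is illustrative: the field list, the example order, and the new order are hypothetical stand-ins rather than the exact prompt used above.

```python
# Illustrative sketch of an example-plus-response-prefix prompt.
# The field list and both orders are hypothetical placeholders.
example_order = "Give me a cheeseburger and fries."
example_json = '{\n  "cheeseburger": 1,\n  "fries": 1\n}'
new_order = "I would like two hamburgers, fries, and a drink."

prompt = (
    "Valid fields are cheeseburger, hamburger, fries, and drink.\n"
    f"Order: {example_order}\n"
    f"Output JSON:\n{example_json}\n\n"
    f"Order: {new_order}\n"
    "Output JSON:\n"  # response prefix left open for the model to complete
)
print(prompt)
```

Because the single example omits items that weren't ordered, the completion tends to follow the same pattern.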
Notice how "cheeseburger" was excluded from the output because it wasn't a part of the order.
Constraints
Specify any constraints on reading the prompt or generating a response. You can tell the model what to do and not to do. For
example, you can specify a constraint in the prompt on how long you want a summary to be:
Prompt:
Response:
Exploiting quantum mechanical phenomena, quantum computers can perform calculations exponentially
faster than traditional computers for potential applications like cybersecurity and meteorology,
although they are currently largely experimental and impractical.
(gemini-2.5-flash)
Response format
You can give instructions that specify the format of the response. For example, you can ask for the response to be formatted as a
table, bulleted list, elevator pitch, keywords, sentence, or paragraph. The following system instruction tells the model to be more
conversational in its response:
System instruction:
All questions should be answered comprehensively with details, unless the user requests a concise
response specifically.
Prompt:
Response:
Starting a business selling DVDs in 2025 presents significant challenges due to the overwhelming
dominance of digital streaming and on-demand services; a "smart" approach would therefore require a
highly specialized niche, a strong value proposition beyond just the disc, and likely a focus on
community or collector markets rather than mainstream consumption.
Here's a breakdown of what a smart DVD business in 2025 would likely entail:
1. Extreme Niche Specialization: You cannot compete with streaming libraries. Your business must focus
on what digital doesn't offer easily or at all. This could include:
* Out-of-Print & Rare Titles: Sourcing films or TV shows that have never been released digitally or are
currently unavailable on major streaming platforms.
....
(gemini-2.5-flash)
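System instructions like the one above are typically passed separately from the user prompt. The following minimal sketch assumes the google-genai Python SDK; the user prompt shown is a hypothetical stand-in, since the exact prompt isn't reproduced here.

```python
# Minimal sketch, assuming the google-genai Python SDK.
# The user prompt is a hypothetical stand-in for the example above.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Is selling DVDs a smart business to start in 2025?",
    config=types.GenerateContentConfig(
        system_instruction=(
            "All questions should be answered comprehensively with details, "
            "unless the user requests a concise response specifically."
        ),
    ),
)
print(response.text)
```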
The completion strategy can also help format the response. The following example prompts the model to create an essay outline:
Prompt:
Response:
I. Introduction
A. Hook: Start with a captivating fact or description about hummingbirds (e.g., their speed, iridescent
colors, or tiny size).
B. Background: Briefly introduce hummingbirds – where they are found (Americas), their reputation as
unique birds.
C. Thesis Statement: State the main point of the essay, which could be about the remarkable
adaptations, behaviors, and ecological importance of hummingbirds.
(gemini-2.5-flash)
The prompt didn't specify the format for the outline and the model chose a format for you. To get the model to return an outline in
a specific format, you can add text that represents the start of the outline and let the model complete it based on the pattern that
you initiated.
Prompt:
Response:
Okay, here is an outline for an essay about hummingbirds, starting from the introduction you provided:
I. Introduction
* Hook: Start with a captivating fact or description about hummingbirds (e.g., their speed, iridescent
colors, or tiny size).
* Background: Briefly introduce hummingbirds – where they are found (Americas), their reputation as
unique birds, and their general characteristics.
* Thesis Statement: State the main point of the essay, focusing on the remarkable adaptations,
behaviors, and ecological significance that make hummingbirds extraordinary.
.....
(gemini-2.5-flash)
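The same technique can be applied when building the prompt string: end the prompt with the partial outline you want the model to continue. The sketch below is illustrative; the seed lines are placeholders that show the pattern rather than a prescribed format.

```python
# Sketch: seed the response format by ending the prompt with a partial outline
# for the model to continue. The seed lines are illustrative placeholders.
prompt = (
    "Create an outline for an essay about hummingbirds.\n"
    "I. Introduction\n"
    "   *"  # the model continues from this pattern
)
print(prompt)
```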
Few-shot examples
We recommend always including few-shot examples in your prompts. Prompts without few-shot examples are likely to be less
effective. In fact, you can remove instructions from your prompt if your examples are clear enough in showing the task at hand.
The following zero-shot prompt asks the model to choose the best explanation.
Prompt:
Response:
(gemini-2.5-flash)
If your use case requires the model to produce concise responses, you can include examples in the prompt that give preference
to concise responses.
The following prompt provides two examples that show a preference for shorter explanations. In the response, you can see that
the examples guided the model to choose the shorter explanation (Explanation2), as opposed to the longer explanation
(Explanation1) as it did previously.
Prompt:
Below are some examples showing a question, explanation, and answer format:
Now, Answer the following question given the example formats above:
Response:
Answer: Explanation2
(gemini-2.5-flash)
Consistent formatting
Make sure that the structure and formatting of few-shot examples are the same to avoid responses with undesired formats. One
of the primary objectives of adding few-shot examples in prompts is to show the model the response format. Therefore, it is
essential to ensure a consistent format across all examples, especially paying attention to XML tags, white spaces, newlines, and
example splitters.
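One way to keep formatting consistent is to render every few-shot example from the same template instead of writing each one by hand. The snippet below is a small sketch with made-up sentiment-classification examples; the prefixes, labels, and splitter are illustrative choices, not a required format.

```python
# Sketch: build few-shot examples from one template so that prefixes,
# newlines, and example splitters stay identical across every example.
# The example texts and labels are hypothetical.
examples = [
    ("The battery died after an hour.", "negative"),
    ("Setup took two minutes and it just works.", "positive"),
]

TEMPLATE = "Text: {text}\nSentiment: {label}\n---\n"

prompt = "".join(TEMPLATE.format(text=text, label=label) for text, label in examples)
prompt += "Text: The screen is bright but the speakers are tinny.\nSentiment:"
print(prompt)
```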
Add context
You can include instructions and information in a prompt that the model needs to solve a problem, instead of assuming that the
model has all of the required information. This contextual information helps the model understand the constraints and details of
what you're asking it to do.
The following example asks the model to give troubleshooting guidance for a router:
Prompt:
Response:
A yellow blinking light on your Google Wifi router usually indicates a problem
with the internet connection. Here are some steps to troubleshoot and fix the
issue:
1. Restart your router and modem:
- Unplug your modem from the power outlet and wait for 30 seconds.
- Plug your modem back in and wait for it to fully boot up.
- Unplug your Google Wifi router and wait for 30 seconds.
- Plug your Google Wifi router back in and wait for it to fully boot up.
.....
(gemini-2.5-flash)
The response looks like generic troubleshooting information that isn't specific to the router or the status of the LED indicator
lights.
To customize the response for the specific router, you can add the router's troubleshooting guide to the prompt as context:
Prompt:
Answer the question using the text below. Respond with only the text provided.
Question: What should I do to fix my disconnected wifi? The light on my Google Wifi router is yellow
and blinking slowly.
Text:
Color: Slowly pulsing yellow
What it means: There is a network error.
What to do:
Check that the Ethernet cable is connected to both your router and your modem and both devices are
turned on. You might need to unplug and plug in each device again.
Response:
Check that the Ethernet cable is connected to both your router and your modem and both devices are
turned on. You might need to unplug and plug in each device again.
(gemini-2.5-flash)
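When the context comes from your own documentation, it is usually assembled into the prompt at request time. The sketch below shows one way to do that with plain string formatting; the troubleshooting text is abbreviated and the overall structure is an illustrative choice.

```python
# Sketch: inject reference text into the prompt as context.
# The troubleshooting text is abbreviated for illustration.
troubleshooting_guide = (
    "Color: Slowly pulsing yellow\n"
    "What it means: There is a network error.\n"
    "What to do: Check that the Ethernet cable is connected to both your router "
    "and your modem and both devices are turned on."
)

question = (
    "What should I do to fix my disconnected wifi? "
    "The light on my Google Wifi router is yellow and blinking slowly."
)

prompt = (
    "Answer the question using the text below. Respond with only the text provided.\n"
    f"Question: {question}\n"
    f"Text:\n{troubleshooting_guide}"
)
print(prompt)
```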
Add prefixes
A prefix is a word or phrase that you add to the prompt content that can serve several purposes, depending on where you put the
prefix:
Input prefix: Adding a prefix to the input signals semantically meaningful parts of the input to the model. For example, the
prefixes "English:" and "French:" demarcate two different languages.
Output prefix: Even though the output is generated by the model, you can add a prefix for the output in the prompt. The
output prefix gives the model information about what's expected as a response. For example, the output prefix "JSON:"
signals to the model that the output should be in JSON format.
Example prefix: In few-shot prompts, adding prefixes to the examples provides labels that the model can use when
generating the output, which makes it easier to parse output content.
In the following example, "Text:" is the input prefix and "The answer is:" is the output prefix.
Prompt:
Response:
(gemini-2.5-flash)
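Prefixes are just literal labels inside the prompt text. The sketch below shows the "English:" and "French:" input prefixes mentioned above, with a trailing output prefix left open for the model to complete; the sentences are made up for illustration.

```python
# Sketch: input prefixes ("English:", "French:") label the parts of each
# example, and the trailing output prefix tells the model what to produce next.
# The sentences are hypothetical.
prompt = (
    "English: Where is the train station?\n"
    "French: Où est la gare ?\n\n"
    "English: I would like a cup of coffee, please.\n"
    "French:"  # output prefix for the model to complete
)
print(prompt)
```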
Break down prompts into simple components
Break down instructions: Instead of having many instructions in one prompt, create one prompt per instruction. You can
choose which prompt to process based on the user's input.
Chain prompts: For complex tasks that involve multiple sequential steps, make each step a prompt and chain the prompts
together in a sequence. In this sequential chain of prompts, the output of one prompt in the sequence becomes the input of
the next prompt, and the output of the last prompt in the sequence is the final output (see the sketch after this list).
Aggregate responses: Aggregation is when you want to perform different parallel tasks on different portions of the data
and aggregate the results to produce the final output. For example, you can tell the model to perform one operation on the
first part of the data, perform another operation on the rest of the data and aggregate the results.
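The following is a minimal prompt-chaining sketch, assuming the google-genai Python SDK; the two-step summarize-then-title task and the document placeholder are illustrative.

```python
# Minimal prompt-chaining sketch, assuming the google-genai Python SDK.
# The two steps (summarize, then write a title) are illustrative.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")
MODEL = "gemini-2.5-flash"

document = "..."  # source text to process

# Step 1: the first prompt produces a summary.
summary = client.models.generate_content(
    model=MODEL,
    contents=f"Summarize the following text in three sentences:\n{document}",
).text

# Step 2: the output of step 1 becomes the input of the next prompt.
title = client.models.generate_content(
    model=MODEL,
    contents=f"Write a short, descriptive title for this summary:\n{summary}",
).text

print(title)
```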
Experiment with model parameters
Max output tokens: Specifies the maximum number of tokens that can be generated in the response. A token is
approximately four characters, so 100 tokens correspond to roughly 60-80 words.
Temperature: The temperature controls the degree of randomness in token selection. The temperature is used for sampling
during response generation, which occurs when topP and topK are applied. Lower temperatures are good for prompts
that require a more deterministic or less open-ended response, while higher temperatures can lead to more diverse or
creative results. A temperature of 0 is deterministic, meaning that the highest probability response is always selected.
topK: The topK parameter changes how the model selects tokens for output. A topK of 1 means the selected token is
the most probable among all the tokens in the model's vocabulary (also called greedy decoding), while a topK of 3 means
that the next token is selected from among the 3 most probable using the temperature. For each token selection step, the
topK tokens with the highest probabilities are sampled. Tokens are then further filtered based on topP with the final token
selected using temperature sampling.
topP: The topP parameter changes how the model selects tokens for output. Tokens are selected from the most to least
probable until the sum of their probabilities equals the topP value. For example, if tokens A, B, and C have probabilities of
0.3, 0.2, and 0.1 and the topP value is 0.5, then the model will select either A or B as the next token by using the
temperature and exclude C as a candidate. The default topP value is 0.95.
stop_sequences: Set a stop sequence to tell the model to stop generating content. A stop sequence can be any sequence
of characters. Try to avoid using a sequence of characters that may appear in the generated content. The configuration
sketch after this list shows how these parameters can be set together.
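As a reference point, these parameters are typically passed alongside the prompt in a generation config. The sketch below assumes the google-genai Python SDK; the specific values are arbitrary examples, not recommendations.

```python
# Minimal sketch, assuming the google-genai Python SDK.
# The parameter values are arbitrary examples, not recommendations.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarize quantum computing in two sentences.",  # hypothetical prompt
    config=types.GenerateContentConfig(
        max_output_tokens=256,
        temperature=0.2,
        top_k=3,
        top_p=0.95,
        stop_sequences=["\n\n"],
    ),
)
print(response.text)
```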
Prompt iteration strategies
Use different phrasing: Using different words or phrasing in your prompts often yields different responses from the model
even though they all mean the same thing. If you're not getting the expected results from your prompt, try rephrasing it.
Version 1:
How do I bake a pie?
Version 2:
Suggest a recipe for a pie.
Version 3:
What's a good pie recipe?
Switch to an analogous task: If you can't get the model to follow your instructions for a task, try giving it instructions for an
analogous task that achieves the same result.
This prompt tells the model to categorize a book by using predefined categories:
Prompt:
Response:
Here's why:
* **Mythology:** The Odyssey tells the story of Odysseus, a hero from Greek mythology, and his
journey home after the Trojan War. It features gods, monsters, and supernatural events common
to
Greek mythology.
.....
(gemini-2.5-flash)
The response is correct, but the model didn't stay within the bounds of the options. You also want the model to respond with
just one of the options instead of a full sentence. In this case, you can rephrase the instructions as a multiple-choice
question and ask the model to choose an option.
Prompt:
Multiple choice problem: Which of the following options describes the book The Odyssey?
Options:
thriller
sci-fi
mythology
biography
Response:
(gemini-2.5-flash)
Change the order of prompt content: The order of the content in the prompt can sometimes affect the response. Try
changing the content order and see how that affects the response.
Version 1:
[examples]
[context]
[input]
Version 2:
[input]
[examples]
[context]
Version 3:
[examples]
[input]
[context]