What is the role of ReasoningItem #480

Open
ShinnosukeUesaka opened this issue Apr 11, 2025 · 11 comments
Labels
question Question about using the SDK

Comments


ShinnosukeUesaka commented Apr 11, 2025

Question

I would like the model to output a reasoning trace before performing some action.
I do not understand how a ReasoningItem can be retrieved. Is ReasoningItem being used in the codebase?
Is there a way I can prompt the model to output a ReasoningItem?

@ShinnosukeUesaka ShinnosukeUesaka added the question Question about using the SDK label Apr 11, 2025
@1164974997

Same question here: the OpenAI agent doesn't output the reasoning content.

@ShinnosukeUesaka
Author

@rm-openai

@rm-openai
Collaborator

It's used for models that reason. Try this, for example:

import asyncio

from agents import Agent, Runner

async def main():
    agent = Agent(name="Assistant", instructions="You are a helpful assistant.", model="o4-mini")

    result = Runner.run_streamed(
        agent, input="Largest city in the 3rd largest country in the world?"
    )
    async for event in result.stream_events():
        if event.type == "raw_response_event":
            print(event.data, "\n\n")


if __name__ == "__main__":
    asyncio.run(main())


1164974997 commented Apr 20, 2025

@rm-openai I tested your code, but the issue persists. The summary field of ResponseReasoningItem remains empty ([]). Even after modifying the question to make it more complex, the logs still contain no reasoning content.

ResponseCreatedEvent(response=Response(id='resp_68046cf29b848192b812876897ea84240a1b1d6f33e93635', created_at=1745120498.0, error=None, incomplete_details=None, instructions='You are a helpful assistant.', metadata={}, model='o4-mini-2025-04-16', object='response', output=[], parallel_tool_calls=True, temperature=1.0, tool_choice='auto', tools=[], top_p=1.0, max_output_tokens=None, previous_response_id=None, reasoning=Reasoning(effort='medium', generate_summary=None, summary=None), status='in_progress', text=ResponseTextConfig(format=ResponseFormatText(type='text')), truncation='disabled', usage=None, user=None, service_tier='auto', store=True), type='response.created') 


ResponseInProgressEvent(response=Response(id='resp_68046cf29b848192b812876897ea84240a1b1d6f33e93635', created_at=1745120498.0, error=None, incomplete_details=None, instructions='You are a helpful assistant.', metadata={}, model='o4-mini-2025-04-16', object='response', output=[], parallel_tool_calls=True, temperature=1.0, tool_choice='auto', tools=[], top_p=1.0, max_output_tokens=None, previous_response_id=None, reasoning=Reasoning(effort='medium', generate_summary=None, summary=None), status='in_progress', text=ResponseTextConfig(format=ResponseFormatText(type='text')), truncation='disabled', usage=None, user=None, service_tier='auto', store=True), type='response.in_progress') 


ResponseOutputItemAddedEvent(item=ResponseReasoningItem(id='rs_68046cf346b481929701b728dd0c2def0a1b1d6f33e93635', summary=[], type='reasoning', status=None), output_index=0, type='response.output_item.added') 


ResponseOutputItemDoneEvent(item=ResponseReasoningItem(id='rs_68046cf346b481929701b728dd0c2def0a1b1d6f33e93635', summary=[], type='reasoning', status=None), output_index=0, type='response.output_item.done') 


ResponseOutputItemAddedEvent(item=ResponseOutputMessage(id='msg_68046cf57be48192ac37f343d847a6ae0a1b1d6f33e93635', content=[], role='assistant', status='in_progress', type='message'), output_index=1, type='response.output_item.added') 


ResponseContentPartAddedEvent(content_index=0, item_id='msg_68046cf57be48192ac37f343d847a6ae0a1b1d6f33e93635', output_index=1, part=ResponseOutputText(annotations=[], text='', type='output_text'), type='response.content_part.added') 


ResponseTextDeltaEvent(content_index=0, delta='The', item_id='msg_68046cf57be48192ac37f343d847a6ae0a1b1d6f33e93635', output_index=1, type='response.output_text.delta') 


ResponseTextDeltaEvent(content_index=0, delta=' third', item_id='msg_68046cf57be48192ac37f343d847a6ae0a1b1d6f33e93635', output_index=1, type='response.output_text.delta') 


ResponseTextDeltaEvent(content_index=0, delta='‑', item_id='msg_68046cf57be48192ac37f343d847a6ae0a1b1d6f33e93635', output_index=1, type='response.output_text.delta') 


ResponseTextDeltaEvent(content_index=0, delta='largest', item_id='msg_68046cf57be48192ac37f343d847a6ae0a1b1d6f33e93635', output_index=1, type='response.output_text.delta') 

@rm-openai
Collaborator

@1164974997 oops, you're totally right: you'll need to include the reasoning param in ModelSettings. Here's a simpler example (it will be a type error unless you're on the latest openai package, but it will work):

import asyncio

from agents import Agent, ModelSettings, Runner


async def main():
    agent = Agent(
        name="Assistant",
        instructions="You are a helpful assistant.",
        model="o4-mini",
        model_settings=ModelSettings(reasoning={"summary": "auto"}),
    )

    result = await Runner.run(agent, input="Largest city in the 3rd largest country in the world?")
    for item in result.new_items:
        print(item)


if __name__ == "__main__":
    asyncio.run(main())

Output:

ReasoningItem(agent=Agent(name='Assistant', instructions='You are a helpful assistant.', handoff_description=None, handoffs=[], model='o4-mini', model_settings=ModelSettings(temperature=None, top_p=None, frequency_penalty=None, presence_penalty=None, tool_choice=None, parallel_tool_calls=None, truncation=None, max_tokens=None, reasoning={'summary': 'auto'}, metadata=None, store=None, include_usage=None, extra_query=None, extra_body=None), tools=[], mcp_servers=[], mcp_config={}, input_guardrails=[], output_guardrails=[], output_type=None, hooks=None, tool_use_behavior='run_llm_again', reset_tool_choice=True), raw_item=ResponseReasoningItem(id='rs_...', summary=[Summary(text='**Determining the Largest City**\n\nI\'m parsing the question about the largest city in the third-largest country. The world\'s largest by area is Russia, followed by Canada, and then the USA. So, the USA is third, and New York City is its largest city by population. I should note, though, that if we\'re thinking by area, Anchorage takes the title. However, the typical interpretation of "largest city" leans towards population, so it\'s almost certainly New York City. Let\'s confirm these details too.', type='summary_text'), Summary(text='**Concluding on NYC**\n\nThe answer to the question is clearly New York City, which is the largest by population in the USA. There’s a possibility that "largest" could refer to area, in which case Sitka, Alaska would be the answer. However, it’s safe to assume the intent is about population. So, I’m confirming that New York City is indeed the largest city in the third-largest country, the United States, by population. Final answer: New York City.', type='summary_text')], type='reasoning', status=None), type='reasoning_item')
...

@leewsimpson

Has anyone got this working? I tried the above, and I somehow get this response from the agent: "Sorry, I encountered an error while processing your request. 'dict' object has no attribute 'effort'"

Latest SDK from yesterday.

@rm-openai
Collaborator

@leewsimpson

  1. What model are you using?
  2. You might need to upgrade the openai SDK: pip install --upgrade openai

@leewsimpson

Interestingly, I had a slightly different setup that causes this error:

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model=OpenAIChatCompletionsModel(
        openai_client=OpenAI(),
        model="o4-mini"),
    model_settings=ModelSettings(reasoning={"summary": "auto"})
)

@rm-openai
Collaborator

@leewsimpson sorry, my fault; you actually need

ModelSettings(reasoning=Reasoning(summary="auto"))

In any case it won't work with ChatCompletions because currently reasoning summaries are only supported via the Responses API.

@1164974997

@rm-openai
It seems that this code only supports OpenAI’s own models. When I switch the model to deepseek-r1, the output doesn’t contain any reasoning items or summaries.

import asyncio

from openai import AsyncOpenAI
from openai.types.shared import Reasoning

from agents import Agent, ModelSettings, OpenAIChatCompletionsModel, Runner

client = AsyncOpenAI(base_url="...", api_key="...")  # the endpoint hosting deepseek-r1


async def main():
    agent = Agent(
        name="Assistant",
        instructions="You are a helpful assistant.",
        model=OpenAIChatCompletionsModel(model="deepseek-r1", openai_client=client),
        model_settings=ModelSettings(reasoning=Reasoning(summary="auto", effort="high")),
    )
    result = await Runner.run(
        agent, input="Largest city in the 3rd largest country in the world?"
    )
    for item in result.new_items:
        print(item)


if __name__ == "__main__":
    asyncio.run(main())

Output:

MessageOutputItem(agent=Agent(name='Assistant', instructions='You are a helpful assistant.', handoff_description=None, handoffs=[], model=<agents.models.openai_chatcompletions.OpenAIChatCompletionsModel object at 0x106df47c0>, model_settings=ModelSettings(temperature=None, top_p=None, frequency_penalty=None, presence_penalty=None, tool_choice=None, parallel_tool_calls=None, truncation=None, max_tokens=None, reasoning=Reasoning(effort='high', generate_summary=None, summary='auto'), metadata=None, store=None, include_usage=None, extra_query=None, extra_body=None), tools=[], mcp_servers=[], mcp_config={}, input_guardrails=[], output_guardrails=[], output_type=None, hooks=None, tool_use_behavior='run_llm_again', reset_tool_choice=True), raw_item=ResponseOutputMessage(id='__fake_id__', content=[ResponseOutputText(annotations=[], text='\n\nThe third largest country in the world by total area is the **United States**. The largest city in the United States by population is **New York City**. \n\n**Answer:** New York City.', type='output_text')], role='assistant', status='completed', type='message'), type='message_output_item')

@ShinnosukeUesaka
Author

@rm-openai
Thank you for the answer.

One follow-up question:
Is the reasoning summary included in the prompt on subsequent turns?
