Insights: openai/openai-agents-python
Overview
14 Pull requests merged by 9 people

- docs: add FutureAGI to tracing documentation (#592, merged Apr 25, 2025)
- Make the TTS voices type exportable (#577, merged Apr 24, 2025)
- Add usage to context in streaming (#595, merged Apr 24, 2025)
- v0.0.13 (#593, merged Apr 24, 2025)
- More tests for cancelling streamed run (#590, merged Apr 24, 2025)
- Fix stream error using LiteLLM (#589, merged Apr 24, 2025)
- Prevent MCP ClientSession hang (#580, merged Apr 24, 2025)
- Create to_json_dict for ModelSettings (#582, merged Apr 24, 2025)
- Allow cancel out of the streaming result (#579, merged Apr 23, 2025; see the sketch after this list)
- Examples: Fix financial_research_agent instructions (#573, merged Apr 23, 2025)
- Adding extra_headers parameters to ModelSettings (#550, merged Apr 23, 2025)
- v0.0.12 (#564, merged Apr 22, 2025)
- Pass through organization/project headers to tracing backend, fix speech_group enum (#562, merged Apr 21, 2025)
- Docs and tests for litellm (#561, merged Apr 21, 2025)

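Two of the merged changes above touch the streaming API: #579 lets callers cancel a streamed run, and #590 adds tests for that behavior. A minimal sketch of how this could be used, assuming a `cancel()` method on the result returned by `Runner.run_streamed`; the prompt and cutoff are illustrative:

```python
import asyncio

from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="Answer at length.")


async def main() -> None:
    # run_streamed returns immediately; events arrive incrementally.
    result = Runner.run_streamed(agent, input="Tell me a very long story.")
    seen = 0
    async for _event in result.stream_events():
        seen += 1
        if seen >= 10:
            # Assumed per PR #579: stop the run and close the event stream early.
            result.cancel()
            break
    print(f"Stopped after {seen} events.")


asyncio.run(main())
```
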
7 Pull requests opened by 6 people

- Add File Loading Utilities for Agent Instructions (#565, opened Apr 22, 2025)
- Make input/new items available in the run context (#572, opened Apr 22, 2025)
- Add a new GH Actions job to automatically update translated document pages (#598, opened Apr 24, 2025)
- Update README.md (#606, opened Apr 26, 2025)
- Remove vscode settings from the project (#607, opened Apr 26, 2025)
- Add ProviderError exception and handle missing LLM response (#609, opened Apr 26, 2025)
- Fix spacing inconsistencies and adhere to style guidelines (#612, opened Apr 27, 2025)

28 Issues closed by 14 people

- how to print Mcp tools (#615, closed Apr 28, 2025; see the MCP sketch after this list)
- Is RunContext thread safe? (#537, closed Apr 28, 2025)
- agent output format and handoff stability (#568, closed Apr 27, 2025)
- Optimize Latency for Parallel Agent Runs with Streaming (#498, closed Apr 27, 2025)
- How would you handle Pydantic output_type validation and retries? (#530, closed Apr 27, 2025)
- handoff() returned Handoff objects aren’t recognized by Agent(handoffs=…) without manual .name alias (#599, closed Apr 27, 2025)
- Error in Stream in Runner.run_streamed() with LitellmModel(Model) class (#601, closed Apr 26, 2025)
- Support AWS Bedrock (#86, closed Apr 26, 2025)
- Issue when processing real time audio from a Twilio media stream (#304, closed Apr 26, 2025)
- What are the best practices for faster voice conversations? (#306, closed Apr 26, 2025)
- AttributeError: module 'planner_agent' has no attribute 'handoffs' (#596, closed Apr 25, 2025)
- It seems I’m unable to access the file `sample.txt` (#584, closed Apr 25, 2025)
- Optional Fields in Agent Output Cause JSON Schema Error with AWS Bedrock (#586, closed Apr 25, 2025)
- Is there a way to block the handoff to an agent based on custom logic? (#585, closed Apr 25, 2025)
- 'AgentOutputSchema' object has no attribute '__mro__' (#597, closed Apr 25, 2025)
- How can I pass dynamic instructions to the agent (#482, closed Apr 25, 2025)
- context.usage returns 0 in streaming mode (#594, closed Apr 24, 2025)
- LiteLLM extension crashes with run_streamed (#587, closed Apr 24, 2025)
- Returning function call's responses in `raw_response_event` (#328, closed Apr 24, 2025)
- Canceling the stream from result.stream_events() (#574, closed Apr 23, 2025)
- When will the .NET version be available? (#571, closed Apr 23, 2025)
- How to make hand-off decisions more reliable? (#541, closed Apr 23, 2025)
- Nested objects in function tool input are empty (#563, closed Apr 22, 2025)
- openai-agents package not installing correctly. "ModuleNotFound agents" (#399, closed Apr 22, 2025)
- Missing packages on install for voice with macOS Intel (#478, closed Apr 22, 2025)
- Adding Langsmith trace processor introduces huge latency to chat (#529, closed Apr 22, 2025)

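Several of the closed issues touch MCP, and #615 specifically asks how to print the tools an MCP server exposes. A minimal sketch, assuming the SDK's `MCPServerStdio` helper; the filesystem server command is only an example, so substitute your own server's command and arguments:

```python
import asyncio

from agents.mcp import MCPServerStdio


async def main() -> None:
    # Launch an MCP server over stdio; the filesystem server is just an example.
    async with MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
        }
    ) as server:
        # Ask the server which tools it exposes and print their names.
        tools = await server.list_tools()
        for tool in tools:
            print(tool.name)


asyncio.run(main())
```
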
22 Issues opened by 21 people

- from agents.extensions.models.litellm_model import LitellmModel (#621, opened Apr 28, 2025; see the LiteLLM sketch after this list)
- AWS Bedrock via LiteLLM (#620, opened Apr 28, 2025)
- https://static.hotmart.com/checkout/widget.min.js (#619, opened Apr 28, 2025)
- Resource tracker warning (leaked semaphores) with MCPServerStdio (#618, opened Apr 28, 2025)
- Handoff Agent and Tool Call Not Triggering Reliably in Multi-Agent Setup (#617, opened Apr 28, 2025)
- Add HTTP (non-stdio) MCP server support to Agents SDK (#616, opened Apr 27, 2025)
- [Bug]: ModuleNotFoundError: No module named 'enterprise' When Using litellm==1.48.1 in Google Colab (#614, opened Apr 27, 2025)
- ModuleNotFoundError: No module named 'enterprise' #10353 (#613, opened Apr 27, 2025)
- Bug: style guideline and formatting inconsistencies (#611, opened Apr 27, 2025)
- [Bug]: UnicodeDecodeError when importing litellm_model on Windows (#610, opened Apr 26, 2025)
- additionalProperties should not be set for object types (#608, opened Apr 26, 2025)
- [Bug]: SDK crashes when `choices` is `None` (provider-error payload) (#604, opened Apr 25, 2025)
- Integration of deterministic conversations and other agents (#603, opened Apr 25, 2025)
- Add HTTP Streamable support for MCP's (#600, opened Apr 25, 2025)
- openai-agents-dotnet (#588, opened Apr 24, 2025)
- Ordering of events in Runner.run_streamed is incorrect (#583, opened Apr 24, 2025)
- input_guardrail is skipped (#576, opened Apr 23, 2025)
- Triage agent can not delegate task to handoff agent (#575, opened Apr 23, 2025)
- bugs in run.py (#570, opened Apr 22, 2025)
- Reasoning model items provide to General model (#569, opened Apr 22, 2025)
- How to use llm outputs in the on_handoff function (#567, opened Apr 22, 2025)

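Issues #621 and #620 above concern importing `LitellmModel` and reaching AWS Bedrock through LiteLLM. A minimal sketch, assuming the `litellm` optional extra is installed (for example via `pip install "openai-agents[litellm]"`); the model string and environment variable are illustrative:

```python
import os

from agents import Agent, Runner
from agents.extensions.models.litellm_model import LitellmModel

# Any LiteLLM-style model string should work here; a Bedrock identifier
# such as "bedrock/anthropic.claude-3-sonnet-20240229-v1:0" is the kind of
# value issue #620 is about (illustrative, not verified here).
agent = Agent(
    name="Assistant",
    instructions="Answer concisely.",
    model=LitellmModel(
        model="anthropic/claude-3-5-sonnet-20240620",
        api_key=os.environ.get("ANTHROPIC_API_KEY"),
    ),
)

result = Runner.run_sync(agent, "What is the capital of France?")
print(result.final_output)
```
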
18 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.

- add reasoning content to ChatCompletions (#494, commented on Apr 25, 2025; 7 new comments)
- Refine return types for OnHandoff callbacks to improve type safety (#531, commented on Apr 27, 2025; 0 new comments)
- [Visualization Extension] Enhance agent graph visualization to prevent infinite recursion, add mermaid graph (#445, commented on Apr 25, 2025; 0 new comments)
- Fact checking guardrails (#347, commented on Apr 23, 2025; 0 new comments)
- Add tool call parameters for `on_tool_start` hook (#253, commented on Apr 26, 2025; 0 new comments; see the hooks sketch after this list)
- Accessing reasoning tokens of another llm model in agents sdk (#462, commented on Apr 27, 2025; 0 new comments)
- 'AsyncOpenAI' Object Has No Attribute 'responses' in Async Runner & 'this event loop is already running' in Sync Runner (#542, commented on Apr 27, 2025; 0 new comments)
- What is the role of ReasoningItem (#480, commented on Apr 26, 2025; 0 new comments)
- Enhance `on_tool_start` Hook to Include Tool Call Arguments (#252, commented on Apr 26, 2025; 0 new comments)
- human-in-the-loop (#378, commented on Apr 26, 2025; 0 new comments)
- How to make the conversation finally back to the MAIN AGENT (#527, commented on Apr 25, 2025; 0 new comments)
- History Cleaning (#545, commented on Apr 25, 2025; 0 new comments)
- Custom Model Provider Not Working (#485, commented on Apr 25, 2025; 0 new comments)
- Support for OpenAI agents sdk with Javascript/Typescript (#240, commented on Apr 23, 2025; 0 new comments)
- how to use Code Interpreter or Image Output in OpenAI Agents SDK (#360, commented on Apr 23, 2025; 0 new comments)
- Websocket streaming audio in realtime from client (#536, commented on Apr 23, 2025; 0 new comments)
- function call can not get call_id (#559, commented on Apr 22, 2025; 0 new comments)
- Add `reasoning_content` to ChatCompletions (#415, commented on Apr 22, 2025; 0 new comments)

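Issues #252 and #253 in this list ask for tool-call arguments to be exposed in the `on_tool_start` hook. A rough sketch of where that hook sits today, assuming the SDK's `AgentHooks` lifecycle interface; the tool and agent names are illustrative, and the requested argument access is deliberately not shown because it is what these issues propose adding:

```python
from agents import Agent, function_tool
from agents.lifecycle import AgentHooks


@function_tool
def get_weather(city: str) -> str:
    """Return a canned weather report for the given city."""
    return f"It is sunny in {city}."


class LoggingHooks(AgentHooks):
    async def on_tool_start(self, context, agent, tool) -> None:
        # The hook currently receives the tool object but not its call
        # arguments; surfacing those arguments is what #252/#253 request.
        print(f"{agent.name} is about to call tool: {tool.name}")


agent = Agent(
    name="Assistant",
    instructions="Use tools when they help answer the question.",
    tools=[get_weather],
    hooks=LoggingHooks(),
)
```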