input_guardrail is skipped #576


Open
Salaudev opened this issue Apr 23, 2025 · 6 comments
Labels
bug Something isn't working

Comments

@Salaudev

@input_guardrail
async def rate_info_guardrail(
    ctx: RunContextWrapper[OrchestratorContext],  # your context type
    agent: Agent,
    input: str | list[TResponseInputItem],  # same sig the SDK expects
) -> GuardrailFunctionOutput:
    """
    Abort the run if any critical rate-info fields are missing, or if
    is_bid_request_sent is still False.
    """

    # 1️⃣  The critical fields that must be present on rate_info.
    _REQUIRED_RATE_FIELDS = [
        "maximum_rate",
        "minimum_rate",
        "rate_usd",
        "is_bid_request_sent",
    ]

    rate = ctx.context.load_context.rate_info  # <-- your own object

    # 2️⃣  Collect the names of missing / invalid fields.
    missing: list[str] = []
    for name in _REQUIRED_RATE_FIELDS:
        value = getattr(rate, name, None)
        if value is None:  # not set
            missing.append(name)
        elif name == "is_bid_request_sent" and value is False:
            missing.append(name)

    # 3️⃣  Decide whether to trip the wire.
    trip = bool(missing)

    print("Rate info guardrail triggered...")

    # 4️⃣  Return the standard GuardrailFunctionOutput.
    return GuardrailFunctionOutput(
        output_info={"missing_fields": missing},  # great for debugging/logging
        tripwire_triggered=trip,
    )


negotiation_agent = Agent[OrchestratorContext](
    name="Negotiation Agent",
    instructions=dynamic_negotiation_agent_instructions,
    model=OpenAIChatCompletionsModel(
        model=os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME"), openai_client=azure_client
    ),
    tools=[
        FunctionTool(
            name="send_reply",
            on_invoke_tool=send_reply,
            description="This tool will send the HTML email body to the broker",
            params_json_schema=AgentsReq.model_json_schema(),
        ),
    ],
    input_guardrails=[rate_info_guardrail],
)

In my logs I see that my negotiation_agent is running, but the print("Rate info guardrail triggered...") output never appears.
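For what it's worth, the field-checking logic itself is sound and can be exercised standalone with a stub object (SimpleNamespace below is just a hypothetical stand-in for rate_info), which suggests the guardrail is never being invoked rather than misbehaving:

```python
from types import SimpleNamespace

_REQUIRED_RATE_FIELDS = [
    "maximum_rate",
    "minimum_rate",
    "rate_usd",
    "is_bid_request_sent",
]

def collect_missing(rate) -> list[str]:
    # Mirrors the guardrail body: a field is "missing" when unset,
    # or when is_bid_request_sent is explicitly False.
    missing: list[str] = []
    for name in _REQUIRED_RATE_FIELDS:
        value = getattr(rate, name, None)
        if value is None or (name == "is_bid_request_sent" and value is False):
            missing.append(name)
    return missing

# Stub rate_info: rate_usd unset, bid request not yet sent.
rate = SimpleNamespace(maximum_rate=2500, minimum_rate=1800, is_bid_request_sent=False)
print(collect_missing(rate))  # ['rate_usd', 'is_bid_request_sent']
```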

@Salaudev Salaudev added the bug (Something isn't working) label Apr 23, 2025
@rm-openai (Collaborator)

@Salaudev is the negotiation_agent the first agent in the run? Only then is the input guardrail called.
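In other words, input guardrails fire only for the agent the run starts with; an agent reached later in the run never gets its input_guardrails invoked. A toy model of that dispatch rule (plain Python for illustration, not the SDK itself):

```python
from dataclasses import dataclass, field

@dataclass
class ToyAgent:
    name: str
    input_guardrails: list = field(default_factory=list)
    handoffs: list = field(default_factory=list)

def toy_run(starting_agent: ToyAgent, user_input: str) -> list[str]:
    # Only the *starting* agent's input guardrails are invoked.
    fired = [guard(user_input) for guard in starting_agent.input_guardrails]
    # Agents reached via handoff skip their input_guardrails entirely.
    return fired

guard = lambda text: f"guardrail saw: {text}"
negotiation = ToyAgent("Negotiation Agent", input_guardrails=[guard])
triage = ToyAgent("Triage Agent", handoffs=[negotiation])

print(toy_run(triage, "hi"))       # [] -- negotiation's guardrail is skipped
print(toy_run(negotiation, "hi"))  # ['guardrail saw: hi']
```

So to have rate_info_guardrail run, negotiation_agent must be the agent passed to the runner at the start.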

@Salaudev
(Author)

Salaudev commented Apr 23, 2025

okay, thanks @rm-openai

Now I've run into another issue.

I'm going to implement it the manager-based way, i.e. calling each agent as a tool.

This is my manager agent:

# should orchestrator between agents
manager = Agent[OrchestratorContext](
    name="Manager Agent",
    instructions="""
        ## Role ##
        You are a professional dispatcher in a trucking company.
        You are responsible for orchestrating the entire load management
        process from analysis to negotiation and final decision-making.

        Your tasks:
        - Use the extract_details tool to gather all load details from the conversation and identify which fields are missing
        - Use the analyze_warnings tool to identify any compliance issues or concerns
        - Use the answer_question tool to answer any questions the broker may have
        - Ask the broker if we have any missed details
        - Use the communicate_with_broker tool to communicate with the broker
        - Use the cancel_load tool to cancel the load
    """,
    model=OpenAIChatCompletionsModel(
        model="gpt-4o", openai_client=azure_client
    ),
    tools=[
        FunctionTool(
            name="cancel_load",
            on_invoke_tool=cancel_load,
            description="This tool will cancel the load",
            params_json_schema=AgentsReq.model_json_schema(),
        ),
        extractor.as_tool(
            tool_name="extract_details",
            tool_description="This tool will extract the load details from the conversation history",
        ),
        warnings_analyzer.as_tool(
            tool_name="analyze_warnings",
            tool_description="This tool will analyze the load warnings",
        ),
        question_answer.as_tool(
            tool_name="answer_question",
            tool_description="This tool will find the answer to the broker's question using the provided load details and company information",
        ),
        communication_agent.as_tool(
            tool_name="communicate_with_broker",
            tool_description="This tool will send a message to the broker",
        ),
    ],
)

In the logs I see this:

[Image: Logfire trace screenshot]

[Image: Logfire trace screenshot]

We started the conversation and added this message to the conversation history with the role assistant.

When we get a reply, the agent runs and passes the previous message into the child agents. Why does it work like that, and how can I avoid it?

@adhishthite

@Salaudev Curious, what tool are you using for viz?

@Salaudev (Author)

> @Salaudev Curious, what tool are you using for viz?

Pydantic Logfire

@Salaudev (Author)

@rm-openai can you help with this, please?

@rm-openai (Collaborator)

@Salaudev by default, agent.as_tool exposes a single input: str parameter that the calling agent has to fill in. You can either give it instructions on what to send (e.g. in the tool description), or create a custom agent-as-tool like this:

https://openai.github.io/openai-agents-python/tools/#customizing-tool-agents
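The effect can be seen in miniature: with the default wrapping, the child agent receives only whatever single string the manager chose to pass, so a stale earlier message is easily forwarded. A custom wrapper lets you decide exactly what the child sees (toy illustration in plain Python, not SDK code; the helper names are made up):

```python
def default_agent_as_tool(child):
    # Default as_tool(): the child receives a single string chosen by the caller.
    def tool(input: str) -> str:
        return child(input)
    return tool

def custom_agent_as_tool(child, history: list[str]):
    # Custom wrapper: forward only the latest message, ignoring what the caller passed.
    def tool(_input: str) -> str:
        return child(history[-1])
    return tool

child_agent = lambda text: f"child got: {text}"
history = ["assistant: our rate is $1800", "broker: can you do $1700?"]

default_tool = default_agent_as_tool(child_agent)
custom_tool = custom_agent_as_tool(child_agent, history)

print(default_tool(history[0]))  # child got: assistant: our rate is $1800
print(custom_tool(history[0]))   # child got: broker: can you do $1700?
```

The linked docs show the real SDK version of this pattern: wrap a Runner.run call in your own function tool so you control the input the sub-agent receives.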
