fix: Updated openai instrumentation to properly handle streaming when `stream_options.include_usage` is set (#3494)
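For context, a minimal sketch of the streaming shape this fix handles, assuming the official openai Node.js client (the prompt and handling below are illustrative, not taken from the PR): when `stream_options: { include_usage: true }` is set, the stream's final chunk carries a `usage` object and an empty `choices` array, so instrumentation that assumes `choices[0]` always exists will misbehave.

```js
const OpenAI = require('openai')
const client = new OpenAI() // reads OPENAI_API_KEY from the environment

async function main() {
  const stream = await client.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello' }],
    stream: true,
    stream_options: { include_usage: true }
  })

  for await (const chunk of stream) {
    if (chunk.choices.length > 0) {
      // Ordinary content chunks: deltas arrive in choices[0].delta
      process.stdout.write(chunk.choices[0].delta?.content ?? '')
    } else if (chunk.usage) {
      // Final chunk when include_usage is set: choices is empty and
      // usage holds the token counts for the whole request.
      console.log('\ntokens:', chunk.usage.prompt_tokens, chunk.usage.completion_tokens)
    }
  }
}

main()
```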
Conversation
Codecov Report

✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
##             main    #3494      +/-   ##
==========================================
- Coverage   97.76%   97.64%   -0.13%
==========================================
  Files         420      420
  Lines       55719    55728       +9
  Branches        1        1
==========================================
- Hits        54476    54416      -60
- Misses       1243     1312      +69

Flags with carried forward coverage won't be shown.
The branch was force-pushed from 78a1528 to 9145bd0, then from 9145bd0 to a7ff18c ("fix: Updated openai instrumentation to properly handle streaming when `stream_options.include_usage` is set").
```js
test('does not calculate tokens when no content exists', (t, end) => {
  const { agent } = t.nr
  const req = {
    model: 'gpt-3.5-turbo-0613'
```
A nit, but I feel like we should store these model IDs as constants at the top of the file.
I can, but I'd rather do this as a follow-up. These 3 PRs around tokens fix bugs.
Sounds good, just wanted to point it out
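As a sketch of the suggested follow-up (the constant names here are hypothetical):

```js
// Hypothetical constants hoisted to the top of the test file so each
// test references a single source of truth for model ids.
const GPT_35_MODEL = 'gpt-3.5-turbo-0613'
const GPT_4_MODEL = 'gpt-4'
```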
```diff
   input: content,
-  model: 'gpt-4'
+  model: 'gpt-4',
+  stream_options: { include_usage: true }
```
Following up on the previous comment: are these tests dependent on the GPT version (if I remember correctly, I think not)? If not, the mocks and tests should just support one model ID for simplicity.
The model is not taken into account by the mock server. The places where the model is stored as a variable are for assertions on the LlmCompletion* events.
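For illustration, a self-contained sketch of that kind of assertion; the event shape and attribute names here are assumptions, not taken from the diff:

```js
const assert = require('node:assert')

// Hypothetical recorded event; attribute names are assumed for illustration.
const completionSummary = { 'request.model': 'gpt-4', 'response.model': 'gpt-4' }

// The model id stored in a variable only drives assertions like this one;
// the mock server never reads it.
const model = 'gpt-4'
assert.equal(completionSummary['request.model'], model)
```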
Okay, then we could just standardize them e.g. replacing 3.5 with 4 for consistency. It might make it easier to reason about when we look at it in the future, so we don't think it's model-specific. Like you said, this can be done in a separate PR.
Description
Please provide a brief description of the changes introduced in this pull request.
What problem does it solve? What is the context of this change?
How to Test
Please describe how you have tested these changes. Have you run the code against an example application?
What steps did you take to ensure that the changes are working correctly?
Related Issues
Please include any related issues or pull requests in this section, using the format `Closes #<issue number>` or `Fixes #<issue number>` if applicable.