
Conversation

@bizob2828 (Member)

Description

Please provide a brief description of the changes introduced in this pull request.
What problem does it solve? What is the context of this change?

How to Test

Please describe how you have tested these changes. Have you run the code against an example application?
What steps did you take to ensure that the changes are working correctly?

Related Issues

Please include any related issues or pull requests in this section, using the format Closes #<issue number> or Fixes #<issue number> if applicable.

@codecov

codecov bot commented Nov 6, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 97.64%. Comparing base (b1687f6) to head (a7ff18c).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files
```
@@            Coverage Diff             @@
##             main    #3494      +/-   ##
==========================================
- Coverage   97.76%   97.64%   -0.13%
==========================================
  Files         420      420
  Lines       55719    55728       +9
  Branches        1        1
==========================================
- Hits        54476    54416      -60
- Misses       1243     1312      +69
```
| Flag | Coverage Δ |
|------|------------|
| integration-tests-cjs-20.x | 73.91% <0.00%> (-0.02%) ⬇️ |
| integration-tests-cjs-22.x | 73.95% <0.00%> (-0.02%) ⬇️ |
| integration-tests-cjs-24.x | 74.71% <0.00%> (-0.02%) ⬇️ |
| integration-tests-esm-20.x | 52.50% <0.00%> (?) |
| integration-tests-esm-22.x | 52.55% <0.00%> (-0.01%) ⬇️ |
| integration-tests-esm-24.x | 53.75% <0.00%> (-0.01%) ⬇️ |
| unit-tests-20.x | 88.68% <0.00%> (-0.02%) ⬇️ |
| unit-tests-22.x | 88.71% <0.00%> (-0.02%) ⬇️ |
| unit-tests-24.x | 88.72% <0.00%> (-0.02%) ⬇️ |
| versioned-tests-20.x | 81.01% <100.00%> (-0.29%) ⬇️ |
| versioned-tests-22.x | 81.01% <100.00%> (-0.30%) ⬇️ |
| versioned-tests-24.x | 80.94% <100.00%> (-0.31%) ⬇️ |

Flags with carried forward coverage won't be shown.


@bizob2828 force-pushed the add-token-counts-openai-genai branch 2 times, most recently from 78a1528 to 9145bd0 (November 6, 2025 21:39)
@bizob2828 force-pushed the add-token-counts-openai-genai branch from 9145bd0 to a7ff18c (November 11, 2025 14:55)
@bizob2828 changed the title from "Add token counts OpenAI genai" to "fix: Updated openai instrumentation to properly handle streaming when stream_options.include_usage is set" (Nov 11, 2025)
@bizob2828 marked this pull request as ready for review (November 11, 2025 14:55)
@bizob2828 requested review from amychisholm03 and svetlanabrennan, and removed the review request for svetlanabrennan (November 11, 2025 15:06)
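For context on the retitle: when `stream_options.include_usage` is set, the OpenAI streaming API emits one final chunk whose `choices` array is empty and whose `usage` field carries the token counts, so any consumer (or instrumentation) that assumes every chunk has `choices[0]` will mishandle the stream. Below is a minimal consumer sketch using the `openai` Node SDK — illustrative only, not the agent's actual instrumentation code:

```js
import OpenAI from 'openai'

const client = new OpenAI() // reads OPENAI_API_KEY from the environment

const stream = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'hello' }],
  stream: true,
  stream_options: { include_usage: true }
})

let usage = null
for await (const chunk of stream) {
  if (chunk.usage) {
    // Final chunk: `choices` is empty and only the token counts are present.
    usage = chunk.usage // { prompt_tokens, completion_tokens, total_tokens }
    continue
  }
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '')
}
```

With the flag set, token counts can be taken straight from that final chunk rather than estimated, which is the case the new title says the instrumentation now handles properly.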
```js
test('does not calculate tokens when no content exists', (t, end) => {
  const { agent } = t.nr
  const req = {
    model: 'gpt-3.5-turbo-0613'
```
Contributor
A nit, but I feel like we should store these model ids off as constants at the top of the file

Member Author
I can, but I'd rather do this as a follow-up. These three PRs around tokens fix bugs.

Contributor
Sounds good, just wanted to point it out.
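For reference, the suggested follow-up might look something like this (a sketch only; the constant names are hypothetical):

```js
// Hypothetical constants hoisted to the top of the test file
const GPT_3_5_MODEL = 'gpt-3.5-turbo-0613'
const GPT_4_MODEL = 'gpt-4'

test('does not calculate tokens when no content exists', (t, end) => {
  const { agent } = t.nr
  const req = {
    model: GPT_3_5_MODEL
  }
  // ...
})
```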

```js
  input: content,
  model: 'gpt-4',
  stream_options: { include_usage: true }

  model: 'gpt-4'
```
Contributor
Following up on the previous comment: are these tests dependent on the GPT version? (If I remember correctly, I think not.) If not, the mocks and tests should just use one model id for simplicity.

Member Author
The model is not taken into account by the mock server. The places where the model is stored as a variable are for assertions on the LlmCompletion* events.

Contributor
@amychisholm03 commented Nov 11, 2025

Okay, then we could just standardize them, e.g. replacing 3.5 with 4 for consistency. It might make it easier to reason about when we look at it in the future, so we don't think it's model-specific. Like you said, this can be done in a separate PR.

@amychisholm03 self-assigned this Nov 11, 2025
@bizob2828 merged commit 37c43f5 into newrelic:main Nov 11, 2025 (43 of 45 checks passed)
@bizob2828 deleted the add-token-counts-openai-genai branch November 11, 2025 18:02
The github-project-automation bot moved this from "Needs PR Review" to "Done: Issues recently completed" on the Node.js Engineering Board (Nov 11, 2025)
The github-actions bot mentioned this pull request (Nov 11, 2025)