Hi, I am new to the NVIDIA API and confused about how it uses the OpenAI API, which I thought was meant only for GPT models, not models like Llama. Can someone explain how this works?
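The short answer is that the "OpenAI API" is really just an HTTP+JSON wire format (e.g. `POST /v1/chat/completions`), and any provider can serve it. NVIDIA's hosted endpoint implements that format, so the same OpenAI SDK works against Llama models once you change the base URL. As a minimal sketch (stdlib only, no SDK), the following builds such a request without sending it; the base URL and model name follow NVIDIA's published examples, and the API key is a placeholder:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build an OpenAI-style chat-completions request for any compatible server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        base_url + "/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # same auth scheme as OpenAI
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Same request shape, different host: only base_url and model change.
req = build_chat_request(
    "https://integrate.api.nvidia.com/v1",
    "nvapi-XXXX",  # placeholder; a real NVIDIA key is required to send this
    "meta/llama-3.1-8b-instruct",
    "Hello!",
)
print(req.full_url)  # the endpoint path mirrors OpenAI's /v1/chat/completions
```

Because the request shape is identical, the official `openai` Python package can be pointed at NVIDIA by passing `base_url="https://integrate.api.nvidia.com/v1"` and your `nvapi-` key when constructing the client; no GPT-specific behavior is involved.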