langchain-nvidia-ai-endpoints 0.2.2 (LangChain Python API Reference)

callbacks

Callback handlers and helpers for tracking token usage and cost when calling NVIDIA AI endpoints.

Classes

callbacks.UsageCallbackHandler()

Callback handler that tracks token usage and cost information.
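
A minimal sketch of attaching the handler to a chat model directly, as an alternative to the get_usage_callback context manager documented below. The model id and the handler attributes read at the end (total_tokens, total_cost) are assumptions modeled on the analogous OpenAI usage handler, not confirmed by this page:

    from langchain_nvidia_ai_endpoints import ChatNVIDIA
    from langchain_nvidia_ai_endpoints.callbacks import UsageCallbackHandler

    # Assumes NVIDIA_API_KEY is set in the environment; the model id is only an example.
    llm = ChatNVIDIA(model="meta/llama3-8b-instruct")

    handler = UsageCallbackHandler()

    # Attach the handler for a single call via the standard Runnable config.
    llm.invoke("Say hello.", config={"callbacks": [handler]})

    # Attribute names assumed by analogy with the OpenAI usage handler.
    print(handler.total_tokens)
    print(handler.total_cost)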

Functions

callbacks.get_token_cost_for_model(...[, ...])

Get the cost in USD for a given model and number of tokens.

callbacks.get_usage_callback([price_map, ...])

Get a usage-tracking callback handler in a context manager (see the sketch after this list).

callbacks.standardize_model_name(model_name)

Standardize the model name to a canonical form used for cost lookup.
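
A minimal sketch of the get_usage_callback context manager referenced above, modeled on LangChain's get_openai_callback pattern. It assumes the manager yields a UsageCallbackHandler and accumulates every LLM call made inside the with block; the model id and the attributes printed at the end are illustrative assumptions:

    from langchain_nvidia_ai_endpoints import ChatNVIDIA
    from langchain_nvidia_ai_endpoints.callbacks import get_usage_callback

    # Assumes NVIDIA_API_KEY is set in the environment; the model id is only an example.
    llm = ChatNVIDIA(model="meta/llama3-8b-instruct")

    # Calls made inside the block are assumed to be tracked on the yielded handler.
    with get_usage_callback() as cb:
        llm.invoke("Write a haiku about GPUs.")
        llm.invoke("Now translate it into German.")

    # Attribute names assumed by analogy with the OpenAI usage callback.
    print(cb.successful_requests)
    print(cb.total_tokens)
    print(cb.total_cost)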
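
And a sketch of the two helper functions, get_token_cost_for_model and standardize_model_name, on their own. The page elides most of get_token_cost_for_model's parameters, so the positional (model name, token count) order used here is an assumption carried over from the equivalent OpenAI helper:

    from langchain_nvidia_ai_endpoints.callbacks import (
        get_token_cost_for_model,
        standardize_model_name,
    )

    # Normalize a user-supplied model name before looking up its price.
    name = standardize_model_name("meta/llama3-8b-instruct")

    # Assumed argument order (model name, token count); the remaining optional
    # parameters are elided in the signature above and are not filled in here.
    cost = get_token_cost_for_model(name, 1500)
    print(f"Estimated cost for 1500 tokens: ${cost:.6f}")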
