🦜🔗 LangChain documentation
completion_with_retry

langchain_community.llms.openai.completion_with_retry(llm: BaseOpenAI | OpenAIChat, run_manager: CallbackManagerForLLMRun | None = None, **kwargs: Any) → Any

Use tenacity to retry the completion call, so that transient OpenAI API failures are retried with backoff instead of surfacing immediately.

Parameters:
  • llm (BaseOpenAI | OpenAIChat) – The LLM instance whose client performs the completion request; its retry configuration determines how many attempts are made.

  • run_manager (CallbackManagerForLLMRun | None) – Optional callback manager for the run.

  • kwargs (Any) – Keyword arguments forwarded to the underlying completion call.

Return type:
Any
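The pattern this helper wraps is exponential-backoff retry: call the completion endpoint, and on a transient error wait an increasing interval before trying again, up to a maximum number of attempts. A minimal stdlib-only sketch of that pattern follows; `retry_with_backoff` and `flaky_completion` are illustrative names, not part of the langchain_community API.

```python
import time

def retry_with_backoff(fn, max_attempts=6, base=0.01, cap=1.0):
    """Call fn(), retrying with exponential backoff on any exception.

    Sleeps base * 2**(attempt - 1) seconds between attempts, capped at
    `cap`, and re-raises the last exception after `max_attempts` tries.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(min(cap, base * 2 ** (attempt - 1)))

# Hypothetical flaky endpoint: fails twice with a transient error,
# then succeeds on the third attempt.
calls = {"n": 0}

def flaky_completion():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient API error")
    return {"choices": [{"text": "ok"}]}

result = retry_with_backoff(flaky_completion)
```

In the real helper, the retry policy is built per-LLM (from the instance's retry settings) and only OpenAI-specific transient errors trigger a retry; the sketch above retries on any exception for brevity.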


© Copyright 2023, LangChain Inc.