The Cloud SQL for PostgreSQL for LangChain package provides a first-class experience for connecting to Cloud SQL instances from the LangChain ecosystem, with the following benefits:
- Simplified & Secure Connections: easily and securely create shared connection pools to connect to Google Cloud databases utilizing IAM for authorization and database authentication without needing to manage SSL certificates, configure firewall rules, or enable authorized networks.
- Improved performance & Simplified management: a single-table schema can lead to faster query execution, especially for large collections.
- Improved metadata handling: store metadata in columns instead of JSON, resulting in significant performance improvements (see the sketch after this list).
- Clear separation: clearly separate table and extension creation, allowing for distinct permissions and streamlined workflows.
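For example, when metadata fields are known ahead of time, they can be declared as typed columns at table-creation time instead of being packed into a single JSON column. The snippet below is a minimal sketch; the Column helper and the metadata_columns argument follow the package's documented table-initialization API, and the column names are illustrative.
from langchain_google_cloud_sql_pg import Column, PostgresEngine

engine = PostgresEngine.from_instance("project-id", "region", "my-instance", "my-database")
# Store the "source" and "page" metadata fields in dedicated, typed columns
# rather than in the default JSON metadata column (column names are illustrative).
engine.init_vectorstore_table(
    table_name="my-table",
    vector_size=768,
    metadata_columns=[Column("source", "TEXT"), Column("page", "INTEGER")],
)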
In order to use this library, you first need to go through the following steps:
- Select or create a Cloud Platform project.
- Enable billing for your project.
- Enable the Cloud SQL Admin API.
- Set up authentication.
Install this library in a virtual environment using venv. venv is a tool that creates isolated Python environments. These isolated environments can have separate versions of Python packages, which allows you to isolate one project's dependencies from the dependencies of other projects.
With venv, it's possible to install this library without needing system install permissions, and without clashing with the installed system dependencies.
Supported Python versions: Python >= 3.9
Mac/Linux:
pip install virtualenv
virtualenv <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install langchain-google-cloud-sql-pg
Windows:
pip install virtualenv
virtualenv <your-env>
<your-env>\Scripts\activate
<your-env>\Scripts\pip.exe install langchain-google-cloud-sql-pg
Code samples and snippets live in the samples/ folder.
Use a Vector Store to store embedded data and perform vector search.
from langchain_google_cloud_sql_pg import PostgresVectorStore, PostgresEngine
from langchain_google_vertexai import VertexAIEmbeddings
engine = PostgresEngine.from_instance("project-id", "region", "my-instance", "my-database")
engine.init_vectorstore_table(
    table_name="my-table",
    vector_size=768,  # Vector size for `VertexAIEmbeddings()`
)
embedding_service = VertexAIEmbeddings(model_name="textembedding-gecko@003")
vectorstore = PostgresVectorStore.create_sync(
    engine,
    table_name="my-table",
    embedding_service=embedding_service
)
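Once created, the store supports the standard LangChain vector store methods. A brief sketch with illustrative texts and query:
# Add texts (with optional metadata) and query by similarity.
vectorstore.add_texts(
    ["Cloud SQL is a managed relational database service."],
    metadatas=[{"source": "docs"}],
)
results = vectorstore.similarity_search("What is Cloud SQL?", k=1)
print(results[0].page_content)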
See the full Vector Store tutorial.
Use a document loader to load data as Documents.
from langchain_google_cloud_sql_pg import PostgresEngine, PostgresLoader
engine = PostgresEngine.from_instance("project-id", "region", "my-instance", "my-database")
loader = PostgresLoader.create_sync(
    engine,
    table_name="my-table-name"
)
docs = loader.lazy_load()
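lazy_load() returns an iterator of Documents, so rows can be streamed without loading the whole table at once. A minimal sketch:
# Iterate over documents as they are fetched.
for doc in docs:
    print(doc.page_content, doc.metadata)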
See the full Document Loader tutorial.
Use Chat Message History to store messages and provide conversation history to LLMs.
from langchain_google_cloud_sql_pg import PostgresChatMessageHistory, PostgresEngine
engine = PostgresEngine.from_instance("project-id", "region", "my-instance", "my-database")
engine.init_chat_history_table(table_name="my-message-store")
history = PostgresChatMessageHistory.create_sync(
    engine,
    table_name="my-message-store",
    session_id="my-session_id"
)
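The history object implements the standard LangChain chat message history interface. A brief sketch:
# Append messages to the store and read the conversation back.
history.add_user_message("Hi!")
history.add_ai_message("Hello! How can I help you?")
print(history.messages)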
See the full Chat Message History tutorial.
Use PostgresSaver to save snapshots of the graph state at a given point in time.
from langchain_google_cloud_sql_pg import PostgresSaver, PostgresEngine
engine = PostgresEngine.from_instance("project-id", "region", "my-instance", "my-database")
checkpoint = PostgresSaver.create_sync(engine)
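The saver is intended to be used as a LangGraph checkpointer. The snippet below is a sketch that assumes an existing langgraph StateGraph builder (here called graph_builder) and an illustrative message-based state; the thread_id groups checkpoints for a single conversation:
# Hypothetical: `graph_builder` is a previously constructed langgraph.graph.StateGraph.
graph = graph_builder.compile(checkpointer=checkpoint)
config = {"configurable": {"thread_id": "my-thread"}}
result = graph.invoke({"messages": [("user", "Hi!")]}, config)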
See the full Checkpoint tutorial.
Code examples can be found in the samples/ folder.
Async functionality improves the speed and efficiency of database connections through concurrency, which is key for providing enterprise quality performance and scaling in GenAI applications. This package uses a native async Postgres driver, asyncpg, to optimize Python's async functionality.
LangChain supports async programming, since LLM-based applications involve many I/O-bound operations, such as making API calls to language models, databases, or other services. All components should provide both async and sync versions of all methods.
asyncio is a Python library used for concurrent programming and is used as the foundation for multiple Python asynchronous frameworks. asyncio uses async / await syntax to achieve concurrency for non-blocking I/O-bound tasks using one thread with cooperative multitasking instead of multi-threading.
Update sync methods to await async methods
engine = await PostgresEngine.afrom_instance("project-id", "region", "my-instance", "my-database")
await engine.ainit_vectorstore_table(table_name="my-table", vector_size=768)
vectorstore = await PostgresVectorStore.create(
    engine,
    table_name="my-table",
    embedding_service=VertexAIEmbeddings(model_name="textembedding-gecko@003")
)
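The async counterparts of the vector store methods are prefixed with a, for example aadd_texts and asimilarity_search. A brief sketch, to be run inside an async function:
# Inside an async function / running event loop.
await vectorstore.aadd_texts(["Cloud SQL is a managed relational database service."])
results = await vectorstore.asimilarity_search("What is Cloud SQL?", k=1)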
IPython and Jupyter notebooks support the use of the await keyword without any additional setup.
Update routes to use async def.
@app.get("/invoke/")
async def invoke(query: str):
    return await retriever.ainvoke(query)
It is recommended to create a top-level async method definition (async def) to wrap multiple async methods, then use asyncio.run() to run the top-level entrypoint, e.g. main().
import asyncio

async def main():
    response = await retriever.ainvoke(query)
    print(response)

asyncio.run(main())
Contributions to this library are always welcome and highly encouraged.
See CONTRIBUTING for more information on how to get started.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. See Code of Conduct for more information.
Apache 2.0 - See LICENSE for more information.
This is not an officially supported Google product.