Astra DB
This page provides a quickstart for using Astra DB as a Vector Store.
DataStax Astra DB is a serverless vector-capable database built on Apache Cassandra® and made conveniently available through an easy-to-use JSON API.
You'll need to install langchain-community with pip install -qU langchain-community to use this integration.
Note: in addition to access to the database, an OpenAI API Key is required to run the full example.
Setup and general dependencies
Use of the integration requires the corresponding Python package:
pip install --upgrade langchain-astradb
Note: the following are all of the packages required to run the full demo on this page. Depending on your LangChain setup, some of them may need to be installed:
pip install langchain langchain-openai datasets pypdf
Import dependencies
import os
from getpass import getpass
from datasets import load_dataset
from langchain_community.document_loaders import PyPDFLoader
from langchain_core.documents import Document
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
os.environ["OPENAI_API_KEY"] = getpass("OPENAI_API_KEY = ")
embe = OpenAIEmbeddings()
Import the Vector Store
from langchain_astradb import AstraDBVectorStore
Connection parameters
These are found on your Astra DB dashboard:
- the API Endpoint looks like https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com
- the Token looks like AstraCS:6gBhNmsk135....
- you may optionally provide a Namespace such as my_namespace
ASTRA_DB_API_ENDPOINT = input("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass("ASTRA_DB_APPLICATION_TOKEN = ")
desired_namespace = input("(optional) Namespace = ")
if desired_namespace:
    ASTRA_DB_KEYSPACE = desired_namespace
else:
    ASTRA_DB_KEYSPACE = None
Now you can create the vector store:
vstore = AstraDBVectorStore(
    embedding=embe,
    collection_name="astra_vector_demo",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    namespace=ASTRA_DB_KEYSPACE,
)
Load a dataset
Convert each entry in the source dataset into a Document, then write them into the vector store:
philo_dataset = load_dataset("datastax/philosopher-quotes")["train"]
docs = []
for entry in philo_dataset:
    metadata = {"author": entry["author"]}
    doc = Document(page_content=entry["quote"], metadata=metadata)
    docs.append(doc)
inserted_ids = vstore.add_documents(docs)
print(f"\nInserted {len(inserted_ids)} documents.")
In the above, metadata dictionaries are created from the source data and are part of the Document.
Note: check the Astra DB API Docs for the valid metadata field names: some characters are reserved and cannot be used.
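For illustration, a minimal sketch that normalizes metadata keys before insertion; the characters replaced below are assumptions, so consult the Astra DB API Docs for the authoritative rules:
def sanitize_metadata_keys(metadata: dict) -> dict:
    # Illustrative only: replace/strip characters that may be reserved
    # in field names (verify the actual reserved set in the API Docs).
    return {
        key.replace(".", "_").lstrip("$"): value
        for key, value in metadata.items()
    }

safe_metadata = sanitize_metadata_keys({"author.name": "plato"})  # -> {"author_name": "plato"}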
Add some more entries, this time with add_texts:
texts = ["I think, therefore I am.", "To the things themselves!"]
metadatas = [{"author": "descartes"}, {"author": "husserl"}]
ids = ["desc_01", "huss_xy"]
inserted_ids_2 = vstore.add_texts(texts=texts, metadatas=metadatas, ids=ids)
print(f"\nInserted {len(inserted_ids_2)} documents.")
Note: you may want to speed up the execution of add_texts and add_documents by increasing the concurrency level for these bulk operations - check out the *_concurrency parameters in the class constructor and the add_texts docstring for more details. Depending on the network and the client machine specifications, your best-performing choice of parameters may vary.
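As a sketch, assuming the bulk-insert concurrency parameters exposed by the constructor (names and defaults can vary across langchain-astradb versions, so verify them against your installed version's docstrings):
vstore_tuned = AstraDBVectorStore(
    embedding=embe,
    collection_name="astra_vector_demo",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    namespace=ASTRA_DB_KEYSPACE,
    # illustrative tuning values - confirm these parameter names in your version
    bulk_insert_batch_concurrency=10,
    bulk_insert_overwrite_concurrency=4,
)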
Run searches
This section demonstrates metadata filtering and getting the similarity scores back:
results = vstore.similarity_search("Our life is what we make of it", k=3)
for res in results:
    print(f"* {res.page_content} [{res.metadata}]")
results_filtered = vstore.similarity_search(
    "Our life is what we make of it",
    k=3,
    filter={"author": "plato"},
)
for res in results_filtered:
    print(f"* {res.page_content} [{res.metadata}]")
results = vstore.similarity_search_with_score("Our life is what we make of it", k=3)
for res, score in results:
    print(f"* [SIM={score:.3f}] {res.page_content} [{res.metadata}]")
MMR (maximal marginal relevance) search
results = vstore.max_marginal_relevance_search(
    "Our life is what we make of it",
    k=3,
    filter={"author": "aristotle"},
)
for res in results:
    print(f"* {res.page_content} [{res.metadata}]")
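MMR trades off relevance against diversity among the returned documents. A sketch of tuning that trade-off, assuming the standard LangChain fetch_k and lambda_mult parameters (check your version's docstring):
results_diverse = vstore.max_marginal_relevance_search(
    "Our life is what we make of it",
    k=3,
    fetch_k=20,  # size of the candidate pool fetched before MMR re-ranking
    lambda_mult=0.5,  # 1.0 favors pure relevance, 0.0 favors maximum diversity
)
for res in results_diverse:
    print(f"* {res.page_content} [{res.metadata}]")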
Async
Note that the Astra DB vector store supports all of the async methods (asimilarity_search, afrom_texts, adelete and so on) natively, i.e. without thread wrapping involved.
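For example, a minimal sketch of a native async search (run from a script; in a notebook you would await the coroutine directly):
import asyncio

async def run_async_search():
    # natively async: no thread-executor wrapping happens under the hood
    results = await vstore.asimilarity_search("Our life is what we make of it", k=3)
    for res in results:
        print(f"* {res.page_content} [{res.metadata}]")

asyncio.run(run_async_search())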
Deleting stored documents
delete_1 = vstore.delete(inserted_ids[:3])
print(f"all_succeed={delete_1}") # True, all documents deleted
delete_2 = vstore.delete(inserted_ids[2:5])
print(f"some_succeeds={delete_2}") # True, though some IDs were gone already
A minimal RAG chain
The next cells will implement a simple RAG pipeline:
- download a sample PDF file and load it into the store;
- create a RAG chain with LCEL (LangChain Expression Language), with the vector store at its heart;
- run the question-answering chain.
!curl -L \
"https://github.com/awesome-astra/datasets/blob/main/demo-resources/what-is-philosophy/what-is-philosophy.pdf?raw=true" \
-o "what-is-philosophy.pdf"
pdf_loader = PyPDFLoader("what-is-philosophy.pdf")
splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=64)
docs_from_pdf = pdf_loader.load_and_split(text_splitter=splitter)
print(f"Documents from PDF: {len(docs_from_pdf)}.")
inserted_ids_from_pdf = vstore.add_documents(docs_from_pdf)
print(f"Inserted {len(inserted_ids_from_pdf)} documents.")
retriever = vstore.as_retriever(search_kwargs={"k": 3})
philo_template = """
You are a philosopher that draws inspiration from great thinkers of the past
to craft well-thought answers to user questions. Use the provided context as the basis
for your answers and do not make up new reasoning paths - just mix-and-match what you are given.
Your answers must be concise and to the point, and refrain from answering about other topics than philosophy.
CONTEXT:
{context}
QUESTION: {question}
YOUR ANSWER:"""
philo_prompt = ChatPromptTemplate.from_template(philo_template)
llm = ChatOpenAI()
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | philo_prompt
    | llm
    | StrOutputParser()
)
chain.invoke("How does Russell elaborate on Peirce's idea of the security blanket?")
For more, check out a complete RAG template using Astra DB here.
Cleanup
If you want to completely delete the collection from your Astra DB instance, run this.
(You will lose the data you stored in it.)
vstore.delete_collection()