Build RAG applications with your favorite AI framework
LangChain & LlamaIndex VectorStore interfaces with FraiseQL's PostgreSQL GraphQL API
Full VectorStore interface. Drop-in replacement for other vector stores. RetrievalQA chains, semantic search, document loaders.
FraiseQLVectorStore class
similarity_search() with metadata
add_documents() ingestion
as_retriever() for chains

pip install fraiseql[langchain]
Full VectorStoreIndex support. Query engines, response synthesizers, node parsers. Enterprise RAG patterns.
FraiseQLVectorStore class
VectorStoreIndex integration
as_query_engine() support

pip install fraiseql[llamaindex]
from langchain_community.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from fraiseql.integrations.langchain import FraiseQLVectorStore
# Load and split documents into overlapping chunks
loader = TextLoader("documents/my_file.txt")
docs = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).split_documents(loader.load())

# Initialize the FraiseQL-backed vector store
vectorstore = FraiseQLVectorStore(
    graphql_url="http://localhost:8000/graphql",
    embeddings=OpenAIEmbeddings(),
    collection_name="my_documents"
)

# Ingest documents (embeds and stores them in PostgreSQL)
vectorstore.add_documents(docs)

# Create RAG chain
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
    return_source_documents=True
)

# Ask questions
result = qa_chain.invoke({"query": "How do I implement authentication?"})
print(result["result"])
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from fraiseql.integrations.llamaindex import FraiseQLVectorStore
# Load documents and split them into nodes
documents = SimpleDirectoryReader("data").load_data()
nodes = SentenceSplitter(chunk_size=1024).get_nodes_from_documents(documents)

# Initialize the FraiseQL-backed vector store
embed_model = OpenAIEmbedding()
vector_store = FraiseQLVectorStore(
    graphql_url="http://localhost:8000/graphql",
    embed_model=embed_model,
    collection_name="llamaindex_docs"
)

# Create index (the storage context routes nodes into the FraiseQL store)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex(
    nodes=nodes,
    storage_context=storage_context,
    embed_model=embed_model
)

# Create query engine
query_engine = index.as_query_engine(
    llm=OpenAI(model="gpt-4", temperature=0),
    similarity_top_k=3
)

# Ask questions
response = query_engine.query("How do I optimize database performance?")
print(response)
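Because the embedded nodes persist in PostgreSQL, a separate process can attach to the same collection later without re-ingesting anything. A minimal sketch using LlamaIndex's VectorStoreIndex.from_vector_store, assuming the collection was populated as above:

# Rebuild an index over an existing, already-populated collection
vector_store = FraiseQLVectorStore(
    graphql_url="http://localhost:8000/graphql",
    embed_model=OpenAIEmbedding(),
    collection_name="llamaindex_docs"
)
index = VectorStoreIndex.from_vector_store(
    vector_store, embed_model=OpenAIEmbedding()
)
query_engine = index.as_query_engine(similarity_top_k=3)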
# Search with scores and a metadata filter
results = vectorstore.similarity_search_with_score(
    "How do I authenticate users?",
    k=10,
    filter={
        "category": "authentication",
        "version": "v2"
    }
)

for doc, score in results:
    print(f"Score: {score:.3f}")
    print(f"Content: {doc.page_content[:100]}...")
from llama_index.core.vector_stores import (
    MetadataFilters, ExactMatchFilter
)

query_engine = index.as_query_engine(
    similarity_top_k=10,
    filters=MetadataFilters(filters=[
        ExactMatchFilter(key="category", value="security"),
        ExactMatchFilter(key="version", value="v2.0")
    ])
)
response = query_engine.query("Security best practices?")
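ExactMatchFilter handles equality only; LlamaIndex also ships MetadataFilter with a FilterOperator enum for comparisons. Whether FraiseQL translates every operator into SQL is worth verifying; a sketch against a hypothetical numeric year field:

from llama_index.core.vector_stores import (
    FilterOperator, MetadataFilter, MetadataFilters
)

# Keep only nodes whose "year" metadata is 2023 or later (hypothetical field)
query_engine = index.as_query_engine(
    similarity_top_k=10,
    filters=MetadataFilters(filters=[
        MetadataFilter(key="year", value=2023, operator=FilterOperator.GTE)
    ])
)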
Full VectorStore interface for both frameworks. Switch from Pinecone or Weaviate by changing a single line.
Combine semantic search with SQL metadata filtering. PostgreSQL's full power in every query.
pgvector HNSW indexing + Rust-fast GraphQL. No external vector database needed.
No Pinecone/Weaviate bills. Vectors live in the same PostgreSQL as your app data.
Your vectors never leave your infrastructure. Full control over sensitive embeddings.
Query vectors via GraphQL. Same API for your app and AI pipelines.
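That shared GraphQL endpoint means the collection is reachable without either framework. The operation and field names below are hypothetical stand-ins for whatever schema FraiseQL generates, not its documented API:

import requests

# Hypothetical operation: real field names depend on FraiseQL's generated schema
query = """
query Search($text: String!, $k: Int!) {
  documentSearch(text: $text, limit: $k) {
    content
    score
  }
}
"""
resp = requests.post(
    "http://localhost:8000/graphql",
    json={"query": query, "variables": {"text": "user authentication", "k": 3}},
)
print(resp.json())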
OpenAI: text-embedding-3, GPT-4
Hugging Face: sentence transformers, local models
Cohere / Anthropic: Embed API, Claude
Local: Ollama, private deployments
LangChain or LlamaIndex + FraiseQL = AI applications with PostgreSQL reliability
Start Building RAG