🦜🦙 AI Framework Integration

Build RAG applications with your favorite AI framework

LangChain and LlamaIndex VectorStore interfaces backed by FraiseQL's PostgreSQL GraphQL API

Choose Your Framework

🦜

LangChain

Full VectorStore interface. Drop-in replacement for other vector stores. RetrievalQA chains, semantic search, document loaders.

  • FraiseQLVectorStore class
  • similarity_search() with metadata
  • add_documents() ingestion
  • as_retriever() for chains
pip install fraiseql[langchain]
🦙

LlamaIndex

Full VectorStoreIndex support. Query engines, response synthesizers, node parsers. Enterprise RAG patterns.

  • FraiseQLVectorStore class
  • VectorStoreIndex integration
  • as_query_engine() support
  • MetadataFilters for hybrid search
pip install fraiseql[llamaindex]

🦜 LangChain RAG Pipeline

Document Ingestion + RAG with LangChain
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from fraiseql.integrations.langchain import FraiseQLVectorStore

# Load and split documents
loader = TextLoader("documents/my_file.txt")
docs = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).split_documents(loader.load())

# Initialize vector store
vectorstore = FraiseQLVectorStore(
    graphql_url="http://localhost:8000/graphql",
    embeddings=OpenAIEmbeddings(),
    collection_name="my_documents"
)

# Ingest documents
vectorstore.add_documents(docs)

# Create RAG chain
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
    return_source_documents=True
)

# Ask questions
result = qa_chain.invoke({"query": "How do I implement authentication?"})
print(result["result"])
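Because the chain was created with return_source_documents=True, the retrieved chunks come back alongside the answer and can be surfaced as citations:

# Inspect the source documents behind the answer
for doc in result["source_documents"]:
    print(doc.metadata.get("source"), "-", doc.page_content[:100])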

🦙 LlamaIndex RAG Pipeline

Document Indexing + Query Engine with LlamaIndex
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from fraiseql.integrations.llamaindex import FraiseQLVectorStore

# Load documents
documents = SimpleDirectoryReader("data").load_data()
nodes = SentenceSplitter(chunk_size=1024).get_nodes_from_documents(documents)

# Initialize vector store
vector_store = FraiseQLVectorStore(
    graphql_url="http://localhost:8000/graphql",
    embed_model=OpenAIEmbedding(),
    collection_name="llamaindex_docs"
)

# Create index backed by the FraiseQL vector store
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex(
    nodes=nodes,
    storage_context=storage_context,
    embed_model=OpenAIEmbedding()
)

# Create query engine
query_engine = index.as_query_engine(
    llm=OpenAI(model="gpt-4", temperature=0),
    similarity_top_k=3
)

# Ask questions
response = query_engine.query("How do I optimize database performance?")
print(response)
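The response also carries the retrieved nodes and their similarity scores, which helps when tuning retrieval:

# Inspect retrieved nodes and scores
for node_with_score in response.source_nodes:
    print(node_with_score.score, node_with_score.node.get_content()[:100])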

Hybrid Search with Metadata Filtering

LangChain Hybrid Search
# Search with score and metadata filter
results = vectorstore.similarity_search_with_score(
    "How do I authenticate users?",
    k=10,
    filter={
        "category": "authentication",
        "version": "v2"
    }
)

for doc, score in results:
    print(f"Score: {score:.3f}")
    print(f"Content: {doc.page_content[:100]}...")
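The same filter can be passed through as_retriever, so filtered hybrid search plugs straight into RetrievalQA and other chains. A sketch, assuming the store forwards filter from search_kwargs (the usual LangChain convention):

# Filtered retriever for use inside chains
retriever = vectorstore.as_retriever(
    search_kwargs={"k": 5, "filter": {"category": "authentication"}}
)
docs = retriever.invoke("How do I authenticate users?")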

LlamaIndex Hybrid Search
from llama_index.core.vector_stores import (
    MetadataFilters, ExactMatchFilter
)

query_engine = index.as_query_engine(
    similarity_top_k=10,
    filters=MetadataFilters(filters=[
        ExactMatchFilter(key="category", value="security"),
        ExactMatchFilter(key="version", value="v2.0")
    ])
)

response = query_engine.query("Security best practices?")
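Beyond exact matches, LlamaIndex also defines operator-based filters. A sketch, assuming the FraiseQL store translates these operators into SQL comparisons:

from llama_index.core.vector_stores import (
    FilterOperator, MetadataFilter, MetadataFilters
)

query_engine = index.as_query_engine(
    similarity_top_k=5,
    filters=MetadataFilters(filters=[
        MetadataFilter(key="year", operator=FilterOperator.GTE, value=2023)
    ])
)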

Why FraiseQL for RAG?

🔄

Drop-in Replacement

Full VectorStore interface for both frameworks. Switch from Pinecone/Weaviate with one line.
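As an illustration of the swap, only the vector store construction changes; the rest of the pipeline stays as-is (variable names here are hypothetical):

# Before: Pinecone
# vectorstore = PineconeVectorStore(index_name="docs", embedding=embeddings)

# After: FraiseQL
vectorstore = FraiseQLVectorStore(
    graphql_url="http://localhost:8000/graphql",
    embeddings=embeddings,
    collection_name="docs"
)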

🎯

True Hybrid Search

Combine semantic search with SQL metadata filtering. PostgreSQL's full power in every query.

⚡

PostgreSQL Performance

pgvector HNSW indexing + Rust-fast GraphQL. No external vector database needed.

💰

Cost Efficient

No Pinecone/Weaviate bills. Vectors live in the same PostgreSQL as your app data.

🔐

Data Sovereignty

Your vectors never leave your infrastructure. Full control over sensitive embeddings.

🛠️

GraphQL API

Query vectors via GraphQL. Same API for your app and AI pipelines.
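Because the store is an ordinary FraiseQL GraphQL endpoint, the same vectors are reachable without either framework. A minimal sketch with httpx; the query shape and field names are hypothetical and depend on the schema your FraiseQL app exposes:

import httpx
from langchain_openai import OpenAIEmbeddings

query_embedding = OpenAIEmbeddings().embed_query("How do I authenticate users?")

# Hypothetical query shape - adapt to your actual FraiseQL schema
gql = """
query ($vector: [Float!]!, $limit: Int!) {
  documents(orderBy: {embedding: {nearestTo: $vector}}, limit: $limit) {
    id
    content
    metadata
  }
}
"""
response = httpx.post(
    "http://localhost:8000/graphql",
    json={"query": gql, "variables": {"vector": query_embedding, "limit": 3}},
)
print(response.json()["data"]["documents"])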

Model Agnostic Architecture

🟣

OpenAI

text-embedding-3, GPT-4

🤗

Hugging Face

Sentence transformers, local models

🤖

Cohere / Anthropic

Embed API, Claude

🏠

Local Models

Ollama, private deployments
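Swapping providers only changes the embeddings object. For example, with LangChain's OllamaEmbeddings against a local Ollama server (the model name is an example and must be pulled first):

from langchain_community.embeddings import OllamaEmbeddings
from fraiseql.integrations.langchain import FraiseQLVectorStore

# Local embeddings - vectors never leave your infrastructure
vectorstore = FraiseQLVectorStore(
    graphql_url="http://localhost:8000/graphql",
    embeddings=OllamaEmbeddings(model="nomic-embed-text"),
    collection_name="my_documents"
)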

Build Production RAG Today

LangChain or LlamaIndex + FraiseQL = AI applications with PostgreSQL reliability

Start Building RAG