
Multi-Database Federation Configuration

Multi-database federation enables FraiseQL to query, join, and aggregate data across PostgreSQL, MySQL, SQLite, and SQL Server instances simultaneously. This guide covers configuration, schema mapping, performance tuning, and production best practices.

Federation architecture:

┌──────────────────────────────────────────┐
│        FraiseQL GraphQL API Layer        │
│       (Query Planning & Execution)       │
└─────────┬─────────────────┬──────────────┘
          │                 │
    ┌─────▼────┐      ┌─────▼─────┐
    │ Users DB │      │ Orders DB │
    │PostgreSQL│      │   MySQL   │
    └──────────┘      └───────────┘
         ▲                  ▲
         │ Direct SQL       │ Direct SQL
         │ Execution        │ Execution

Why federate?

  • Microservices: Independently deployed databases
  • Legacy systems: Integrate without refactoring
  • Vendor lock-in: Use best database for each use case
  • Multi-region: Query across geographic regions
  • Performance: Parallelize queries across databases

Create fraiseql.toml with multiple database definitions:

fraiseql.toml
# Primary database (default for all queries)
[databases.default]
driver = "postgresql"
host = "localhost"
port = 5432
database = "fraiseql"
username = "${DB_USERNAME}"
password = "${DB_PASSWORD}"
pool_size = 10
# Users database (separate PostgreSQL instance)
[databases.users_db]
driver = "postgresql"
host = "users.internal.example.com"
port = 5432
database = "users"
username = "${USERS_DB_USERNAME}"
password = "${USERS_DB_PASSWORD}"
pool_size = 10
ssl_mode = "require"
# Orders database (MySQL)
[databases.orders_db]
driver = "mysql"
host = "orders.internal.example.com"
port = 3306
database = "orders"
username = "${ORDERS_DB_USERNAME}"
password = "${ORDERS_DB_PASSWORD}"
pool_size = 20
charset = "utf8mb4"
# Analytics database (SQL Server)
[databases.analytics_db]
driver = "sqlserver"
host = "analytics.internal.example.com"
port = 1433
database = "analytics"
username = "${ANALYTICS_DB_USERNAME}"
password = "${ANALYTICS_DB_PASSWORD}"
pool_size = 5
# Legacy SQLite database (file-based)
[databases.legacy_db]
driver = "sqlite"
path = "/data/legacy.db"
pool_size = 2
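The `${DB_USERNAME}`-style placeholders above are resolved from the environment at startup. A minimal sketch of how that interpolation can work, assuming a hypothetical `expand_env` helper (not part of FraiseQL's public API):

```python
# Expand "${NAME}" placeholders from the environment; expand_env is an
# illustrative helper, not FraiseQL's actual loader.
import os
import re

_PLACEHOLDER = re.compile(r"\$\{([A-Z0-9_]+)\}")

def expand_env(value: str) -> str:
    """Replace each ${NAME} with os.environ[NAME]; fail loudly if unset."""
    def _sub(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name} is not set")
        return os.environ[name]
    return _PLACEHOLDER.sub(_sub, value)

os.environ["DB_USERNAME"] = "fraiseql_user"
print(expand_env("${DB_USERNAME}"))  # fraiseql_user
```

Failing on unset variables (rather than silently substituting an empty string) surfaces misconfigured deployments immediately.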

Store secrets securely:

.env
DB_USERNAME=fraiseql_user
DB_PASSWORD=secure_password_here
USERS_DB_USERNAME=users_service
USERS_DB_PASSWORD=users_password_here
ORDERS_DB_USERNAME=orders_service
ORDERS_DB_PASSWORD=orders_password_here
ANALYTICS_DB_USERNAME=analytics_service
ANALYTICS_DB_PASSWORD=analytics_password_here
Configure federation behavior in fraiseql.toml:

[federation]
# Enable multi-database queries
enabled = true
# Timeout for entire federation query (not per-database)
timeout = "30s"
# Maximum parallel queries to different databases
max_concurrent_queries = 100
# Join strategy (see below)
join_strategy = "parallel"
# Query planning cache
planning_cache_enabled = true
planning_cache_size = 1000

Prevent cascading failures:

[federation.circuit_breaker]
enabled = true
# Open the circuit after this many consecutive failures
failure_threshold = 5
# Seconds to stay open before probing recovery
recovery_timeout_secs = 60
# Successful probes required to close the circuit
success_threshold = 2
# Per-database override (array of tables)
[[federation.circuit_breaker.per_database]]
database = "orders_db"
failure_threshold = 10
recovery_timeout_secs = 120
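The settings above describe a standard circuit-breaker state machine: CLOSED opens after `failure_threshold` consecutive failures, OPEN becomes HALF_OPEN after `recovery_timeout_secs`, and HALF_OPEN closes after `success_threshold` successful probes. A minimal sketch of that machine (illustrative only, not FraiseQL's internal implementation):

```python
# Illustrative circuit breaker matching the TOML settings above.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=5, recovery_timeout_secs=60,
                 success_threshold=2):
        self.failure_threshold = failure_threshold
        self.recovery_timeout_secs = recovery_timeout_secs
        self.success_threshold = success_threshold
        self.state = "CLOSED"
        self.failures = 0
        self.successes = 0
        self.opened_at = 0.0

    def allow_request(self) -> bool:
        if self.state == "OPEN":
            if time.monotonic() - self.opened_at >= self.recovery_timeout_secs:
                self.state = "HALF_OPEN"   # start probing the database again
                self.successes = 0
                return True
            return False                   # still open: fail fast
        return True

    def record_failure(self):
        self.failures += 1
        if self.state == "HALF_OPEN" or self.failures >= self.failure_threshold:
            self.state = "OPEN"            # trip the breaker
            self.opened_at = time.monotonic()
            self.failures = 0

    def record_success(self):
        if self.state == "HALF_OPEN":
            self.successes += 1
            if self.successes >= self.success_threshold:
                self.state = "CLOSED"      # backend has recovered
        self.failures = 0
```

Failing fast while OPEN is what prevents a slow database from tying up connections and cascading into the rest of the federation.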

Handle transient failures:

[federation.retry]
# Maximum attempts per query
max_attempts = 3
# Backoff strategy: linear, exponential, or fibonacci
backoff = "exponential"
# Initial delay before first retry
initial_delay = "100ms"
# Maximum delay between retries
max_delay = "5s"
# Don't retry on these HTTP status codes
dont_retry_on = [400, 401, 403, 404]
# Per-database override
[federation.retry.per_database.analytics_db]
max_attempts = 5
backoff = "linear"
initial_delay = "50ms"
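The delay sequence each backoff strategy produces can be sketched as follows; `backoff_delay` is an illustrative helper, not part of FraiseQL's API:

```python
# Delay in milliseconds before retry number `attempt` (1-based), capped at
# max_ms, for the three strategies named in the config above.
def backoff_delay(attempt: int, strategy="exponential",
                  initial_ms=100, max_ms=5000) -> int:
    if strategy == "linear":
        delay = initial_ms * attempt
    elif strategy == "exponential":
        delay = initial_ms * (2 ** (attempt - 1))
    elif strategy == "fibonacci":
        a, b = 1, 1
        for _ in range(attempt - 1):
            a, b = b, a + b
        delay = initial_ms * a
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return min(delay, max_ms)

# exponential, initial_delay 100ms: retries wait 100, 200, 400 ms
print([backoff_delay(n) for n in (1, 2, 3)])  # [100, 200, 400]
```

With `max_attempts = 3` and the defaults above, a failing query gives up after roughly 700 ms of cumulative backoff; the `max_delay` cap keeps later retries bounded.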
Assign each GraphQL type to a database with the @federation.database decorator:

from datetime import datetime

from fraiseql import ID, type, federation

@type
@federation.database("users_db")
class User:
    id: ID
    email: str
    name: str
    created_at: datetime

@type
@federation.database("orders_db")
class Order:
    id: ID
    user_id: ID  # Foreign key to users_db
    total: float
    created_at: datetime

@type
@federation.database("analytics_db")
class UserAnalytics:
    user_id: ID
    total_orders: int
    lifetime_value: float

Link entities across databases:

from fraiseql import type, federation, relationship

@type
@federation.database("users_db")
class User:
    id: ID
    email: str
    # Join with orders_db.orders
    orders: list["Order"] = relationship(
        foreign_key="orders.user_id",
        database="orders_db",
    )
    analytics: "UserAnalytics" = relationship(
        foreign_key="user_analytics.user_id",
        database="analytics_db",
    )

@type
@federation.database("orders_db")
class Order:
    id: ID
    user_id: ID
    total: float
    # Back-reference to users_db
    user: "User" = relationship(
        foreign_key="user_id",
        database="users_db",
    )

When joining across databases, types must be compatible:

PostgreSQL UUID <-> MySQL VARCHAR(36) <-> SQL Server UNIQUEIDENTIFIER
PostgreSQL BIGINT <-> MySQL BIGINT <-> SQL Server BIGINT
PostgreSQL JSONB <-> MySQL JSON <-> SQL Server NVARCHAR(MAX)
PostgreSQL TIMESTAMP <-> MySQL DATETIME <-> SQL Server DATETIME2

FraiseQL handles type conversion automatically.
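The UUID row of the table illustrates why these joins work: all three representations parse to the same value. A small demonstration using Python's standard uuid module (the actual conversion layer is internal to FraiseQL; this only shows the principle):

```python
# The same key in each engine's native textual form parses to one
# comparable value, so cross-database equality joins are well-defined.
import uuid

pg_value = "550e8400-e29b-41d4-a716-446655440000"     # PostgreSQL UUID
mysql_value = "550e8400-e29b-41d4-a716-446655440000"  # MySQL VARCHAR(36)
mssql_value = "550E8400-E29B-41D4-A716-446655440000"  # SQL Server, uppercase

assert uuid.UUID(pg_value) == uuid.UUID(mysql_value) == uuid.UUID(mssql_value)
```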

Size each connection pool to its database's workload:

# High-traffic database: larger pool
[databases.orders_db]
driver = "mysql"
pool_size = 50 # Maximum connections
min_idle = 10 # Keep connections warm
max_lifetime = "30m" # Recycle connections
connection_timeout = "5s"
# Occasional analytics queries: small pool
[databases.analytics_db]
driver = "sqlserver"
pool_size = 5
min_idle = 0 # Don't keep idle connections
max_lifetime = "1h"
connection_timeout = "10s"
# Legacy system: minimal pool
[databases.legacy_db]
driver = "sqlite"
pool_size = 2

Monitor connection pool health:

from fraiseql.federation import get_federation_stats
stats = get_federation_stats()
print(f"Orders DB pool: {stats['orders_db'].active_connections}/50")
print(f"Analytics pool: {stats['analytics_db'].active_connections}/5")

Query all databases simultaneously:

[federation]
join_strategy = "parallel"

Example query:

query {
  user(id: "123") {
    id
    email
    orders {      # Queries orders_db in parallel
      id
      total
    }
    analytics {   # Queries analytics_db in parallel
      total_orders
      lifetime_value
    }
  }
}

Execution timeline:

Time: 0ms Start
├─ PostgreSQL query: SELECT user
├─ MySQL query: SELECT orders WHERE user_id=123
└─ SQL Server query: SELECT analytics WHERE user_id=123
300ms ✓ All results returned
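The parallel strategy amounts to fanning out the independent per-database queries concurrently and awaiting them together, so total latency tracks the slowest backend rather than the sum. A sketch using asyncio, where the `fetch_*` coroutines stand in for real database calls:

```python
# Parallel fan-out across backends; the sleeps model per-database latency.
import asyncio

async def fetch_user(user_id):        # stands in for the PostgreSQL query
    await asyncio.sleep(0.1)
    return {"id": user_id, "email": "a@example.com"}

async def fetch_orders(user_id):      # stands in for the MySQL query
    await asyncio.sleep(0.2)
    return [{"id": "o1", "total": 100.0}]

async def fetch_analytics(user_id):   # stands in for the SQL Server query
    await asyncio.sleep(0.3)
    return {"total_orders": 1, "lifetime_value": 100.0}

async def resolve_user(user_id):
    # All three run concurrently: wall time ≈ 300ms, not 600ms
    user, orders, analytics = await asyncio.gather(
        fetch_user(user_id), fetch_orders(user_id), fetch_analytics(user_id)
    )
    return {**user, "orders": orders, "analytics": analytics}

result = asyncio.run(resolve_user("123"))
print(result["analytics"]["total_orders"])  # 1
```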

Query databases in order:

[federation]
join_strategy = "sequential"

Execution timeline:

Time: 0ms Start PostgreSQL query
100ms Get user, use ID for next query
100ms Start MySQL query (WHERE user_id=123)
200ms Get orders, use order_id for next query
200ms Start SQL Server query
400ms All results returned

Query based on previous results:

from fraiseql import type, field, federation

@type
@federation.database("users_db")
class User:
    id: ID
    email: str

    @field
    async def orders(self) -> list["Order"]:
        # Only fetch orders if the user is premium
        if not await self.is_premium():
            return []
        # Query orders_db
        return await Order.query(user_id=self.id)

    async def is_premium(self) -> bool:
        # Check analytics_db
        analytics = await UserAnalytics.query(user_id=self.id)
        return analytics.lifetime_value > 10000

FraiseQL optimizes cross-database queries:

# Good: Parallel queries
query {
  users {          # PostgreSQL
    id
    name
    orders {       # MySQL - parallel fetch
      total
    }
  }
}

# Avoid: Sequential nested queries
query {
  users {          # PostgreSQL
    orders {       # MySQL - wait for each user ID
      items {      # SQLite - wait for each order ID
        details
      }
    }
  }
}

Enable query caching:

[federation]
planning_cache_enabled = true
planning_cache_size = 1000
[federation.query_caching]
enabled = true
ttl = "1h"
max_size = "100MB"
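What `planning_cache_size` bounds can be sketched as a small LRU cache keyed by query shape: the first occurrence pays the planning cost, repeats hit the cache, and the least-recently-used plan is evicted at capacity. Illustrative only, not FraiseQL's planner:

```python
# Bounded LRU cache for query plans; plan_fn is only invoked on a miss.
from collections import OrderedDict

class PlanCache:
    def __init__(self, max_size=1000):
        self.max_size = max_size
        self._plans = OrderedDict()

    def get_plan(self, query: str, plan_fn):
        if query in self._plans:
            self._plans.move_to_end(query)      # mark as recently used
            return self._plans[query]
        plan = plan_fn(query)                   # cache miss: plan the query
        self._plans[query] = plan
        if len(self._plans) > self.max_size:
            self._plans.popitem(last=False)     # evict least recently used
        return plan
```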

Reduce round-trips with batch operations:

# Instead of N queries
for user_id in user_ids:
    orders = await Order.query(user_id=user_id)

# Use the batch API
orders_by_user = await Order.batch_query(
    user_ids=user_ids,
    index_by="user_id",
)
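The effect of `index_by="user_id"` can be sketched in plain Python: one `IN (...)` query replaces N single-user queries, and the rows are grouped client-side. `group_rows` is a hypothetical helper mirroring that behavior:

```python
# Group flat result rows by a key column, as batch_query's index_by does.
from collections import defaultdict

def group_rows(rows, index_by):
    grouped = defaultdict(list)
    for row in rows:
        grouped[row[index_by]].append(row)
    return dict(grouped)

# Rows as a single `SELECT ... WHERE user_id IN (1, 2)` might return them:
rows = [
    {"id": "o1", "user_id": 1, "total": 100.0},
    {"id": "o2", "user_id": 1, "total": 50.0},
    {"id": "o3", "user_id": 2, "total": 200.0},
]
orders_by_user = group_rows(rows, index_by="user_id")
print(len(orders_by_user[1]))  # 2
```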
Denormalize frequently joined fields instead of always crossing databases:

# Avoid: always join across databases (slow)
@type
class User:
    order_count: int  # Requires a query to orders_db

# Better: cache in the local database (fast)
@type
@federation.database("users_db")
class User:
    order_count: int  # Cached, updated by observer

@observer(entity="Order", event="INSERT")
def on_order_created():
    # Update order count in users_db
    pass
Use a secrets manager for production credentials:

[databases.users_db]
username = "${VAULT:database/creds/users/username}"
password = "${VAULT:database/creds/users/password}"
[databases.orders_db]
username = "${VAULT:database/creds/orders/username}"
password = "${VAULT:database/creds/orders/password}"
Encrypt database connections in transit:

# PostgreSQL with SSL/TLS
[databases.users_db]
driver = "postgresql"
ssl_mode = "require"
ssl_root_cert = "/etc/ssl/certs/ca.crt"
# MySQL with SSL
[databases.orders_db]
driver = "mysql"
ssl_mode = "REQUIRED"
ssl_ca = "/etc/ssl/certs/ca.crt"
ssl_cert = "/etc/ssl/certs/client.crt"
ssl_key = "/etc/ssl/private/client.key"
# SQL Server with encryption
[databases.analytics_db]
driver = "sqlserver"
encrypt = "true"
trust_server_certificate = false
Enable Prometheus metrics:

[observability.prometheus]
enabled = true
port = 9090
# Per-database metrics
[observability.prometheus.federation]
track_per_database = true
track_query_plans = true

Exposed metrics:

fraiseql_federation_query_duration_seconds{database="orders_db"}
fraiseql_federation_query_errors_total{database="orders_db"}
fraiseql_federation_connection_pool_size{database="orders_db"}
fraiseql_federation_connection_pool_active{database="orders_db"}
fraiseql_federation_circuit_breaker_state{database="orders_db"}
Enable distributed tracing:

[observability.tracing]
enabled = true
exporter = "jaeger" # or "datadog", "honeycomb"
service_name = "fraiseql-api"
[observability.tracing.federation]
trace_per_database = true
trace_joins = true
# Leverage PostgreSQL LISTEN/NOTIFY for real-time updates
@type
@federation.database("users_db")
class User:
    @subscription
    async def on_user_created(self) -> "User":
        async for user in User.listen_to("users:created"):
            yield user

# Bulk operations
orders = await Order.batch_insert([
    {"user_id": 1, "total": 100},
    {"user_id": 2, "total": 200},
    # ... 1000s of orders
], batch_size=1000)
Problem: "users_db connection timeout"

Solution 1: Increase the federation timeout

[federation]
timeout = "60s"

Solution 2: Check credentials

[databases.users_db]
username = "${USERS_DB_USERNAME}"
password = "${USERS_DB_PASSWORD}"

Solution 3: Verify network connectivity to the database host

Problem: Slow cross-database joins

Solution 1: Use the parallel join strategy

[federation]
join_strategy = "parallel"

Solution 2: Add database indexes on join columns

CREATE INDEX idx_orders_user_id ON orders(user_id);

Solution 3: Use batch operations instead of N+1 queries