
Performance

This guide covers performance optimization for FraiseQL, from database tuning to caching strategies.

Configure connection pools based on workload:

[database]
pool_min = 10 # Minimum connections
pool_max = 100 # Maximum connections

Sizing formula:

pool_size = (core_count * 2) + effective_spindle_count

For most deployments:

  • Low traffic: pool_max = 20
  • Medium traffic: pool_max = 50
  • High traffic: pool_max = 100+ (with PgBouncer)
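The sizing formula above can be checked with a quick calculation; `recommended_pool_size` is an illustrative helper, not a FraiseQL API:

```python
def recommended_pool_size(core_count: int, effective_spindle_count: int) -> int:
    """Classic PostgreSQL pool-sizing heuristic: (cores * 2) + spindles."""
    return (core_count * 2) + effective_spindle_count

# An 8-core host with a single SSD (effective spindle count ~1):
print(recommended_pool_size(8, 1))  # 17 -- round up toward pool_max = 20
```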

FraiseQL routes GraphQL queries (reads) to the replica and mutations to the primary. Replica configuration is handled at the infrastructure level — use a DATABASE_REPLICA_URL environment variable or a connection pooler such as PgBouncer. There is no [database.replica] TOML section.

# Primary for mutations
DATABASE_URL=postgresql://primary-host/mydb
# Replica for reads (configure in your load balancer or pooler)
DATABASE_REPLICA_URL=postgresql://replica-host/mydb
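The routing rule (reads to the replica, mutations to the primary) can be sketched as a small dispatcher that picks a connection URL per operation; the function here is illustrative, not part of FraiseQL's API:

```python
import os

def select_database_url(operation_type: str) -> str:
    """Route mutations to the primary and everything else (reads) to the
    replica, falling back to the primary when no replica is configured."""
    primary = os.environ["DATABASE_URL"]
    replica = os.environ.get("DATABASE_REPLICA_URL", primary)
    return primary if operation_type == "mutation" else replica

os.environ["DATABASE_URL"] = "postgresql://primary-host/mydb"
os.environ["DATABASE_REPLICA_URL"] = "postgresql://replica-host/mydb"
print(select_database_url("query"))     # postgresql://replica-host/mydb
print(select_database_url("mutation"))  # postgresql://primary-host/mydb
```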

Key PostgreSQL settings for FraiseQL:

# Memory
shared_buffers = '4GB'               # 25% of RAM
effective_cache_size = '12GB'        # 75% of RAM
work_mem = '256MB'                   # per-operation memory
maintenance_work_mem = '1GB'         # for VACUUM and CREATE INDEX

# Connections
max_connections = 200
superuser_reserved_connections = 3

# Write performance
wal_buffers = '64MB'
checkpoint_completion_target = 0.9
max_wal_size = '4GB'

# Query planner
random_page_cost = 1.1               # for SSDs
effective_io_concurrency = 200       # for SSDs
default_statistics_target = 200
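The 25%/75% memory ratios above can be derived mechanically from total RAM; this helper is a sketch for sanity-checking values, not a FraiseQL utility:

```python
def pg_memory_settings(total_ram_gb: int) -> dict[str, str]:
    """Apply the ratios above: shared_buffers = 25% of RAM,
    effective_cache_size = 75% of RAM."""
    return {
        "shared_buffers": f"{total_ram_gb // 4}GB",
        "effective_cache_size": f"{total_ram_gb * 3 // 4}GB",
    }

# A 16GB host reproduces the values shown above:
print(pg_memory_settings(16))  # {'shared_buffers': '4GB', 'effective_cache_size': '12GB'}
```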

Every table should have indexes on:

-- Trinity identifiers
CREATE UNIQUE INDEX idx_user_id ON tb_user(id);
CREATE UNIQUE INDEX idx_user_identifier ON tb_user(identifier);
-- Foreign keys (for JOINs)
CREATE INDEX idx_post_fk_user ON tb_post(fk_user);
CREATE INDEX idx_comment_fk_post ON tb_comment(fk_post);

Index JSONB fields used in filtering:

-- Full GIN index (supports all JSONB operations)
CREATE INDEX idx_tv_user_data ON tv_user USING GIN (data);
-- Specific field indexes (faster for specific lookups)
CREATE INDEX idx_tv_user_email ON tv_user ((data->>'email'));
CREATE INDEX idx_tv_user_active ON tv_user ((data->>'is_active'));
-- Composite expressions
CREATE INDEX idx_tv_post_published ON tv_post
((data->>'is_published'), (data->>'published_at') DESC);

Include data in index to avoid table lookups:

CREATE INDEX idx_tv_user_email_covering ON tv_user ((data->>'email'))
INCLUDE (data);

Index only relevant rows:

-- Only published posts
CREATE INDEX idx_tv_post_published_only ON tv_post ((data->>'published_at') DESC)
WHERE data->>'is_published' = 'true';
-- Only active users
CREATE INDEX idx_tv_user_active_only ON tv_user ((data->>'email'))
WHERE data->>'is_active' = 'true';

Keep JSONB payloads reasonably sized:

Size        Impact
< 1 KB      Optimal
1-10 KB     Good
10-100 KB   Acceptable
> 100 KB    Consider restructuring
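One way to audit payloads before they land in a projection table is to measure the serialized JSON against the thresholds above; `payload_size_verdict` is a hypothetical helper, not part of FraiseQL:

```python
import json

def payload_size_verdict(payload: dict) -> str:
    """Classify a JSONB payload by serialized size, per the table above."""
    size = len(json.dumps(payload).encode("utf-8"))
    if size < 1_024:          # < 1 KB
        return "optimal"
    if size < 10_240:         # 1-10 KB
        return "good"
    if size < 102_400:        # 10-100 KB
        return "acceptable"
    return "consider restructuring"

print(payload_size_verdict({"id": 1, "name": "Ada"}))  # optimal
```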

Balance between:

  • Full denormalization: fast reads, expensive writes
  • Minimal denormalization: balanced reads and writes

-- Good: include frequently accessed related data
'author', jsonb_build_object('id', u.id, 'name', u.name)
-- Avoid: embedding entire large objects
'author', vu.data -- if v_user.data is very large

Use efficient JSONB operations:

-- Good: Direct access
data->>'field'
-- Good: Path extraction
data #> '{nested,field}'
-- Avoid in WHERE: Complex transformations
jsonb_array_length(data->'items') -- Not indexable

Enable built-in caching:

[caching]
enabled = true
backend = "redis"
redis_url = "${REDIS_URL}"

FraiseQL automatically invalidates cache on mutations. When a mutation modifies an entity, all cached responses containing that entity type are invalidated.

Configure per-query rules:

[[caching.rules]]
query = "users_*"
ttl_seconds = 300
invalidation_triggers = ["User"]

[[caching.rules]]
query = "posts_*"
ttl_seconds = 60
invalidation_triggers = ["Post"]

Automatic persisted queries (APQ) reduce parsing and network overhead: clients send a hash of the query instead of the full query text:

{
  "extensions": {
    "persistedQuery": {
      "sha256Hash": "abc123..."
    }
  },
  "variables": {"id": "..."}
}
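The sha256Hash value is simply the SHA-256 hex digest of the query text; a client can compute it like this:

```python
import hashlib

def persisted_query_hash(query: str) -> str:
    """SHA-256 hex digest of the GraphQL query text, sent as sha256Hash."""
    return hashlib.sha256(query.encode("utf-8")).hexdigest()

query = "{ users(limit: 20) { id name } }"
print(persisted_query_hash(query))  # 64-character hex string
```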

Prevent deeply nested queries:

[validation]
max_query_depth = 8

# Allowed (depth 3)
query {
  posts {          # 1
    author {       # 2
      name         # 3
    }
  }
}

# Blocked (depth > 8)
query {
  posts {
    comments {
      author {
        posts {
          comments { ... }
        }
      }
    }
  }
}
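A depth check can be approximated by tracking the nesting of selection-set braces; a real server validates the parsed AST, but this sketch shows the idea:

```python
def query_depth(query: str) -> int:
    """Approximate query depth: a field at brace-nesting level n has depth n.
    A production validator walks the parsed AST instead of raw text."""
    depth = max_depth = 0
    for ch in query:
        if ch == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "}":
            depth -= 1
    return max_depth

print(query_depth("query { posts { author { name } } }"))  # 3 -- allowed at max_query_depth = 8
```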

Set complexity limits:

[validation]
max_query_complexity = 1000

Complexity is calculated as:

  • Each field: +1
  • Each list: +10 × limit
  • Each nested object: +5
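Applied to a toy AST, the rules work out like this; the node shape (`is_list`, `limit`, `selections`) is illustrative, not FraiseQL's internal representation:

```python
def complexity(field: dict) -> int:
    """Score per the rules above: +1 per field, +10 * limit per list,
    +5 per nested object (a field with its own selections)."""
    score = 1                                   # each field: +1
    if field.get("is_list"):
        score += 10 * field.get("limit", 1)     # each list: +10 * limit
    children = field.get("selections", [])
    if children:
        score += 5                              # each nested object: +5
        score += sum(complexity(child) for child in children)
    return score

# posts(limit: 20) { author { name } }
query = {"is_list": True, "limit": 20, "selections": [
    {"selections": [{"name": "name"}]},
]}
print(complexity(query))  # 213 -- well under max_query_complexity = 1000
```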

Always paginate list queries:

@fraiseql.query(
    sql_source="v_post",
    auto_params={"limit": True, "offset": True}
)
def posts(limit: int = 20, offset: int = 0) -> list[Post]:
    pass

[graphql.pagination]
default_limit = 20
max_limit = 100

For bulk operations, sync once after all changes:

-- Bad: Sync after each insert
INSERT INTO tb_user...
PERFORM sync_tv_user(); -- Slow!
-- Good: Batch and sync once
INSERT INTO tb_user...
INSERT INTO tb_user...
INSERT INTO tb_user...
PERFORM sync_tv_user(); -- Once

For large tables, sync single records:

CREATE OR REPLACE FUNCTION sync_tv_user_single(user_id UUID) RETURNS VOID AS $$
BEGIN
  DELETE FROM tv_user WHERE id = user_id;
  INSERT INTO tv_user (id, data)
  SELECT id, data FROM v_user WHERE id = user_id;
END;
$$ LANGUAGE plpgsql;

For non-critical updates, sync asynchronously:

-- Queue sync instead of immediate
INSERT INTO sync_queue (table_name, record_id)
VALUES ('tv_user', user_id);
-- Background worker processes queue
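A background worker then drains the queue in batches, deduplicating (table, id) pairs so each record is synced at most once per pass. This is an in-memory sketch of the control flow; a real worker would read sync_queue rows over a database connection and call the per-table sync functions:

```python
def drain_sync_queue(rows, sync_fns):
    """Process queued (table_name, record_id) pairs, calling the matching
    sync function once per unique record (duplicates are skipped)."""
    seen = set()
    for table_name, record_id in rows:
        key = (table_name, record_id)
        if key in seen:
            continue  # already synced in this pass
        seen.add(key)
        sync_fns[table_name](record_id)
    return len(seen)

synced = []
queue = [("tv_user", "u1"), ("tv_user", "u1"), ("tv_user", "u2")]
count = drain_sync_queue(queue, {"tv_user": synced.append})
print(count, synced)  # 2 ['u1', 'u2']
```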

Enable PostgreSQL slow query logging:

ALTER SYSTEM SET log_min_duration_statement = 100; -- Log queries > 100ms

pg_stat_statements tracks execution statistics for all SQL statements. Enable it to identify your slowest queries:

-- Enable the extension (requires superuser, done once per database)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Find the top 10 slowest queries by average execution time
SELECT
  query,
  calls,
  round(mean_exec_time::numeric, 2) AS avg_ms,
  round(total_exec_time::numeric, 2) AS total_ms
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
-- Reset statistics after tuning
SELECT pg_stat_statements_reset();

You may need to set shared_preload_libraries = 'pg_stat_statements' in postgresql.conf and restart PostgreSQL before the extension is available.

Use EXPLAIN ANALYZE:

EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)
SELECT data FROM tv_post
WHERE data->>'is_published' = 'true'
ORDER BY data->>'published_at' DESC
LIMIT 20;

Monitor these metrics:

Metric            Target         Action if exceeded
p99 latency       < 100ms        Add indexes, cache
DB connections    < 80% of max   Increase pool or use PgBouncer
Cache hit rate    > 90%          Increase TTL, cache size
Query rate        Depends        Scale horizontally
# Using wrk
wrk -t12 -c400 -d30s \
-s post.lua \
http://localhost:8080/graphql

# post.lua
wrk.method = "POST"
wrk.headers["Content-Type"] = "application/json"
wrk.body = '{"query": "{ users(limit: 20) { id name } }"}'

FraiseQL illustrative baselines (single instance, 4 cores, 16GB RAM):

Query Type              RPS       p50    p99
Simple read             50,000+   1ms    5ms
Nested read (depth 3)   20,000+   3ms    15ms
List + filter           15,000+   5ms    25ms
Mutation                5,000+    10ms   50ms

Performance checklist:
  • Connection pool sized correctly
  • Read replica configured
  • PostgreSQL tuned for workload
  • Indexes on all filtered columns
  • JSONB GIN indexes on projection tables
  • Partial indexes for common filters
  • Response caching enabled
  • APQ enabled
  • Cache invalidation configured
  • Redis/memory sized appropriately
  • Max depth configured
  • Max complexity configured
  • Pagination enforced
  • No N+1 queries (FraiseQL handles this)
  • Horizontal scaling configured
  • Health checks active
  • Metrics collection enabled
  • Slow query logging enabled

Performance Benchmarks

Detailed benchmark results and methodology for FraiseQL performance claims.

Schema Design

Design your GraphQL schema for optimal query performance.

Caching

Advanced caching strategies including edge caching and CDN integration.

Scaling

Horizontal scaling, load balancing, and deployment topologies.