Performance Benchmarks
Detailed benchmark results and methodology for FraiseQL performance claims.
This guide covers performance optimization for FraiseQL, from database tuning to caching strategies.
Configure connection pools based on workload:

```toml
[database]
pool_min = 10   # Minimum connections
pool_max = 100  # Maximum connections
```

Sizing formula:

```
max_connections = (core_count * 2) + effective_spindle_count
```

For most deployments:

- `pool_max = 20` (light workloads)
- `pool_max = 50` (moderate workloads)
- `pool_max = 100+` (with PgBouncer)

FraiseQL routes GraphQL queries (reads) to the replica and mutations to the primary. Replica configuration is handled at the infrastructure level: use a `DATABASE_REPLICA_URL` environment variable or a connection pooler such as PgBouncer. There is no `[database.replica]` TOML section.
```bash
# Primary for mutations
DATABASE_URL=postgresql://primary-host/mydb

# Replica for reads (configure in your load balancer or pooler)
DATABASE_REPLICA_URL=postgresql://replica-host/mydb
```

Key PostgreSQL settings for FraiseQL:
```
-- Memory
shared_buffers = '4GB'             -- 25% of RAM
effective_cache_size = '12GB'      -- 75% of RAM
work_mem = '256MB'                 -- Per-operation memory
maintenance_work_mem = '1GB'       -- For VACUUM and index builds

-- Connections
max_connections = 200
superuser_reserved_connections = 3

-- Write performance
wal_buffers = '64MB'
checkpoint_completion_target = 0.9
max_wal_size = '4GB'

-- Query planner
random_page_cost = 1.1             -- For SSD
effective_io_concurrency = 200     -- For SSD
default_statistics_target = 200
```

Every table should have indexes on:
```sql
-- Trinity identifiers
CREATE UNIQUE INDEX idx_user_id ON tb_user(id);
CREATE UNIQUE INDEX idx_user_identifier ON tb_user(identifier);

-- Foreign keys (for JOINs)
CREATE INDEX idx_post_fk_user ON tb_post(fk_user);
CREATE INDEX idx_comment_fk_post ON tb_comment(fk_post);
```

Index JSONB fields used in filtering:
```sql
-- Full GIN index (supports all JSONB operations)
CREATE INDEX idx_tv_user_data ON tv_user USING GIN (data);

-- Specific field indexes (faster for specific lookups)
CREATE INDEX idx_tv_user_email ON tv_user ((data->>'email'));
CREATE INDEX idx_tv_user_active ON tv_user ((data->>'is_active'));

-- Composite expressions
CREATE INDEX idx_tv_post_published ON tv_post ((data->>'is_published'), (data->>'published_at') DESC);
```

Include data in the index to avoid table lookups:

```sql
CREATE INDEX idx_tv_user_email_covering ON tv_user ((data->>'email')) INCLUDE (data);
```

Index only relevant rows:
```sql
-- Only published posts
CREATE INDEX idx_tv_post_published_only ON tv_post ((data->>'published_at') DESC)
WHERE data->>'is_published' = 'true';

-- Only active users
CREATE INDEX idx_tv_user_active_only ON tv_user ((data->>'email'))
WHERE data->>'is_active' = 'true';
```

Keep JSONB payloads reasonably sized:
| Size | Impact |
|---|---|
| < 1 KB | Optimal |
| 1-10 KB | Good |
| 10-100 KB | Acceptable |
| > 100 KB | Consider restructuring |
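The thresholds above can be audited programmatically by measuring each row's serialized payload size; a minimal sketch using only the standard library (the function name and bucket labels are illustrative, not part of FraiseQL):

```python
import json

def payload_bucket(data: dict) -> str:
    """Classify a JSONB payload by its serialized size,
    using the thresholds from the table above."""
    size = len(json.dumps(data).encode("utf-8"))
    if size < 1_024:
        return "optimal"
    if size < 10_240:
        return "good"
    if size < 102_400:
        return "acceptable"
    return "consider restructuring"

print(payload_bucket({"id": 1, "name": "Ada"}))  # optimal
```

Run this over a sample of `tv_*` rows to spot entities drifting toward the "consider restructuring" bucket before they hurt query latency.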
When embedding related data, balance convenience against payload size:

```sql
-- Good: Include frequently-accessed related data
'author', jsonb_build_object('id', u.id, 'name', u.name)

-- Avoid: Including entire large objects
'author', vu.data  -- If v_user.data is very large
```

Use efficient JSONB operations:
```sql
-- Good: Direct access
data->>'field'

-- Good: Path extraction
data #> '{nested,field}'

-- Avoid in WHERE: Complex transformations
jsonb_array_length(data->'items')  -- Not indexable
```

Enable built-in caching:
```toml
[caching]
enabled = true
backend = "redis"
redis_url = "${REDIS_URL}"
```

FraiseQL automatically invalidates cache on mutations. When a mutation modifies an entity, all cached responses containing that entity type are invalidated.
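The invalidation behavior described above can be modeled with a small in-memory sketch: each cached response is tagged with the entity types it contains, and a mutation on a type evicts every matching entry. This illustrates the idea only; it is not FraiseQL's Redis-backed implementation:

```python
class TypeTaggedCache:
    """Cache responses tagged with entity types; a mutation on a
    type evicts every cached response that contains that type."""

    def __init__(self):
        self._entries = {}  # key -> (response, set of entity types)

    def put(self, key, response, entity_types):
        self._entries[key] = (response, set(entity_types))

    def get(self, key):
        entry = self._entries.get(key)
        return entry[0] if entry else None

    def invalidate_type(self, entity_type):
        stale = [k for k, (_, types) in self._entries.items() if entity_type in types]
        for k in stale:
            del self._entries[k]

cache = TypeTaggedCache()
cache.put("users_list", {"users": []}, ["User"])
cache.put("posts_list", {"posts": []}, ["Post", "User"])
cache.invalidate_type("Post")               # evicts posts_list only
print(cache.get("users_list") is not None)  # True
print(cache.get("posts_list"))              # None
```

Note that a response embedding authors inside posts is tagged with both `Post` and `User`, so a mutation on either type evicts it.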
Configure per-query rules:

```toml
[[caching.rules]]
query = "users_*"
ttl_seconds = 300
invalidation_triggers = ["User"]

[[caching.rules]]
query = "posts_*"
ttl_seconds = 60
invalidation_triggers = ["Post"]
```

Reduce query parsing overhead:
Clients send a query hash instead of the full query:

```json
{
  "extensions": {
    "persistedQuery": { "sha256Hash": "abc123..." }
  },
  "variables": { "id": "..." }
}
```

Prevent deeply nested queries:
```toml
[validation]
max_query_depth = 8
```

```graphql
# Allowed (depth 3)
query {
  posts {      # 1
    author {   # 2
      name     # 3
    }
  }
}

# Blocked (depth > 8)
query {
  posts { comments { author { posts { comments { ... } } } } }
}
```

Set complexity limits:
```toml
[validation]
max_query_complexity = 1000
```

Queries whose calculated complexity exceeds the limit are rejected before execution.
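FraiseQL's exact scoring formula is not reproduced here; as an illustration, complexity analysis commonly scores each field as one plus the cost of its children, multiplying child cost by the requested page size for list fields. A sketch under that assumption (the field dictionary shape is invented for the example):

```python
def complexity(field: dict) -> int:
    """Score a field as 1 + cost of its children; list fields multiply
    child cost by the requested page size. A common heuristic, not
    FraiseQL's exact formula."""
    child_cost = sum(complexity(c) for c in field.get("children", []))
    multiplier = field.get("list_size", 1)
    return 1 + multiplier * child_cost

# Roughly models: { posts(limit: 20) { author { name } } }
query = {
    "name": "posts",
    "list_size": 20,
    "children": [
        {"name": "author", "children": [{"name": "name"}]},
    ],
}
print(complexity(query))  # 41
```

Under this heuristic, raising `limit` from 20 to 100 multiplies the score fivefold, which is exactly the abuse pattern a complexity limit is meant to catch.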
Always paginate list queries:

```python
@fraiseql.query(
    sql_source="v_post",
    auto_params={"limit": True, "offset": True},
)
def posts(limit: int = 20, offset: int = 0) -> list[Post]:
    pass
```

```toml
[graphql.pagination]
default_limit = 20
max_limit = 100
```

For bulk operations, sync once after all changes:
```sql
-- Bad: Sync after each insert
INSERT INTO tb_user ...
PERFORM sync_tv_user();  -- Slow!

-- Good: Batch and sync once
INSERT INTO tb_user ...
INSERT INTO tb_user ...
INSERT INTO tb_user ...
PERFORM sync_tv_user();  -- Once
```

For large tables, sync single records:
```sql
CREATE OR REPLACE FUNCTION sync_tv_user_single(user_id UUID)
RETURNS VOID AS $$
BEGIN
  DELETE FROM tv_user WHERE id = user_id;
  INSERT INTO tv_user (id, data)
  SELECT id, data FROM v_user WHERE id = user_id;
END;
$$ LANGUAGE plpgsql;
```

For non-critical updates, sync asynchronously:
```sql
-- Queue the sync instead of running it immediately
INSERT INTO sync_queue (table_name, record_id)
VALUES ('tv_user', user_id);

-- A background worker processes the queue
```

Enable PostgreSQL slow query logging:
```sql
ALTER SYSTEM SET log_min_duration_statement = 100;  -- Log queries > 100ms
```

pg_stat_statements tracks execution statistics for all SQL statements. Enable it to identify your slowest queries:
```sql
-- Enable the extension (requires superuser, done once per database)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Find the top 10 slowest queries by average execution time
SELECT query,
       calls,
       round(mean_exec_time::numeric, 2) AS avg_ms,
       round(total_exec_time::numeric, 2) AS total_ms
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Reset statistics after tuning
SELECT pg_stat_statements_reset();
```

You may need to set `shared_preload_libraries = 'pg_stat_statements'` in postgresql.conf and restart PostgreSQL before the extension is available.
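If you export those statistics to application code (for dashboards or alerts), the ranking above reduces to sorting by `total_exec_time / calls`. A small sketch over hand-made sample rows; the queries and numbers here are illustrative:

```python
def slowest_queries(rows, top=10):
    """Rank (query, calls, total_exec_time_ms) tuples by mean execution
    time, mirroring the pg_stat_statements query above."""
    ranked = sorted(rows, key=lambda r: r[2] / r[1], reverse=True)
    return [(query, round(total / calls, 2)) for query, calls, total in ranked[:top]]

rows = [
    ("SELECT data FROM tv_user WHERE id = $1", 10_000, 5_000.0),
    ("SELECT data FROM tv_post ORDER BY data->>'published_at' DESC", 200, 9_000.0),
]
print(slowest_queries(rows, top=1)[0][1])  # 45.0
```

Note how a query called only 200 times can dominate the mean-time ranking: frequency and cost are separate axes, which is why the SQL above reports both `avg_ms` and `total_ms`.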
Use EXPLAIN ANALYZE:
EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)SELECT data FROM tv_postWHERE data->>'is_published' = 'true'ORDER BY data->>'published_at' DESCLIMIT 20;Monitor these metrics:
| Metric | Target | Action if Exceeded |
|---|---|---|
| p99 latency | < 100ms | Add indexes, cache |
| DB connections | < 80% max | Increase pool or use PgBouncer |
| Cache hit rate | > 90% | Increase TTL, cache size |
| Query rate | Depends | Scale horizontally |
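The p99 target in the table can be computed from raw request latency samples with a nearest-rank percentile; a minimal self-contained sketch:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: index = ceil(pct/100 * n) - 1 on sorted data.
    Good enough for latency monitoring without external dependencies."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered)) - 1
    return ordered[max(0, rank)]

latencies_ms = list(range(1, 101))  # pretend latencies: 1..100 ms
print(percentile(latencies_ms, 99))  # 99
print(percentile(latencies_ms, 50))  # 50
```

In production, feed this from a rolling window of recent request timings rather than the full history, so the p99 reflects current behavior.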
```bash
# Using wrk
wrk -t12 -c400 -d30s \
  -s post.lua \
  http://localhost:8080/graphql
```

```lua
-- post.lua
wrk.method = "POST"
wrk.headers["Content-Type"] = "application/json"
wrk.body = '{"query": "{ users(limit: 20) { id name } }"}'
```

FraiseQL illustrative baselines (single instance, 4 cores, 16GB RAM):
| Query Type | RPS | p50 | p99 |
|---|---|---|---|
| Simple read | 50,000+ | 1ms | 5ms |
| Nested read (depth 3) | 20,000+ | 3ms | 15ms |
| List + filter | 15,000+ | 5ms | 25ms |
| Mutation | 5,000+ | 10ms | 50ms |
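These baselines also connect back to connection-pool sizing: by Little's law, the average number of in-flight requests equals arrival rate times mean latency. A quick sketch (treating p50 as a stand-in for the mean):

```python
def concurrency_estimate(rps: float, latency_s: float) -> float:
    """Little's law: average in-flight requests = arrival rate x mean latency."""
    return rps * latency_s

# Simple reads at 50,000 RPS with ~1 ms median latency:
print(concurrency_estimate(50_000, 0.001))  # 50.0
```

At 50,000 RPS and roughly 1 ms per request, only about 50 requests are in flight at any moment, which is why a modest `pool_max` can sustain very high throughput.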