Performance Tuning
Production configuration — connection pooling, cache settings, query optimisation
When Rolls-Royce was asked about the horsepower of their engine, the answer was “sufficient.” That is also the honest answer for FraiseQL’s throughput.
Independent data from VelocityBench — an open-source suite testing 35 frameworks under identical conditions — puts FraiseQL in the same performance tier as the fastest GraphQL frameworks available. The gap to the leader is within benchmark noise. Chasing that gap is not the point.
The point is where the complexity lives.
Source: VelocityBench 2026-02-21. Load tool: hey v0.1.5, 50 connections, 2,000 requests × 3 runs (median reported), PostgreSQL 15, localhost.
| Framework | Q1 — shallow list | Q3 — 3-level nested | Q1→Q3 change |
|---|---|---|---|
| go-gqlgen | 30,994 RPS | 37,047 RPS | +20% |
| fraiseql | 29,672 RPS | 27,872 RPS | −6% |
| async-graphql | 22,842 RPS | 9,562 RPS | −58% |
| strawberry | 1,108 RPS | 708 RPS | −36% |
Q1 is a flat list query. Q3 is the N+1 stress test — users → posts → comments, three levels deep.
FraiseQL and go-gqlgen are within 5% on Q1. On Q3, go-gqlgen pulls ahead. async-graphql loses 58% of throughput under nesting despite hand-written DataLoaders. strawberry degrades across the board.
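The change column is just the relative Q1→Q3 delta. A quick sketch, using the RPS figures from the table above:

```python
# Relative throughput change from Q1 (shallow list) to Q3 (3-level nested),
# computed from the RPS figures in the table above.
rps = {
    "go-gqlgen":     (30994, 37047),
    "fraiseql":      (29672, 27872),
    "async-graphql": (22842, 9562),
    "strawberry":    (1108, 708),
}

for name, (q1, q3) in rps.items():
    change = (q3 - q1) / q1 * 100
    print(f"{name:14s} {change:+.0f}%")
```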
| Framework | Q1 p99 | Q3 p99 |
|---|---|---|
| fraiseql | 5.6 ms | 5.9 ms |
| go-gqlgen | 20.6 ms | 3.7 ms |
| async-graphql | 16.0 ms | 10.0 ms |
| strawberry | 91.5 ms | 115.3 ms |
FraiseQL’s p99 barely moves between query depths: 5.6 ms → 5.9 ms. This matters for SLA budgets — query depth does not blow your tail latency.
FraiseQL: 16 MB RSS after a full load test. go-gqlgen: 41 MB. strawberry: 52 MB.
The 5% throughput gap between FraiseQL and go-gqlgen is real. In production — behind a load balancer, with a connection pool, on queries shaped to your actual workload — it is not measurable. Both frameworks are fast enough that the database becomes the bottleneck long before the framework does.
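To put that 5% in absolute terms, a back-of-envelope conversion of the Q1 throughput figures into per-request cost (this is arithmetic on the table above, not a new measurement):

```python
# Per-request cost implied by the Q1 throughput numbers.
# A 5% RPS gap sounds large; per request it is about a microsecond.
gqlgen_rps   = 30994
fraiseql_rps = 29672

per_req_gqlgen   = 1e6 / gqlgen_rps    # microseconds per request
per_req_fraiseql = 1e6 / fraiseql_rps

print(f"go-gqlgen:  {per_req_gqlgen:.1f} us/req")
print(f"fraiseql:   {per_req_fraiseql:.1f} us/req")
print(f"difference: {per_req_fraiseql - per_req_gqlgen:.1f} us/req")
```

The difference works out to roughly 1.4 µs per request, which disappears next to a single network round-trip to the database.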
What the benchmark cannot capture is what you had to build to get there.
go-gqlgen generates a schema-first Go skeleton and gives you a fast foundation. You then write DataLoaders, one per relation, to protect against N+1. You wire them into context middleware. You test them. You update them when the schema changes. The framework is fast because it gets out of the way — and that means the complexity is yours to carry.
FraiseQL resolves N+1 at the database level, before the application sees the query. A SQL view pre-composes the nested JSONB structure. The framework executes a single `SELECT` against `v_post` regardless of how deep the GraphQL query goes. There are no DataLoaders to write or maintain.
```sql
-- v_post.sql
-- This IS the resolver. One view. One query at runtime. No DataLoader.
CREATE OR REPLACE VIEW v_post AS
SELECT
    p.id,
    p.title,
    p.content,
    jsonb_build_object(
        'id', u.id,
        'username', u.username
    ) AS author,
    COALESCE(
        jsonb_agg(
            jsonb_build_object('id', c.id, 'content', c.content)
            ORDER BY c.created_at
        ) FILTER (WHERE c.id IS NOT NULL),
        '[]'
    ) AS comments
FROM tb_post p
JOIN tb_user u ON u.pk_user = p.fk_author
LEFT JOIN tb_comment c ON c.fk_post = p.pk_post
GROUP BY p.pk_post, u.pk_user;
```

A DBA can read this, run EXPLAIN ANALYZE on it, add an index, and redeploy — without touching application code.
```rust
// One DataLoader per relation — multiply by every relation in your schema.
// When the schema changes, update this too.
#[derive(Clone)]
struct CommentsByPostLoader(Arc<Pool<Postgres>>);

#[async_trait::async_trait]
impl Loader<Uuid> for CommentsByPostLoader {
    type Value = Vec<Comment>;
    type Error = sqlx::Error;

    async fn load(&self, keys: &[Uuid]) -> Result<HashMap<Uuid, Vec<Comment>>, Self::Error> {
        let rows = sqlx::query_as!(
            Comment,
            "SELECT * FROM comments WHERE post_id = ANY($1)",
            keys as _
        )
        .fetch_all(&*self.0)
        .await?;

        let mut map: HashMap<Uuid, Vec<Comment>> = HashMap::new();
        for row in rows {
            map.entry(row.post_id).or_default().push(row);
        }
        Ok(map)
    }
}
```

```go
// dataloader.go — one batch function per relation, wired into HTTP middleware.
func NewLoaders(db *sql.DB) *Loaders {
	return &Loaders{
		CommentsByPostID: dataloader.NewBatchedLoader(
			func(ctx context.Context, keys dataloader.Keys) []*dataloader.Result {
				ids := make([]string, len(keys))
				for i, k := range keys {
					ids[i] = k.String()
				}
				rows, _ := db.QueryContext(ctx,
					`SELECT id, post_id, content FROM comments WHERE post_id = ANY($1::uuid[])`,
					pq.Array(ids))
				// scan + group by post_id...
			},
			dataloader.WithWait(2*time.Millisecond),
		),
		// PostsByUserID: ...,
		// TagsByPostID: ...,
	}
}
```

Both approaches work. Both are fast. The question is where you want the complexity: in a SQL file that the database can explain, or in application code that only your compiler can verify.
All numbers on this page come from the VelocityBench public reports. Clone the repo and run `make bench` to reproduce them on your own hardware.
```shell
git clone https://github.com/fraiseql/velocitybench
cd velocitybench
make bench FRAMEWORK=fraiseql
```
VelocityBench: github.com/fraiseql/velocitybench — reproduce these results yourself