
Caching

FraiseQL supports response caching for GraphQL queries, which can dramatically reduce database load for read-heavy workloads.

FraiseQL supports three caching backends:

| Backend      | Use Case                        | Persistence                                |
| ------------ | ------------------------------- | ------------------------------------------ |
| `redis`      | Production, multi-instance      | Survives restarts, shared across instances |
| `memory`     | Development, single-instance    | Per-process, lost on restart               |
| `postgresql` | Fallback when Redis unavailable | Stored in a database table                 |
  1. Start a Redis instance

    docker run -d --name redis -p 6379:6379 redis:7-alpine
  2. Configure the [fraiseql.caching] block

    [fraiseql.caching]
    enabled = true
    backend = "redis"
    redis_url = "redis://localhost:6379"
    max_memory_entries = 1000
  3. Start FraiseQL

    fraiseql run

    On startup, FraiseQL connects to Redis and logs confirmation that caching is active.

For local development or single-instance deployments:

[fraiseql.caching]
enabled = true
backend = "memory"
max_memory_entries = 1000

The in-memory backend requires no external services but:

  • Cache is lost when FraiseQL restarts
  • Each instance has independent cache in multi-instance setups
  • Limited by available heap memory
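The entry cap and TTL behavior described above can be sketched with a toy in-memory cache (a simplified illustration under assumed semantics, not FraiseQL's actual implementation — the class and key names here are hypothetical):

```python
import time
from collections import OrderedDict


class MemoryCache:
    """Toy per-process TTL cache with an entry cap, evicting the oldest entry first."""

    def __init__(self, max_entries: int = 1000):
        self.max_entries = max_entries
        self._store: OrderedDict[str, tuple[float, object]] = OrderedDict()

    def set(self, key: str, value: object, ttl_seconds: float) -> None:
        if len(self._store) >= self.max_entries:
            self._store.popitem(last=False)  # cap reached: evict the oldest entry
        self._store[key] = (time.monotonic() + ttl_seconds, value)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None  # miss
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired entry is dropped on read
            return None
        return value


cache = MemoryCache(max_entries=2)
cache.set("posts:all", ["post-1"], ttl_seconds=300)
cache.set("users:42", {"id": 42}, ttl_seconds=60)
cache.set("config", {}, ttl_seconds=3600)  # cap reached: "posts:all" is evicted
print(cache.get("posts:all"))  # → None
print(cache.get("users:42"))   # → {'id': 42}
```

Because everything lives in one process dictionary, a restart or a second instance starts from an empty `_store` — the limitation the bullets above describe.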

Cache TTL is configured per query in your Python schema using the cache_ttl_seconds parameter:

import fraiseql

@fraiseql.query(
    sql_source="v_post",
    cache_ttl_seconds=300,  # Cache this query for 5 minutes
)
def posts(category: str | None = None) -> list[Post]:
    """List posts, optionally filtered by category."""
    pass

@fraiseql.query(
    sql_source="v_user",
    cache_ttl_seconds=60,  # Short TTL — user data changes frequently
)
def user_profile(user_id: fraiseql.ID) -> User | None:
    pass

@fraiseql.query(
    sql_source="v_config",
    cache_ttl_seconds=3600,  # 1-hour TTL — config rarely changes
)
def site_config() -> SiteConfig:
    pass

A cache_ttl_seconds of 0 disables caching for that query entirely (useful to opt out when the global default is set).

| Data type                                          | Recommended TTL | Rationale                          |
| -------------------------------------------------- | --------------- | ---------------------------------- |
| Public static content (site config, feature flags) | 3600 s (1 hour) | Changes rarely, safe to cache long |
| Product listings, post feeds                       | 300 s (5 min)   | Moderate freshness requirement     |
| User-specific data                                 | 60 s (1 min)    | May change across sessions         |
| Real-time or financial data                        | No caching      | Staleness is not acceptable        |

FraiseQL invalidates cache entries automatically when mutations change data. When a mutation runs, FraiseQL detects which SQL views the mutation affects and purges cached responses for queries that read from those views.

Declare which views a mutation invalidates using the invalidates decorator parameter:

@fraiseql.mutation(
    fn_name="create_post",
    invalidates=["v_post"],  # Purge cache entries for queries backed by v_post
)
def create_post(input: PostInput) -> PostResult:
    pass

When createPost executes:

  1. FraiseQL runs the mutation function
  2. Purges all cache entries for queries backed by the listed views
  3. Subsequent queries fetch fresh data from the database
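The three steps above can be modeled with a small sketch: each cache key records which SQL view backs it, and a mutation purges every key whose view appears in its `invalidates` list (an illustrative model with made-up key names, not FraiseQL's internal data structures):

```python
# Cached responses, keyed by query name + arguments (keys are illustrative).
cache: dict[str, object] = {
    "posts:{'category': None}": ["post-1", "post-2"],
    "posts:{'category': 'news'}": ["post-2"],
    "site_config:{}": {"theme": "dark"},
}

# Which SQL view backs each cached entry.
view_for_key = {
    "posts:{'category': None}": "v_post",
    "posts:{'category': 'news'}": "v_post",
    "site_config:{}": "v_config",
}


def run_mutation(invalidates: list[str]) -> None:
    """Purge every cached entry backed by one of the invalidated views."""
    stale = [key for key, view in view_for_key.items() if view in invalidates]
    for key in stale:
        cache.pop(key, None)


run_mutation(invalidates=["v_post"])  # createPost purges only v_post-backed entries
print(sorted(cache))  # → ['site_config:{}']
```

Note that the `v_config`-backed entry survives: invalidation is scoped to the views the mutation declares, which is why narrow `invalidates` lists preserve cache effectiveness.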

In some cases, you may need to invalidate cache manually (e.g., administrative changes):

from fraiseql import cache

# Invalidate all entries matching a pattern
await cache.invalidate_pattern("posts:*")

# Invalidate specific entity cache
await cache.invalidate_entity("Post", post_id="123")

All caching settings can be overridden via environment variables:

| Variable                   | Overrides                    |
| -------------------------- | ---------------------------- |
| `FRAISEQL_CACHING_ENABLED` | `fraiseql.caching.enabled`   |
| `FRAISEQL_CACHING_BACKEND` | `fraiseql.caching.backend`   |
| `FRAISEQL_REDIS_URL`       | `fraiseql.caching.redis_url` |

Example:

export FRAISEQL_CACHING_ENABLED=true
export FRAISEQL_REDIS_URL=redis://prod-redis:6379
fraiseql run

FraiseQL exposes Prometheus metrics for cache performance at the /metrics endpoint:

| Metric                               | Type    | Description                         |
| ------------------------------------ | ------- | ----------------------------------- |
| `fraiseql_cache_hits_total`          | Counter | Total cache hits across all queries |
| `fraiseql_cache_misses_total`        | Counter | Total cache misses                  |
| `fraiseql_cache_invalidations_total` | Counter | Total cache entries invalidated     |

Cache hit rate over a 5-minute window:

sum(rate(fraiseql_cache_hits_total[5m]))
/
(
sum(rate(fraiseql_cache_hits_total[5m]))
+ sum(rate(fraiseql_cache_misses_total[5m]))
)

A healthy read-heavy API typically achieves a hit rate above 80%. A rate below 50% usually indicates one or more of the following:

  • TTLs are too short relative to request frequency
  • Mutations are invalidating entries too aggressively
  • The query patterns don’t match the cache rules

Invalidation rate relative to hits:

rate(fraiseql_cache_invalidations_total[5m])
/
rate(fraiseql_cache_hits_total[5m])

If this ratio exceeds 0.1 (10%), review your invalidation triggers — they may be too broad.
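The two PromQL expressions above reduce to simple ratios; with hypothetical counter deltas over a 5-minute window, the arithmetic looks like this:

```python
# Hypothetical counter increases over a 5-minute window (sample values only).
hits, misses, invalidations = 8_400, 1_600, 500

hit_rate = hits / (hits + misses)           # mirrors the hit-rate PromQL above
invalidation_ratio = invalidations / hits   # mirrors the invalidation-rate PromQL

print(f"hit rate: {hit_rate:.0%}")                      # → hit rate: 84%
print(f"invalidation ratio: {invalidation_ratio:.2f}")  # → invalidation ratio: 0.06

assert hit_rate > 0.8              # healthy for a read-heavy API
assert invalidation_ratio <= 0.1   # invalidation triggers are not overly broad
```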

“Caching enabled but no cache hits”

Verify that caching is enabled and that your queries actually set a cache TTL:

# Check caching is enabled
grep -A5 '\[fraiseql.caching\]' fraiseql.toml
# Verify your queries have cache_ttl_seconds set in the Python schema
grep cache_ttl_seconds schema/*.py

“Cache invalidation not working”

  1. Check that the mutation returns the correct entity type name
  2. Verify the invalidates list on the mutation includes that exact view name (case-sensitive)
  3. Ensure the entity name in the rule matches the name in your schema

“Out of memory with in-memory cache”

The in-memory backend has no automatic eviction. For production use cases, switch to Redis:

[fraiseql.caching]
backend = "redis"
redis_url = "redis://localhost:6379"

“Redis connection errors”

Verify Redis is accessible:

redis-cli -h localhost -p 6379 ping
# Should return: PONG

Check the connection URL format:

# Standard format
redis_url = "redis://localhost:6379"
# With password
redis_url = "redis://:password@localhost:6379"
# With database number
redis_url = "redis://localhost:6379/0"

Use specific invalidation views. Broad invalidation reduces cache effectiveness:

# Good: Only invalidate the view this mutation affects
@fraiseql.mutation(fn_name="create_post", invalidates=["v_post"])
# Avoid: Invalidating unrelated views
@fraiseql.mutation(fn_name="create_post", invalidates=["v_post", "v_user", "v_config"])

Choose TTLs based on data change frequency. Static data can cache longer; frequently changing data needs shorter TTLs or no caching.

Use Redis for production. The in-memory backend is convenient for development but unsuitable for production multi-instance deployments.

Monitor hit rates. Alert when cache hit rate drops below your threshold (typically 70-80%).

Test invalidation. After implementing caching, verify that mutations correctly invalidate cache entries by:

  1. Executing a cached query (should hit cache)
  2. Running a mutation that should invalidate it
  3. Executing the same query again (should hit database, not cache)
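The three-step check above can be rehearsed in miniature with a stub cache and "database" standing in for a running FraiseQL instance (all names here are illustrative, not FraiseQL APIs):

```python
# Stub storage: a "database" view and a response cache.
db = {"v_post": ["first post"]}
cache: dict[str, list[str]] = {}


def query_posts() -> tuple[list[str], str]:
    """Return (result, source) so the test can see where data came from."""
    if "posts" in cache:
        return cache["posts"], "cache"
    result = list(db["v_post"])
    cache["posts"] = result
    return result, "database"


def create_post(title: str) -> None:
    db["v_post"].append(title)
    cache.pop("posts", None)  # mutation invalidates the v_post-backed entry


query_posts()                    # 1. first read warms the cache...
_, source = query_posts()
assert source == "cache"         #    ...so the second read is a hit
create_post("second post")       # 2. mutation invalidates the entry
result, source = query_posts()   # 3. re-query
assert source == "database"      #    fresh read, not a stale cached response
assert result == ["first post", "second post"]
```

Against a real deployment, the same sequence can be observed externally by watching `fraiseql_cache_hits_total` and `fraiseql_cache_invalidations_total` tick over between the three steps.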

Current caching implementation has these limitations:

  1. No cache key customization: Cache keys are automatically derived from query name and arguments. You cannot customize the key pattern.
  2. No per-query cache headers: Unlike some GraphQL servers, FraiseQL does not add X-Cache-Status headers to responses.
  3. No cache warming: There is no built-in mechanism to pre-populate the cache.
  4. PostgreSQL backend limitations: The PostgreSQL cache backend is slower than Redis and not recommended for high-throughput scenarios.
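Limitation 1 means every cached response is keyed the same way. Conceptually, deriving a key from the query name plus canonicalized arguments might look like the following (a sketch only — FraiseQL's real key format is not documented here and may differ):

```python
import hashlib
import json


def cache_key(query_name: str, arguments: dict) -> str:
    """Derive a deterministic key from query name + sorted arguments
    (a conceptual sketch, not FraiseQL's actual key format)."""
    canonical = json.dumps(arguments, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).hexdigest()[:12]
    return f"{query_name}:{digest}"


k1 = cache_key("posts", {"category": "news", "limit": 10})
k2 = cache_key("posts", {"limit": 10, "category": "news"})
print(k1 == k2)  # → True: argument order does not change the key
```

Sorting the arguments before hashing is what makes the key stable regardless of how a client orders its GraphQL variables.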


Observers — Trigger cache invalidation via database events


Performance Guide — N+1 elimination, projection tables, and query optimization