
File Storage

FraiseQL handles file uploads through a dedicated multipart HTTP endpoint managed by the Rust runtime. Uploaded files are stored to a configured backend and referenced in your GraphQL schema as ordinary str URL fields — no special scalar type is needed in your Python schema definition.

File uploads in FraiseQL work as follows:

  1. The client sends a standard multipart/form-data POST to a dedicated upload endpoint (not to /graphql)
  2. The Rust runtime validates the file, optionally processes images, and stores it to the configured backend
  3. The runtime returns a JSON response containing the file url and metadata
  4. Your GraphQL mutation stores that URL in the database via a normal fn_ function

The Python SDK has no Upload type. File URL fields in your schema are plain str.
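
To make step 4 concrete, here is a hypothetical client-side helper (not part of the FraiseQL SDK) that maps the upload endpoint's JSON response from step 3 into the input variables for a recording mutation. The field names on both sides are assumptions; adjust them to your schema and your runtime's actual response shape:

```python
# Hypothetical helper: turn the upload endpoint's JSON response into
# GraphQL mutation variables. Field names are illustrative assumptions.
def upload_response_to_input(resp: dict, backend: str = "s3") -> dict:
    return {
        # The storage key may be returned as "id" or a dedicated field;
        # check your runtime's upload response before relying on this.
        "identifier": resp["id"],
        "originalFilename": resp["name"],
        "mimeType": resp["content_type"],
        "sizeBytes": resp["size"],
        "url": resp["url"],
        "storageBackend": backend,
    }
```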

The following backends are available:

| Backend | Provider | Feature flag |
| --- | --- | --- |
| local | Local filesystem | Always enabled |
| s3 | Amazon S3 | aws-s3 feature |
| gcs | Google Cloud Storage | gcs feature |
| azure | Azure Blob Storage | azure-blob feature |
| s3 (compatible) | Cloudflare R2, MinIO | aws-s3 feature |
| s3 (compatible) | Scaleway Object Storage | aws-s3 feature |
| s3 (compatible) | OVHcloud Object Storage | aws-s3 feature |
| s3 (compatible) | Clever Cloud Cellar | aws-s3 feature |
| s3 (compatible) | Exoscale SOS | aws-s3 feature |
| s3 (compatible) | Infomaniak Swiss Backup | aws-s3 feature |

S3-compatible providers (Cloudflare R2, Scaleway, OVH, Clever Cloud, Exoscale, Infomaniak) use the s3 backend with a custom endpoint_env. See the S3-compatible configuration section below.

File storage is configured under [storage.*] (named storage backends) and [files.*] (named upload endpoints). Each upload endpoint references a storage backend by name.

```toml
# Define a named storage backend
[storage.default]
backend = "local"
base_path = "./uploads"
serve_path = "/files"

# Define a named upload endpoint that uses it
[files.avatars]
storage = "default"
max_size = "5MB"
allowed_types = ["image/jpeg", "image/png", "image/webp"]
validate_magic_bytes = true
public = true
```

Credentials are referenced by environment variable name — not interpolated directly into the TOML file.

```toml
# Define an S3 storage backend
[storage.primary]
backend = "s3"
region = "us-east-1"
bucket_env = "S3_BUCKET"
access_key_env = "AWS_ACCESS_KEY_ID"
secret_key_env = "AWS_SECRET_ACCESS_KEY"
public_url = "https://my-bucket.s3.amazonaws.com"

# Define upload endpoints
[files.avatars]
storage = "primary"
max_size = "5MB"
allowed_types = ["image/jpeg", "image/png", "image/webp"]
validate_magic_bytes = true
public = true

[files.documents]
storage = "primary"
max_size = "50MB"
allowed_types = ["application/pdf", "text/csv"]
validate_magic_bytes = true
public = false
url_expiry = "1h"
```
For Google Cloud Storage:

```toml
[storage.primary]
backend = "gcs"
bucket_env = "GCS_BUCKET"
credentials_file_env = "GOOGLE_APPLICATION_CREDENTIALS"
public_url = "https://storage.googleapis.com/my-bucket"
```

For Azure Blob Storage:

```toml
[storage.primary]
backend = "azure"
container_env = "AZURE_CONTAINER"
connection_string_env = "AZURE_STORAGE_CONNECTION_STRING"
public_url = "https://myaccount.blob.core.windows.net/my-container"
```

For S3-compatible services (MinIO, Cloudflare R2, Scaleway, OVH, Clever Cloud, Exoscale, Infomaniak), use the s3 backend with a custom endpoint:

```toml
[storage.minio]
backend = "s3"
bucket_env = "MINIO_BUCKET"
access_key_env = "MINIO_ACCESS_KEY"
secret_key_env = "MINIO_SECRET_KEY"
endpoint_env = "MINIO_ENDPOINT"
```

Set the referenced environment variables:

```sh
export MINIO_BUCKET=uploads
export MINIO_ACCESS_KEY=minioadmin
export MINIO_SECRET_KEY=minioadmin
export MINIO_ENDPOINT=http://minio:9000
```

The following fields are valid on a [files.*] section:

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| storage | string | "default" | Name of the [storage.*] backend to use |
| path | string | /files/{name} | Upload endpoint path override |
| max_size | string | "10MB" | Maximum file size ("5MB", "100KB", "1GB") |
| allowed_types | array | see below | Allowed MIME types |
| validate_magic_bytes | bool | true | Verify file content matches declared MIME type |
| public | bool | true | Whether files are publicly accessible |
| cache | string | | Cache duration for public files (e.g., "7d") |
| url_expiry | string | | Expiry for private file signed URLs (e.g., "1h") |
| scan_malware | bool | false | Enable malware scanning (requires external scanner) |
| processing | section | | Image processing configuration |
| on_upload | section | | Database callback after upload |

Default allowed MIME types (when allowed_types is not set): image/jpeg, image/png, image/webp, image/gif, application/pdf.
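
The max_size strings in the table above ("5MB", "100KB", "1GB") are human-readable sizes. As a sketch of one plausible interpretation, using decimal units (whether the runtime reads "MB" as 10^6 or 2^20 bytes is an assumption to verify against the runtime's behavior):

```python
import re

# Decimal-unit interpretation of size strings -- an assumption, not a
# statement of FraiseQL's exact parsing rules.
_UNITS = {"KB": 10**3, "MB": 10**6, "GB": 10**9}

def parse_size(value: str) -> int:
    """Parse a size string like "5MB" into a byte count."""
    m = re.fullmatch(r"(\d+)(KB|MB|GB)", value.strip(), re.IGNORECASE)
    if not m:
        raise ValueError(f"unrecognized size: {value!r}")
    return int(m.group(1)) * _UNITS[m.group(2).upper()]
```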

When uploading images, the Rust runtime can strip EXIF metadata, convert format, and generate named size variants.

```toml
[files.avatars]
storage = "primary"
max_size = "10MB"
allowed_types = ["image/jpeg", "image/png", "image/webp"]
validate_magic_bytes = true

[files.avatars.processing]
strip_exif = true
output_format = "webp"
quality = 85

[[files.avatars.processing.variants]]
name = "small"
width = 150
height = 150
mode = "fit"

[[files.avatars.processing.variants]]
name = "medium"
width = 400
height = 400
mode = "fit"
```

Valid mode values: fit, fill, crop.
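
The mode names mirror common image-library semantics. As an illustration (an assumption about FraiseQL's exact behavior, not a guarantee), "fit" scales the image so it fits inside the target box while preserving aspect ratio:

```python
# Illustrative "fit" (a.k.a. contain) resize math; "fill" would use
# max() instead of min(), and "crop" would cut to the box after scaling.
def fit_dimensions(src_w: int, src_h: int, box_w: int, box_h: int) -> tuple[int, int]:
    """Scale (src_w, src_h) to fit inside (box_w, box_h), keeping aspect ratio."""
    scale = min(box_w / src_w, box_h / src_h)
    return round(src_w * scale), round(src_h * scale)
```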

The upload response includes a variants map with URLs for each generated size.
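
A client can select a variant with a small fallback helper. This is a hypothetical sketch based on the response shape described on this page, not SDK code:

```python
# Hypothetical helper: prefer a named variant URL, fall back to the
# original file's URL when the variant is missing.
def variant_url(resp: dict, name: str) -> str:
    return resp.get("variants", {}).get(name, resp["url"])
```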

Store file metadata in your database using the trinity pattern. The url column holds the string URL returned by the upload endpoint.

```sql
CREATE TABLE tb_file (
    pk_file BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    id UUID DEFAULT gen_random_uuid() UNIQUE NOT NULL,
    identifier TEXT UNIQUE NOT NULL, -- e.g. storage key: "avatars/uuid.webp"
    original_filename TEXT NOT NULL,
    mime_type TEXT NOT NULL,
    size_bytes BIGINT NOT NULL,
    url TEXT NOT NULL,
    storage_backend TEXT NOT NULL DEFAULT 'local',
    fk_user BIGINT NOT NULL REFERENCES tb_user(pk_user),
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE UNIQUE INDEX idx_tb_file_id ON tb_file(id);
CREATE INDEX idx_tb_file_fk_user ON tb_file(fk_user);
CREATE INDEX idx_tb_file_mime_type ON tb_file(mime_type);

CREATE VIEW v_file AS
SELECT
    f.id,
    jsonb_build_object(
        'id', f.id::text,
        'identifier', f.identifier,
        'original_filename', f.original_filename,
        'mime_type', f.mime_type,
        'size_bytes', f.size_bytes,
        'url', f.url,
        'storage_backend', f.storage_backend,
        'created_at', f.created_at
    ) AS data
FROM tb_file f;

CREATE OR REPLACE FUNCTION fn_create_file(
    p_identifier TEXT,
    p_original_filename TEXT,
    p_mime_type TEXT,
    p_size_bytes BIGINT,
    p_url TEXT,
    p_storage_backend TEXT,
    p_user_id UUID
)
RETURNS mutation_response
LANGUAGE plpgsql
AS $$
DECLARE
    v_pk_user BIGINT; -- matches tb_user.pk_user (BIGINT identity)
    v_file_id UUID := gen_random_uuid();
    v_result mutation_response;
BEGIN
    SELECT pk_user INTO v_pk_user FROM tb_user WHERE id = p_user_id;
    IF v_pk_user IS NULL THEN
        v_result.status := 'failed:not_found';
        v_result.message := 'User not found';
        RETURN v_result;
    END IF;

    INSERT INTO tb_file (
        id, identifier, original_filename, mime_type,
        size_bytes, url, storage_backend, fk_user
    ) VALUES (
        v_file_id, p_identifier, p_original_filename, p_mime_type,
        p_size_bytes, p_url, p_storage_backend, v_pk_user
    );

    v_result.status := 'success';
    v_result.entity_id := v_file_id;
    v_result.entity_type := 'File';
    v_result.entity := (SELECT data FROM v_file WHERE id = v_file_id);
    RETURN v_result;
END;
$$;
```

File URL fields are plain str in Python. There is no Upload type in the FraiseQL SDK.

```python
import fraiseql
from fraiseql.scalars import ID, DateTime


@fraiseql.type
class File:
    """A stored file record."""

    id: ID
    identifier: str
    original_filename: str
    mime_type: str
    size_bytes: int
    url: str  # The URL returned by the upload endpoint
    storage_backend: str
    created_at: DateTime


@fraiseql.error
class FileError:
    message: str
    code: str


@fraiseql.input
class CreateFileInput:
    identifier: str
    original_filename: str
    mime_type: str
    size_bytes: int
    url: str
    storage_backend: str


@fraiseql.query
def files(limit: int = 20, offset: int = 0) -> list[File]:
    return fraiseql.config(sql_source="v_file")


@fraiseql.query
def file(id: ID) -> File | None:
    return fraiseql.config(sql_source="v_file")


@fraiseql.mutation(
    sql_source="fn_create_file",
    operation="CREATE",
    inject={"user_id": "jwt:sub"},
)
def create_file(input: CreateFileInput) -> File: ...
```

The client uploads the file first, then calls the GraphQL mutation to record it.

Send a multipart/form-data POST to the upload endpoint:

```js
async function uploadFile(file) {
  const formData = new FormData();
  formData.append('file', file);

  const response = await fetch('/files/avatars', {
    method: 'POST',
    headers: { 'Authorization': `Bearer ${token}` },
    body: formData,
  });

  // Returns: { id, name, url, content_type, size, variants, created_at }
  return response.json();
}
```

After upload succeeds, call the mutation with the returned URL:

```graphql
mutation RecordFile($input: CreateFileInput!) {
  createFile(input: $input) {
    id
    url
    originalFilename
    createdAt
  }
}
```

with variables:

```json
{
  "input": {
    "identifier": "avatars/018e1234-....webp",
    "originalFilename": "photo.jpg",
    "mimeType": "image/jpeg",
    "sizeBytes": 102400,
    "url": "https://my-bucket.s3.amazonaws.com/avatars/018e1234-....webp",
    "storageBackend": "s3"
  }
}
```

Instead of a two-step flow, you can configure an automatic database callback that runs after each successful upload. The runtime calls the specified SQL function with the upload result.

```toml
[files.avatars.on_upload]
function = "fn_create_file"
mapping = { identifier = "key", original_filename = "original_filename", mime_type = "content_type", size_bytes = "size", url = "url" }
```
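
The mapping reads as: each SQL function parameter (left-hand side) is filled from the named key of the upload result (right-hand side). A Python sketch of the substitution, purely to illustrate the semantics rather than the runtime's actual code:

```python
# Illustrative substitution: for each (parameter, source_key) pair in the
# mapping, pull source_key out of the upload result.
def apply_mapping(upload_result: dict, mapping: dict) -> dict:
    return {param: upload_result[source] for param, source in mapping.items()}

# The mapping from the TOML example above, expressed as a Python dict.
AVATAR_MAPPING = {
    "identifier": "key",
    "original_filename": "original_filename",
    "mime_type": "content_type",
    "size_bytes": "size",
    "url": "url",
}
```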

For private files (public = false), the runtime generates signed URLs with a configurable expiry:

```toml
[files.documents]
storage = "primary"
max_size = "50MB"
allowed_types = ["application/pdf"]
public = false
url_expiry = "1h"
```

Request a signed URL via the upload endpoint:

```
GET /files/documents/{key}/signed-url
Authorization: Bearer <token>
```

Response:

```json
{
  "url": "https://bucket.s3.amazonaws.com/documents/...?X-Amz-Signature=...",
  "expires_at": "2026-03-02T11:00:00Z"
}
```
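
Client code should treat expires_at as authoritative and re-request a signed URL before the current one lapses. A hypothetical helper (not part of any SDK) that applies a small safety margin so an in-flight request cannot outlive the URL:

```python
from datetime import datetime, timezone

# Hypothetical client-side check: consider a signed URL stale slightly
# before its stated expiry (margin_s seconds of slack).
def is_expired(expires_at: str, *, margin_s: int = 30) -> bool:
    expiry = datetime.fromisoformat(expires_at.replace("Z", "+00:00"))
    remaining = (expiry - datetime.now(timezone.utc)).total_seconds()
    return remaining <= margin_s
```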

By default (validate_magic_bytes = true), FraiseQL reads the actual file content and verifies that the magic bytes match the declared MIME type. A file uploaded with Content-Type: image/jpeg that contains a ZIP header will be rejected.
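
A simplified sketch of the idea, using well-known file signatures. This is not FraiseQL's actual implementation, just an illustration of why the ZIP-as-JPEG upload above fails:

```python
# Well-known leading byte signatures ("magic bytes") for a few formats.
_MAGIC = {
    "image/jpeg": b"\xff\xd8\xff",
    "image/png": b"\x89PNG\r\n\x1a\n",
    "application/zip": b"PK\x03\x04",
}

def matches_declared_type(content: bytes, declared: str) -> bool:
    """Return True only if the content starts with the declared type's signature."""
    sig = _MAGIC.get(declared)
    return sig is not None and content.startswith(sig)
```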

The scan_malware = true flag enables integration with an external malware scanner via the MalwareScanner trait. No built-in scanner implementation is bundled — you must provide a custom implementation. The TOML flag activates the scanning hook; the scanner itself is registered programmatically.

```toml
[files.documents]
scan_malware = true
```

Use a cloud backend in production. The local backend stores files on disk alongside your application. It does not support real signed URLs and is not suitable for multi-instance deployments. Use S3, GCS, Azure Blob, or an S3-compatible European provider (Scaleway, OVH, Clever Cloud, Exoscale, Infomaniak) depending on your region and compliance requirements.

Keep validate_magic_bytes = true. This is the default and should not be disabled. Accepting files based only on the declared MIME type allows malicious uploads.

Set url_expiry for sensitive documents. If files should not be publicly accessible forever, set public = false and url_expiry to a duration appropriate for your use case.

Store the storage key as identifier. Use the storage key (e.g., "avatars/018e1234-....webp") as the identifier column in tb_file. This makes the file uniquely addressable without relying solely on the UUID.

Upload Rejected (415 Unsupported Media Type)


The MIME type is not in allowed_types. Either add the type to the list or check that the client is sending the correct Content-Type header.

Upload Rejected (File Too Large)

The file exceeds max_size. Increase the limit or compress the file before uploading.

Magic Byte Validation Failed

The file extension or declared content type does not match the actual file content. Ensure the file is not corrupted and that the correct MIME type is declared.

S3 Upload Fails (Access Denied)

  1. Verify the environment variables referenced by access_key_env and secret_key_env are set
  2. Check that the IAM policy grants s3:PutObject, s3:GetObject, and s3:DeleteObject on the bucket
  3. Verify the bucket name is correct via the bucket_env variable

File upload via REST is possible using multipart/form-data if the rest-transport feature includes multipart support. Otherwise, file operations remain GraphQL-only. Check release notes for current multipart status.

See also:

  - Security — File access control and JWT scopes
  - Deployment — Production S3 configuration