Files And Network
Model file fields in Inspire and declare outbound network policy for your application runtime.
These two features live together because they both define part of the application boundary:
- `file` determines how uploaded bytes become part of your data model
- `network` determines which external systems your server-side code can reach
The file side is usually the more conceptually important one, because many stacks accidentally turn storage into a second authorization system. Kizaki’s model is intentionally the opposite: a file belongs to an entity, and the entity remains the source of truth.
File Fields
The Core Model
A file is not a bucket path with some extra helpers around it. In Kizaki, a file is a field on an entity.
```
entity Attachment {
  name: string,
  file: file,
  ownerId: __User.id,
  @grant read, write, delete where resource.ownerId == principal.id
}
```

That one decision drives the whole system:
- authorization comes from the entity policy
- lifecycle comes from the entity lifecycle
- namespace scoping comes from the entity namespace
- cleanup happens because the entity field changed or disappeared
The object store is an implementation detail. In configured environments today, the platform uses S3-style presigned URLs and metadata storage behind the scenes, but the product model is still “this entity has a file field.”
Declaration Patterns
Use a file field when one entity owns one uploaded file.
```
entity UserProfile {
  displayName: string,
  avatar: file?,
  userId: __User.id,
  @grant read to *
  @grant write where resource.userId == principal.id
}
```

That is the right shape for:
- profile avatars
- document bodies
- exported reports
- one-off attachments
One File Per Field
A single file field holds one file.
If you need many files, do not model that as `file[]`. Arrays of `file` are not supported; the compiler explicitly rejects them. Use a separate entity instead.
```
entity TaskAttachment {
  taskId: Task.id @onDelete(cascade),
  file: file,
  uploadedBy: __User.id,
  label: string?,
  @grant read to * via TeamMembership
    where TeamMembership.teamId == resource.task.teamId
  @grant insert where resource.task.createdBy == principal.id
  @grant delete where resource.uploadedBy == principal.id
}
```

That pattern is important because a collection of files usually needs its own identity and lifecycle:
- ordering
- labels
- upload audit data
- version history
- per-file deletion
If it needs those things, it should be an entity.
What The Runtime Stores
The current runtime stores file metadata as JSONB on the owning row and keeps pending uploads in a system table while the upload is in flight.
At a high level, the metadata includes:
- storage key
- file size
- MIME type
- original filename
- upload timestamp
You should think of that metadata as internal runtime state rather than a hand-authored JSON blob. Application code works with the typed file field, not with raw JSONB internals.
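For intuition only, the stored metadata might look something like the following JSON sketch. The field names here are illustrative assumptions, not the runtime's actual internal keys:

```json
{
  "storageKey": "attachments/7f3a/report.pdf",
  "size": 482133,
  "mimeType": "application/pdf",
  "originalFilename": "report.pdf",
  "uploadedAt": "2024-01-15T09:30:00Z"
}
```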
The Upload Lifecycle
The file upload path is a three-step handshake in the current runtime:
- your app asks the platform for an upload URL
- the client uploads directly to object storage
- your app confirms the upload, and the platform writes metadata onto the entity field
That separation is the key design choice. File bytes do not travel through your application server during the normal browser upload path.
Step 1: Request Upload
The platform issues a principal-bound upload token plus a presigned upload URL. The token is associated with:
- the entity type
- the file field
- the expected file info
- the principal that initiated the upload
The important property is that upload permission is checked before the upload begins, and the pending upload is bound to that principal rather than becoming a free-floating object-store write.
Step 2: Direct Upload
The client uploads the bytes directly to the backing store using the presigned URL returned by the platform.
This matters for both performance and architecture:
- large files do not bloat the application server path
- your server code does not need raw bucket credentials
- upload throughput is constrained by storage infrastructure rather than your function runtime
Step 3: Confirm Upload
After the client finishes uploading, the platform confirms the upload and turns it into entity state.
This is the point where the runtime:
- verifies the pending upload token
- rechecks that the principal is still allowed to complete the write
- validates the uploaded object metadata
- writes the file metadata JSONB onto the entity row
That last point is why the confirmation step matters. Until confirmation succeeds, the entity has not actually adopted the file.
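The three steps above can be sketched as a toy in-memory simulation. Everything here, including `requestUpload`, `confirmUpload`, and the pending-upload shape, is an illustrative assumption rather than the platform's real API; the point is the control flow, especially the recheck at confirmation time.

```typescript
// Toy model of the three-step upload handshake.
type Principal = { id: string };
type Entity = { id: string; ownerId: string; file?: { key: string; size: number } };

const entities = new Map<string, Entity>();
const objectStore = new Map<string, Uint8Array>(); // mock bucket
const pending = new Map<string, { entityId: string; principalId: string; key: string }>();

// Stand-in for the entity write policy.
function canWrite(p: Principal, e: Entity): boolean {
  return e.ownerId === p.id;
}

// Step 1: policy is checked up front; the pending upload is bound to the principal.
function requestUpload(p: Principal, entityId: string): string {
  const e = entities.get(entityId);
  if (!e || !canWrite(p, e)) throw new Error("forbidden");
  const token = `tok-${pending.size}`;
  pending.set(token, { entityId, principalId: p.id, key: `obj-${token}` });
  return token; // stands in for the upload token plus presigned URL
}

// Step 2: bytes go straight to the object store, not through the app server.
function directUpload(token: string, bytes: Uint8Array): void {
  const pu = pending.get(token);
  if (!pu) throw new Error("unknown upload");
  objectStore.set(pu.key, bytes);
}

// Step 3: recheck the principal, then adopt the file as entity state.
function confirmUpload(p: Principal, token: string): void {
  const pu = pending.get(token);
  if (!pu || pu.principalId !== p.id) throw new Error("forbidden");
  const e = entities.get(pu.entityId);
  if (!e || !canWrite(p, e)) throw new Error("forbidden"); // policy may have changed mid-flight
  const bytes = objectStore.get(pu.key);
  if (!bytes) throw new Error("nothing uploaded");
  e.file = { key: pu.key, size: bytes.length }; // metadata lands on the row
  pending.delete(token);
}
```

Until `confirmUpload` runs, the uploaded object exists only as pending state and the entity row is unchanged, which mirrors the "the entity has not actually adopted the file" rule above.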
Download Flow
The download path is much simpler conceptually:
- the principal reads an entity row they are allowed to see
- the platform rechecks file-read access when generating the download URL
- the client receives a presigned download URL
The current runtime uses presigned download URLs for configured apps and a mock provider in dev.
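A minimal sketch of that download path, assuming hypothetical names (`downloadUrl`, `presign`) and an ownership check standing in for the real entity policy:

```typescript
// Toy model of the download path: read access is rechecked when the URL is generated.
type Principal = { id: string };
type Row = { ownerId: string; fileKey?: string };

// Stand-in for an S3-style presigned URL with a short expiry.
function presign(key: string): string {
  return `https://storage.example/${key}?expires=300`;
}

function downloadUrl(p: Principal, row: Row): string {
  if (row.ownerId !== p.id) throw new Error("forbidden"); // entity read policy
  if (!row.fileKey) throw new Error("no file on this row");
  return presign(row.fileKey); // policy passed: hand out the short-lived URL
}
```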
Authorization
File Authorization Is Entity Authorization
This is the core rule: there is no second authorization system for file storage.
If a user can read the entity field, they can obtain the file metadata and download URL. If they cannot read the field, they do not get the file. If they can update the entity field, they can replace the file. If they can delete the entity, the associated file lifecycle follows that deletion.
That means the normal policy patterns all work naturally:
- ownership
- role-based access
- shared access through `via`
- public access
- field-level grants
Upload And Replace Authorization
Uploading or replacing a file follows insert/update authorization on the owning entity.
That means:
- a user who cannot write the row cannot upload or replace the file
- a field-level write restriction can narrow which file fields are writable
- confirmation re-enters the normal policy path instead of bypassing it
Read Authorization
Downloading a file follows read authorization on the entity and field.
This also composes with field-level grants. If a role can read some fields on the entity but not the file field, the file is simply not part of what that principal receives.
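This field-filtering behavior can be sketched as a toy simulation. The role names and the grant table are illustrative assumptions; the real mechanism is the schema's field-level grants:

```typescript
// Toy model of field-level read grants: fields the role cannot read are
// simply omitted from what the principal receives.
type Role = "viewer" | "editor";

const fieldGrants: Record<string, Role[]> = {
  name: ["viewer", "editor"],
  file: ["editor"], // only editors may read the file field
};

function visibleFields(role: Role, row: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [field, value] of Object.entries(row)) {
    if (fieldGrants[field]?.includes(role)) out[field] = value;
  }
  return out;
}
```

Under this model a viewer never even sees the file metadata, so there is no download URL to leak.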
Delete Authorization
There is no independent “delete file” permission surface. Files are deleted because:
- a nullable file field is cleared
- a file field is replaced
- the owning entity is deleted
- a cascading delete removes the owning row
That keeps file lifecycle attached to data lifecycle instead of inventing another set of destructive permissions to reason about.
Lifecycle And Cleanup
Files are not immortal storage objects. They are runtime assets owned by rows.
Replacement
When a file field is replaced, the runtime writes the new metadata and cleans up the old object. You should think of replacement as “this field now points at a different file,” not as accumulating historical file objects forever.
Entity Deletion
When the owning row is deleted, the associated file object is deleted too.
Cascade Deletion
If file-owning rows are deleted through normal FK cascade behavior, file cleanup follows the cascade. That is exactly why “one file per entity row” composes well with the rest of the schema model.
Orphan Handling
The runtime also tracks pending uploads separately in __file_pending. That exists specifically because uploads are multi-step operations. A file can be uploaded but not yet adopted by a row. Pending upload tracking is what lets the platform treat those as controlled in-flight state rather than permanent orphaned objects.
Namespaces
File fields on namespaced entities follow the same namespace rules as all other entity fields.
This matters because multi-tenant file security should not be a special case. If an entity is tenant-scoped, its files are tenant-scoped. Cross-tenant file access therefore requires the same kind of explicit elevated path as cross-tenant data access in general.
Quotas And Operational Limits
Files also interact naturally with the quota story:
- uploads consume storage
- downloads consume bandwidth
- pending uploads need cleanup discipline
The important mental model is that files are part of the application resource model, not a detached external system that escapes your quotas and operational accounting.
Current Runtime Shape
The current repo truth for file storage is:
- metadata-backed `file` fields
- JSONB file metadata on the owning row
- a three-step upload handshake
- principal-bound pending uploads in `__file_pending`
- presigned upload and download URLs
- MIME validation in the runtime
- entity-policy rechecks on confirm and download
- mock provider in dev, S3 presigner in configured environments
That means the infrastructure path is real, but some higher-level ergonomics and public annotation surfaces are still settling. The part to build your mental model around today is the ownership and lifecycle model, not the exact future helper API shape.
Recommended Modeling Pattern
- use `file` when one row owns one uploaded file
- use a separate attachment entity for collections of files
- keep all authorization on the owning entity, not in an imagined storage policy layer
- treat upload confirmation as the moment the entity truly adopts the file
- let entity deletion and cascade deletion drive file cleanup
Network Policy
`network` controls which external services server-side code is allowed to reach.

```
network {
  allow: [
    "api.stripe.com",
    "hooks.slack.com",
  ],
  dynamic: false,
}
```

Use `network` when server-side code needs outbound access to external services. This keeps network access explicit at the schema level instead of leaving it as an accidental side effect of whatever libraries happen to be imported.
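One way to picture enforcement of an allowlist like the one above is a request-time host check. This is a sketch under assumptions, not the platform's actual enforcement mechanism; `checkOutbound` is a hypothetical name:

```typescript
// Toy model of outbound network policy: every outgoing request's host must
// appear in the declared allowlist.
const allow = new Set(["api.stripe.com", "hooks.slack.com"]);

function checkOutbound(url: string): void {
  const host = new URL(url).hostname;
  if (!allow.has(host)) {
    throw new Error(`outbound request to ${host} is not in the network allowlist`);
  }
}
```

With `dynamic: false`, the allowlist is closed: anything not declared in the schema is rejected rather than logged or prompted.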
The reason files and network appear together in the reference is that both define what crosses the system boundary:
- uploaded bytes entering the app
- outbound requests leaving the app
Related guide: File Storage