SQL Database Schema Management
Managing SQL database schemas with Celerity
Celerity makes database schemas a first-class concern, bridging the gap between development and data teams by providing a single source of truth for database structure. Instead of managing migrations separately from your application infrastructure, Celerity integrates schema management directly into the development and deployment lifecycle.
NoSQL Datastores
This page covers schema management for SQL databases (celerity/sqlDatabase). For NoSQL data stores (celerity/datastore), see NoSQL Datastore Schema Management.
Feature Availability
- ✅ Available in v0 - Features currently supported
- 🔄 Planned for v0 - Features coming in future v0 evolution
- 🚀 Planned for v1 - Features coming in v1 release
How It Works
Schema definitions are declarative YAML files that describe the desired state of your database. Celerity's migration engine computes the difference between the current state and the desired state, then generates an imperative sequence of SQL statements to transition the database safely.
Schema YAML (desired state) ──► Migration Engine ──► DDL Statements ──► Database
                                       ▲
                                       │
                             Current state from
                             previous deployment

This approach gives you the benefits of declarative schema definition (readable, diffable, version-controlled) with the safety of imperative migrations (ordered operations, data-aware transitions, migration scripts for complex cases).
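As a rough illustration of the diff step, here is a minimal Python sketch (illustrative only, not the actual migration engine) that compares a current and a desired column map and emits `ADD COLUMN` statements for anything new:

```python
def diff_columns(table, current, desired):
    """Emit ADD COLUMN DDL for columns present in the desired state
    but missing from the current state (additions only -- the real
    engine also handles drops, type changes, indexes, and more)."""
    statements = []
    for name, spec in desired.items():
        if name not in current:
            default = f" DEFAULT {spec['default']}" if "default" in spec else ""
            nullability = "" if spec.get("nullable", True) else " NOT NULL"
            statements.append(
                f"ALTER TABLE {table} ADD COLUMN {name} "
                f"{spec['type']}{default}{nullability};"
            )
    return statements

current = {"id": {"type": "uuid"}}
desired = {
    "id": {"type": "uuid"},
    "tier": {"type": "varchar(20)", "nullable": False, "default": "'free'"},
}
print(diff_columns("customers", current, desired))
# → ["ALTER TABLE customers ADD COLUMN tier varchar(20) DEFAULT 'free' NOT NULL;"]
```

The real engine additionally consults the recorded state from the previous deployment rather than diffing two YAML files directly.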
Schema Definition Format
Project File Structure
✅ Available in v0
A typical Celerity project with schema management follows this structure:
my-app/
├── .celerity/ # Generated only (merged blueprint, compose, logs)
├── app.blueprint.yaml # Main blueprint — references schema files
├── app.deploy.jsonc # Deploy target config
├── config/
│ ├── local/ # Plaintext app config for local development
│ └── test/ # Plaintext app config for testing
├── secrets/
│ ├── local/ # Secrets for local development
│ └── test/ # Secrets for testing
├── seed/
│ ├── local/ # Seed data for local development
│ └── test/ # Seed data for testing
├── schema-contracts.yaml # Data team dependency contracts (optional)
├── schemas/
│ ├── orders.yaml # SQL schema for ordersDb resource
│ ├── analytics.yaml # SQL schema for analyticsDb resource
│ └── user-store.yaml # NoSQL schema for userStore resource (if applicable)
├── sql/
│ └── orders-db/ # Migration SQL (per SQL database resource)
│ ├── V001__add_audit_trigger.up.sql
│ ├── V001__add_audit_trigger.down.sql
│ ├── V002__add_gin_index_on_line_items.up.sql
│ └── V002__add_gin_index_on_line_items.down.sql
├── scripts/
│ └── user-store/ # Escape hatch data scripts (per NoSQL datastore, if applicable)
│ └── V001__backfill_status.py
├── src/
│ └── ...
└── generated/ # Optional: codegen output
    └── ...

SQL and NoSQL schemas coexist
A single project can contain both SQL database and NoSQL datastore schema files. They share the same schemas/ directory and the same schema-contracts.yaml file. See NoSQL Datastore Schema Management for the NoSQL schema format.
The blueprint references schema files via the schemaPath field on celerity/sqlDatabase resources:
version: 2025-11-02
transform: celerity-2026-02-27-draft
resources:
  ordersDb:
    type: "celerity/sqlDatabase"
    metadata:
      displayName: "Orders Database"
      labels:
        application: "orders"
    spec:
      engine: "postgres"
      name: "orders"
      schemaPath: "./schemas/orders.yaml"
      migrationsPath: "./sql/orders-db"

Schema YAML Format
✅ Available in v0
Schema files define the desired state of a database using engine-native column types. One schema file per celerity/sqlDatabase resource. The example below uses PostgreSQL types and functions; a MySQL schema would use MySQL-native equivalents (e.g. char(36) instead of uuid, json instead of jsonb, datetime instead of timestamptz, uuid() instead of gen_random_uuid()).
# schemas/orders.yaml
tables:
  customers:
    description: "Customer accounts. One row per registered customer."
    owner: "orders-team"
    tags: ["pii", "core-entity"]
    columns:
      id:
        type: "uuid"
        primaryKey: true
        default: "gen_random_uuid()"
        description: "Unique customer identifier"
      email:
        type: "varchar(255)"
        nullable: false
        unique: true
        description: "Customer email. Used for login and notifications."
        classification: "pii"
      name:
        type: "varchar(255)"
        nullable: false
        description: "Display name"
        classification: "pii"
      tier:
        type: "varchar(20)"
        nullable: false
        default: "'free'"
        description: "Subscription tier. One of: free, pro, enterprise."
      created_at:
        type: "timestamptz"
        nullable: false
        default: "now()"
        description: "Account creation timestamp (UTC)"
    indexes:
      - name: "idx_customers_email"
        columns: ["email"]
        unique: true
      - name: "idx_customers_tier"
        columns: ["tier"]
  orders:
    description: "Customer orders. Immutable after creation — updates go to order_events."
    owner: "orders-team"
    tags: ["financial", "core-entity"]
    columns:
      id:
        type: "uuid"
        primaryKey: true
        default: "gen_random_uuid()"
        description: "Unique order identifier"
      customer_id:
        type: "uuid"
        nullable: false
        description: "Purchasing customer"
        references:
          table: "customers"
          column: "id"
          onDelete: "cascade"
      status:
        type: "varchar(50)"
        nullable: false
        default: "'pending'"
        description: "Order lifecycle status: pending → confirmed → shipped → delivered | cancelled"
      total_cents:
        type: "integer"
        nullable: false
        description: "Order total in USD cents. Always >= 0."
        tags: ["financial-metric"]
      line_items:
        type: "jsonb"
        nullable: false
        default: "'[]'::jsonb"
        description: "Array of {product_id, quantity, unit_price_cents}"
      created_at:
        type: "timestamptz"
        nullable: false
        default: "now()"
      updated_at:
        type: "timestamptz"
        nullable: false
        default: "now()"
    indexes:
      - name: "idx_orders_customer"
        columns: ["customer_id"]
      - name: "idx_orders_status_created"
        columns: ["status", "created_at"]
      - name: "idx_orders_line_items"
        columns: ["line_items"]
        type: "gin"
    constraints:
      - type: "check"
        name: "chk_orders_total_positive"
        expression: "total_cents >= 0"
extensions:
  - "uuid-ossp"

Column Types
Column types are engine-native strings with no abstraction layer across PostgreSQL and MySQL. The engine field in the blueprint locks the dialect, so you use the exact types your database supports.
For PostgreSQL, common types include: varchar(n), text, integer, bigint, boolean, uuid, jsonb, timestamptz, numeric(p,s), bytea, interval, cidr, inet.
For MySQL, common types include: varchar(n), text, int, bigint, boolean, char(36), json, datetime, timestamp, decimal(p,s), blob, enum(...), set(...). 🚀 MySQL support is planned for v1.
Indexes
Indexes are defined at the table level with a name, columns list and optional type and unique fields.
| Field | Type | Description |
|---|---|---|
| name | string | Index name (must be unique within the database) |
| columns | array[string] | Columns included in the index |
| type | string | Index type. Defaults to btree. PostgreSQL supports btree, gin, gist, hash. MySQL supports btree, hash, fulltext, spatial. |
| unique | boolean | Whether the index enforces uniqueness. Defaults to false. |
Foreign Keys
Foreign keys are defined inline on columns using the references field:
customer_id:
  type: "uuid"
  nullable: false
  references:
    table: "customers"
    column: "id"
    onDelete: "cascade" # cascade | restrict | set null | set default | no action

Constraints
Table-level constraints beyond foreign keys:
constraints:
  - type: "check"
    name: "chk_orders_total_positive"
    expression: "total_cents >= 0"

Extensions
Extensions that should be enabled in the database. This is primarily a PostgreSQL feature. PostgreSQL supports extensions like uuid-ossp, pg_trgm, pgcrypto, etc. MySQL does not have an equivalent extension system; this field is ignored for MySQL databases.
# PostgreSQL example
extensions:
- "uuid-ossp"
  - "pg_trgm"

Rich Metadata
The following fields have no effect on DDL. They exist for documentation, data governance and tooling:
| Field | Applies to | Description |
|---|---|---|
| description | Tables, Columns | Human-readable description of the table or column |
| owner | Tables | Team or individual that owns the table |
| tags | Tables, Columns | Arbitrary tags for categorisation (e.g. pii, financial, core-entity) |
| classification | Columns | Data classification label (e.g. pii, sensitive, public) |
These fields make the schema YAML self-documenting for data team consumption. They are included in schema exports and used by schema contracts for dependency tracking.
Migration Engine
✅ Available in v0 (PostgreSQL) · 🚀 MySQL support planned for v1
Declarative State, Imperative Transitions
The schema YAML is declarative: it describes the desired end state. The migration engine computes the imperative transition from current state to desired state as an ordered sequence of DDL statements.
Example: making a column NOT NULL with a default
Schema change:
# Before
status:
  type: "varchar(50)"
  nullable: true

# After
status:
  type: "varchar(50)"
  nullable: false
  default: "'active'"

Generated migration plan:
-- Step 1: Set default for new rows
ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'active';
-- Step 2: Backfill existing NULL values
UPDATE orders SET status = 'active' WHERE status IS NULL;
-- Step 3: Apply NOT NULL constraint
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;

The migration engine knows common transition patterns and generates the correct imperative sequence automatically.
Transition Patterns
| Transition | Generated Steps |
|---|---|
| Add nullable column | ALTER TABLE ADD COLUMN |
| Add NOT NULL column with default | ALTER TABLE ADD COLUMN ... DEFAULT ... NOT NULL |
| Make column NOT NULL (was nullable) | UPDATE SET default WHERE NULL → ALTER SET NOT NULL |
| Make column nullable (was NOT NULL) | ALTER DROP NOT NULL |
| Change column type (compatible) | ALTER COLUMN TYPE ... USING (PostgreSQL) / MODIFY COLUMN (MySQL) |
| Change column type (incompatible) | Warning: requires migration script |
| Add index | CREATE INDEX CONCURRENTLY (PostgreSQL) / CREATE INDEX (MySQL) |
| Drop index | DROP INDEX CONCURRENTLY (PostgreSQL) / DROP INDEX (MySQL) |
| Add foreign key | ALTER TABLE ADD CONSTRAINT ... FOREIGN KEY |
| Add table | CREATE TABLE |
| Drop table | Requires celerity.sqlDatabase.allowDestructive: true |
| Drop column | Requires celerity.sqlDatabase.allowDestructive: true |
| Rename column | Warning: requires migration script (ambiguous intent) |
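The data-aware rows in the table above expand mechanically into ordered steps. A hedged sketch of the "make column NOT NULL (was nullable)" expansion (hypothetical helper name, not the engine's real API):

```python
def plan_set_not_null(table, column, default):
    """Expand the 'make column NOT NULL (was nullable)' transition
    into the ordered steps from the transition patterns table:
    set default, backfill NULLs, then apply the constraint."""
    return [
        f"ALTER TABLE {table} ALTER COLUMN {column} SET DEFAULT {default};",
        f"UPDATE {table} SET {column} = {default} WHERE {column} IS NULL;",
        f"ALTER TABLE {table} ALTER COLUMN {column} SET NOT NULL;",
    ]

for stmt in plan_set_not_null("orders", "status", "'active'"):
    print(stmt)
```

The ordering matters: applying `SET NOT NULL` before the backfill would fail on any existing NULL rows.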
Migration SQL
✅ Available in v0
Versioned SQL migration files for structural database operations that cannot be expressed in the schema YAML. These live in the directory specified by migrationsPath on the celerity/sqlDatabase resource. Each migration has separate up (apply) and down (rollback) files.
Migration scripts are for DDL and structural changes only, not for data backfills or data transformations. See Data Migrations for how data changes are handled.
File Naming Convention
V<number>__<description>.up.sql    # Apply migration
V<number>__<description>.down.sql  # Rollback migration

Examples:

- V001__add_audit_trigger.up.sql / V001__add_audit_trigger.down.sql
- V002__add_gin_index_on_details.up.sql / V002__add_gin_index_on_details.down.sql
- V003__create_notify_function.up.sql / V003__create_notify_function.down.sql
The .down.sql file is optional but recommended; it enables clean rollback of the migration. If only the .up.sql file is present, the migration can be applied but not rolled back.
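The naming convention is regular enough to parse with a single pattern. A small sketch (hypothetical helper, assuming the convention described above):

```python
import re

# Matches V<number>__<description>.(up|down).sql
MIGRATION_RE = re.compile(
    r"^V(?P<version>\d+)__(?P<description>.+)\.(?P<direction>up|down)\.sql$"
)

def parse_migration_filename(filename):
    """Parse a migration filename into (version, description, direction).
    Returns None for files that don't follow the convention."""
    m = MIGRATION_RE.match(filename)
    if not m:
        return None
    return (
        int(m.group("version")),          # "001" -> 1
        m.group("description").replace("_", " "),
        m.group("direction"),
    )

print(parse_migration_filename("V001__add_audit_trigger.up.sql"))
# → (1, 'add audit trigger', 'up')
```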
Use Cases
Migration scripts are for structural DDL that the schema YAML cannot express:
- Custom functions, stored procedures or triggers
- Specialised index types beyond what the schema YAML supports (e.g. partial indexes, expression indexes)
- Extension setup beyond simple CREATE EXTENSION (e.g. extension-specific configuration)
- Table partitioning
- Row-level security policies
- Custom types or domains
Execution Model
- Declarative schema DDL (generated from the schema YAML) runs first: both up (CREATE) and down (DROP) statements are generated from the schema definition
- Migration scripts run after schema DDL, in version order
- Each migration version is tracked in a celerity_schema_migrations table and runs only once, so re-running dev run or deploy skips already-applied versions
- Scripts are embedded into the resource spec by the transformer, so changes to scripts are detected during deployment
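The skip-already-applied behaviour can be pictured with a small sketch (hypothetical helper names; the real engine reads applied versions from the celerity_schema_migrations table):

```python
def pending_migrations(available, applied_versions):
    """Filter migration files down to the ones not yet recorded in
    the tracking table, in version order. `available` maps
    version number -> filename."""
    return [
        filename
        for version, filename in sorted(available.items())
        if version not in applied_versions
    ]

available = {
    1: "V001__add_audit_trigger.up.sql",
    2: "V002__add_gin_index_on_line_items.up.sql",
    3: "V003__create_notify_function.up.sql",
}
applied = {1}  # already recorded from a previous run
print(pending_migrations(available, applied))
# → ['V002__add_gin_index_on_line_items.up.sql', 'V003__create_notify_function.up.sql']
```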
Migration Tracking
Applied migrations are tracked in a celerity_schema_migrations table that is automatically created in each managed database:
| Column | Type | Description |
|---|---|---|
| version | integer | Migration version number (from filename) |
| description | text | Migration description (from filename) |
| applied_at | timestamptz | When the migration was applied |
This table ensures migrations are idempotent. Running celerity dev run or celerity deploy multiple times will only apply migrations that haven't been applied yet. During rollback, version records are removed as their corresponding down scripts are executed.
Schema DDL: Up and Down
The schema YAML declarative definitions generate both up and down DDL statements:
- Up statements (create): Extensions → Tables → Foreign keys → Indexes → Check constraints
- Down statements (drop): Check constraints → Indexes → Foreign keys → Tables (reverse order) → Extensions (reverse order)
This means both the declarative schema and the migration scripts support full rollback capability.
Why Migration Scripts Run After Schema DDL
Schema DDL and migration scripts execute in two distinct phases: all schema DDL runs first, then all migration scripts run in version order. They are never interleaved. This is a deliberate design choice:
- The schema YAML is declarative end-state. There is no natural place to express "create this table, then run a script, then alter this column." Interleaving would require ordering hints in the YAML, which fights the declarative model.
- Each phase is independently reversible. Schema DDL has paired up/down statements; migrations have paired .up.sql/.down.sql files. Interleaving these would make rollback order ambiguous. If step 3 of 5 fails, which down scripts undo which up statements?
- Simpler mental model. Developers reason about "what does my schema look like?" separately from "what custom operations need to run?" Mixing the two creates intermediate states that are harder to review and debug.
Data Migrations
Data backfills, data transformations and other DML operations tied to schema evolution are handled separately from structural migration scripts. This separation keeps the migrationsPath directory focused on structural DDL and avoids muddying the rollback semantics; a down script for a data backfill is often meaningless.
Automatic Data Transitions
The migration engine handles common data-aware transitions automatically as part of its diff computation. When the schema YAML changes in ways that require data operations, the engine generates the correct imperative sequence:
| Schema Change | What the Engine Does |
|---|---|
| Make column NOT NULL (was nullable, has default) | UPDATE SET default WHERE NULL → ALTER SET NOT NULL |
| Add NOT NULL column with default | ALTER TABLE ADD COLUMN ... DEFAULT ... NOT NULL |
| Change column type (compatible cast) | ALTER COLUMN TYPE ... USING (PostgreSQL) |
These transitions happen automatically. No migration script or manual intervention is needed.
Complex Data Migrations
For data migrations that the engine cannot handle automatically (column splits, complex data transforms, populating new foreign key columns from external sources), use a two-phase deploy approach with application-level tooling between deploys:
Example: splitting a name column into first_name and last_name
Deploy 1 — add new columns, keep old column:
# schemas/orders.yaml — add new columns alongside the old one
name:
  type: "varchar(255)"
  nullable: false
first_name:
  type: "varchar(255)"
  nullable: true # Nullable for now — will be populated
last_name:
  type: "varchar(255)"
  nullable: true # Nullable for now — will be populated

After deploy 1: the new columns exist but are empty. The old column is still in use.
Between deploys — run the data migration:
Use application-level tooling to perform the data transform. This could be a one-off script, a CLI command, a CI job, or a post-deploy hook:
-- Run via psql, a script, or a post-deploy hook
UPDATE customers
SET first_name = split_part(name, ' ', 1),
    last_name = split_part(name, ' ', 2)
WHERE first_name IS NULL;

This is not a migration script; it runs outside the Celerity migration lifecycle, after the structural deploy succeeds and before the next deploy.
Deploy 2 — make new columns NOT NULL, drop old column:
# schemas/orders.yaml — finalize the split
first_name:
  type: "varchar(255)"
  nullable: false # Safe: all rows populated
last_name:
  type: "varchar(255)"
  nullable: false # Safe: all rows populated
# 'name' column removed (requires celerity.sqlDatabase.allowDestructive: true)

After deploy 2: the old column is dropped and the new columns are NOT NULL.
Why not put data migrations in migration scripts?
- Rollback semantics are unclear. What does a down script for UPDATE customers SET first_name = split_part(name, ...) do? You can't meaningfully reverse a data transform.
- Data migrations are environment-specific. Production might need batched updates with progress tracking; local dev doesn't need them at all (the database starts from scratch). Migration scripts run identically everywhere.
- Data migrations are transient. Once all environments have run the backfill, the script serves no purpose. Structural migrations (triggers, functions, indexes) are permanent parts of the schema.
- Separation enables flexibility. Application-level tooling can batch large updates, add progress logging, run dry-run modes, or integrate with monitoring, none of which fit in a SQL file.
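To illustrate the kind of batching application-level tooling enables, here is a hedged sketch that plans the name-split backfill as repeated bounded UPDATEs instead of one large statement (illustrative only; a real tool would execute each batch, commit, and log progress):

```python
def batched_backfill_statements(total_rows, batch_size):
    """Plan the name-split backfill as a series of bounded UPDATE
    statements, each touching at most `batch_size` rows that are
    still NULL. Pure planning -- no database connection here."""
    statements = []
    for _ in range(0, total_rows, batch_size):
        statements.append(
            "UPDATE customers "
            "SET first_name = split_part(name, ' ', 1), "
            "last_name = split_part(name, ' ', 2) "
            "WHERE id IN (SELECT id FROM customers "
            f"WHERE first_name IS NULL LIMIT {batch_size});"
        )
    return statements

plan = batched_backfill_statements(total_rows=2500, batch_size=1000)
print(len(plan))  # 3 batches cover 2500 rows
```

Bounded batches keep each transaction short, which avoids long row locks on a busy production table; none of this would fit naturally in a versioned .up.sql file.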
Safety
- Destructive changes blocked by default: Dropping tables or columns requires the celerity.sqlDatabase.allowDestructive: true annotation on the resource
- Migration scripts visible in diff: All pending scripts are shown in celerity schema diff output for review
- Changeset review: The deploy pipeline shows the full migration plan before execution
Deploy Pipeline Integration
✅ Available in v0
Schema management is integrated into the celerity deploy pipeline:
celerity deploy
│
├─ Phase 1: Infrastructure
│ Transformer converts celerity/sqlDatabase → Provider-specific database resources (e.g. AWS RDS)
│ Provider deploys database server infrastructure
│ Database verified as reachable
│
├─ Phase 2: Schema Migration
│ Connect to database directly
│ Generate imperative migration plan from diff
│ Display plan and prompt for confirmation
│ Execute migration (auto-generated DDL + migration scripts)
│ Write updated schema state
│
└─ Phase 3: Application
Deploy handler resources with database connection configuration
     Handlers start against the migrated database

Contract validation in v0
In v0, contract validation is not built into the deploy pipeline. Use celerity schema validate in your CI pipeline to catch contract violations before deployment. See Schema Contracts for details.
Deploy-time contract enforcement (blocking deploys and dispatching webhook notifications automatically) is projected as a paid tier feature for a future release after v1 (post-May 2027).
Local Development
✅ Available in v0
Running Locally
When celerity dev run starts an application with SQL database resources:
- A local PostgreSQL or MySQL container is started based on engine in the spec (only PostgreSQL is supported in v0)
- The full schema DDL (up statements) is applied from the schema YAML: extensions, tables, foreign keys, indexes, and constraints are created
- Migration scripts (.up.sql files) are executed in version order, skipping any already tracked in celerity_schema_migrations
- Connection environment variables are injected, pointing to the local container
- The runtime starts and handlers receive connection configuration
For persistent local development (opt-in): diff-based migration is used instead of full recreate, preserving data between restarts.
Testing
When celerity dev test runs an application with SQL database resources:
- An isolated database instance is created per test suite
- The schema is applied from scratch
- Test fixtures are loaded
- Tests run against the database with the full schema applied
- The database is torn down after tests complete
CLI Commands
✅ Available in v0
The celerity schema command group provides all the tools for working with database schemas. For full command reference including all flags and configuration options, see the CLI Reference: schema.
| Command | Description |
|---|---|
| celerity schema diff | Show migration plan and contract impact without applying |
| celerity schema apply | Apply migrations outside of a full deployment |
| celerity schema validate | Validate schema files, foreign keys, migration SQL, contracts and optionally codegen freshness |
| celerity schema export | Export schema as SQL DDL, markdown, JSON Schema or Mermaid ERD |
| celerity schema codegen | Generate type-safe code from schema definitions |
| celerity schema show | Show the currently deployed schema for a given environment |
| celerity schema history | Show schema change history for a given environment |
Example: celerity schema diff
The diff output displays the engine alongside the database name (e.g. postgres: orders or mysql: orders). The generated DDL uses engine-appropriate syntax.
$ celerity schema diff
ordersDb (postgres: orders):
Auto-generated:
[1] ALTER TABLE customers ADD COLUMN tier varchar(20) NOT NULL DEFAULT 'free';
[2] CREATE INDEX CONCURRENTLY idx_customers_tier ON customers (tier);
Migrations (pending):
[3] V003__add_status_audit_trigger.up.sql (new, not yet applied)
Contracts:
⛔ revenue-pipeline — orders table changed (blocking)
⚠ customer-analytics — customers table changed (notify)
Warnings:
- Column 'users.name' removed and 'users.first_name' + 'users.last_name' added.
This looks like a column split — use a two-phase deploy to migrate data.
See: https://celerityframework.io/docs/framework/applications/sql-database-schema-management#data-migrations
Auto-generated DDL will NOT drop 'users.name' until you confirm with:
celerity.sqlDatabase.allowDestructive: true
Apply with: celerity schema apply

Type Generation
✅ Available in v0 (TypeScript, Python)
Generate types from the schema YAML: TypeScript interfaces or Python Pydantic models representing table rows, plus column name constants. No ORM coupling. Combine with whatever library you prefer.
For SDK usage examples with generated types, see the Node.js SDK - SQL Database and Python SDK - SQL Database documentation.
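Since the generated output carries no ORM coupling, it slots into any data-access style. A minimal sketch of pairing generated column constants with a plain driver-style query (hypothetical constant names, assuming the Python codegen emits constants mirroring the TypeScript OrdersColumns example; re-declared inline here for self-containment):

```python
# Hypothetical generated constants (assumption: mirrors the
# TypeScript OrdersColumns output shown below).
ORDERS_TABLE = "orders"
ORDERS_COLUMNS = (
    "id", "customer_id", "status", "total_cents",
    "line_items", "created_at", "updated_at",
)

def select_by_status_query():
    """Build a parameterised SELECT from generated names, so a
    schema rename surfaces as a codegen diff rather than a runtime
    error in a hand-typed SQL string."""
    cols = ", ".join(ORDERS_COLUMNS)
    return f"SELECT {cols} FROM {ORDERS_TABLE} WHERE status = %s"

print(select_by_status_query())
```

The returned string can be passed to any driver that supports %s placeholders, with the status value supplied as a bound parameter.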
TypeScript
celerity schema codegen --lang typescript --out ./src/generated/

Generated output:
// generated/orders-db.ts — auto-generated, do not edit

/** customers table row */
export interface CustomersRow {
  id: string;
  email: string;
  name: string;
  tier: string;
  created_at: Date;
}

/** orders table row */
export interface OrdersRow {
  id: string;
  customer_id: string;
  status: string;
  total_cents: number;
  line_items: unknown;
  created_at: Date;
  updated_at: Date;
}

export const Tables = {
  customers: "customers",
  orders: "orders",
  order_events: "order_events",
} as const;

export const OrdersColumns = {
  id: "id",
  customer_id: "customer_id",
  status: "status",
  total_cents: "total_cents",
  line_items: "line_items",
  created_at: "created_at",
  updated_at: "updated_at",
} as const;

Python
celerity schema codegen --lang python --out ./src/generated/

Generated output:
# generated/orders_db.py — auto-generated, do not edit
from pydantic import BaseModel
from datetime import datetime
from typing import Any

class CustomersRow(BaseModel):
    id: str
    email: str
    name: str
    tier: str
    created_at: datetime

class OrdersRow(BaseModel):
    id: str
    customer_id: str
    status: str
    total_cents: int
    line_items: Any
    created_at: datetime
    updated_at: datetime

Go
🚀 Planned for v1 - Go type generation is planned for a future release.
Java
🚀 Planned for v1 - Java type generation (record classes) is planned for a future release.
C#
🚀 Planned for v1 - C# type generation (record types) is planned for a future release.
Schema Contracts
Schema contracts allow data teams to declare which tables they depend on, so they are automatically informed when schema changes affect them. Instead of manually tracking column-level dependencies, contracts operate at the table level: any structural change to a watched table triggers the contract's policy.
The migration engine already knows exactly what changed (columns added, dropped, type changes, etc.), so contracts don't need to duplicate that information. They simply declare: "I care about these tables. Tell me when they change."
Contracts File Format
✅ Available in v0
Data teams maintain a contracts file in the repository alongside the blueprint:
# schema-contracts.yaml
contracts:
  - name: "revenue-pipeline"
    owner: "data-team"
    dependencies:
      - database: "ordersDb"
        tables: ["orders"]
        policy: "blocking" # non-zero exit code if these tables change
  - name: "customer-analytics"
    owner: "data-team"
    dependencies:
      - database: "ordersDb"
        tables: ["customers", "orders"]
        policy: "notify" # warning output, zero exit code

Contracts can also include NoSQL datastore dependencies in the same file. Use the datastore key instead of database; no tables sub-key is needed since a datastore represents a single table:
# schema-contracts.yaml — combined SQL and NoSQL contracts
contracts:
  - name: "revenue-pipeline"
    owner: "data-team"
    dependencies:
      - database: "ordersDb"
        tables: ["orders"]
        policy: "blocking"
      - datastore: "userStore"
        policy: "notify"
  - name: "user-analytics"
    owner: "data-team"
    dependencies:
      - datastore: "userStore"
        policy: "blocking"

See NoSQL Datastore Schema Contracts for more details on the NoSQL contract format.
| Field | Description |
|---|---|
| name | Human-readable contract name |
| owner | Team or individual that owns the downstream dependency |
| dependencies[].database | Name of the celerity/sqlDatabase resource in the blueprint (for SQL databases) |
| dependencies[].tables | Tables this contract watches for changes (SQL databases only) |
| dependencies[].datastore | Name of the celerity/datastore resource in the blueprint (for NoSQL datastores) |
| dependencies[].policy | blocking (non-zero exit code) or notify (warning only, zero exit code) |
When a schema change touches any table listed in a contract, the diff output includes the full details of what changed: columns added, dropped, renamed, type changes, etc. The contract itself doesn't need to enumerate these; it just identifies which tables matter.
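Conceptually, contract evaluation is a simple intersection between watched tables and the tables a migration plan touches. A hedged sketch (hypothetical function, SQL dependencies only; not the CLI's actual implementation):

```python
def evaluate_contracts(contracts, changed_tables):
    """Given contracts and the set of (database, table) pairs a
    migration plan touches, return the names of blocking and
    notify contracts that are affected."""
    blocking, notify = [], []
    for contract in contracts:
        for dep in contract["dependencies"]:
            touched = any(
                (dep["database"], t) in changed_tables for t in dep["tables"]
            )
            if not touched:
                continue
            target = blocking if dep["policy"] == "blocking" else notify
            target.append(contract["name"])
    return blocking, notify

contracts = [
    {"name": "revenue-pipeline", "dependencies": [
        {"database": "ordersDb", "tables": ["orders"], "policy": "blocking"}]},
    {"name": "customer-analytics", "dependencies": [
        {"database": "ordersDb", "tables": ["customers", "orders"],
         "policy": "notify"}]},
]
changed = {("ordersDb", "orders")}  # e.g. a new column on orders
print(evaluate_contracts(contracts, changed))
# → (['revenue-pipeline'], ['customer-analytics'])
```

A blocking hit would then translate into a non-zero exit code from validation, while notify hits become warnings.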
Validation and CI Integration
✅ Available in v0
Contract checking is built into celerity schema validate. When a schema-contracts.yaml file exists and deployed state is available, validate evaluates contracts as part of its checks:
- blocking contracts affected: validate exits with a non-zero exit code, failing your CI pipeline
- notify contracts affected: validate prints warnings but does not affect the exit code
This means a single celerity schema validate step in CI covers everything: schema correctness, foreign key integrity, migration SQL parsing, and contract impact:
# Example CI step (GitHub Actions)
- name: Validate schema
  run: celerity schema validate

When a blocking contract fires, the CI output shows exactly what changed, giving the data team the information they need to review the PR. Use your platform's existing notification mechanisms (CODEOWNERS, required reviewers, Slack integrations on CI failure) to alert the right people.
Contract impact is also shown in celerity schema diff output, giving visibility into downstream effects during local development.
Deploy-Time Enforcement and Webhook Notifications
Future Capability
In v0, contract validation runs via celerity schema validate as a CI gate; it does not run automatically during celerity deploy.
Deploy-time contract enforcement (automatically blocking deploys when contracts are affected) and webhook notifications (dispatching to Slack, email or custom endpoints when contracts fire) are projected as paid tier features for a future release after v1 (post-May 2027). These are future projections, not committed features.
The paid Schema Service would add a notify field to contracts for webhook configuration, integrate contract checks directly into the deploy pipeline, and dispatch notifications automatically.
Programmatic Schema API
Future Capability
A programmatic REST API for querying deployed schemas, change history and contract status is projected as a paid tier feature for a future release after v1 (post-May 2027).
Planned endpoints include:
- GET /schemas/{instanceId}/{resourceName} — current deployed schema
- GET /schemas/{instanceId}/{resourceName}/history — change history
- GET /schemas/{instanceId}/{resourceName}/contracts — contract status
Pipeline tools would integrate with this API to auto-generate configs, sync data catalogs and trigger downstream updates.
Data Catalog Integrations
Future Capability
Auto-sync to data catalog services (AWS Glue Data Catalog, DataHub, Atlan, etc.) is projected as a paid tier feature for a future release after v1 (post-May 2027).
For Data Teams
The schema YAML files serve as always-accurate documentation because they are what actually drives the database. Data teams benefit from:
- description, owner, tags, classification fields on every table and column make schemas self-documenting
- Schema exports (celerity schema export --format markdown) generate human-readable documentation
- Git history of schema files (git log schemas/orders.yaml) provides full change history with PR review
- Contract definitions in schema-contracts.yaml let data teams declare and protect their dependencies
- Machine-readable exports (celerity schema export --format json-schema) integrate with pipeline tools
- ERD diagrams (celerity schema export --format mermaid) visualise table relationships