Open format · v0.1
A file that lives at the root of your project that expresses what production means for your app. AI tools and platforms read it and deploy accordingly. Your intent, in writing.
```markdown
---
production:
  runtime:
    engine: node
    version: "22"
    command: node server.js
  build:
    command: npm run build
    output: dist
  env:
    required:
      - DATABASE_URL
      - STRIPE_SECRET_KEY
  health:
    path: /healthz
  scale:
    min: 1
    max: 10
    memory: 512mb
---

# Your deployment notes here
```
The problem
Tools like Lovable, Replit, Bolt, and v0 have made it possible for anyone to ship software. But shipping means accepting the platform's mental model: its runtime, its scaling rules, its secrets, its networking. Move platforms and you start over.
`deploy.checks` and `health` put the rollback policy in the repo, not in a platform dashboard. Platforms that honour them are covered when things go wrong; platforms that ignore them are exposed.
A new engineer or AI agent reads production.md and knows what production looks like. No dashboard access required, no tribal knowledge, no Slack archaeology.
The prose section gives AI deploy agents the why behind the config. "The worker must not scale below 1 — the queue will back up." No structured field can carry that.
Specification · v0.1
All fields live under a top-level `production:` key in YAML front matter.
All fields are optional. They are intent declarations, not hard contracts: each platform implements them according to its own capabilities. Omitting a field means "use the platform default."
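In practice, "use the platform default" amounts to overlaying the declared fields onto the platform's own defaults. A minimal sketch of that merge in Python (the defaults shown are hypothetical, not part of the spec):

```python
def merge_intent(defaults: dict, declared: dict) -> dict:
    """Overlay declared production.md fields onto platform defaults.

    Any field the user omits falls through to the platform default.
    """
    merged = dict(defaults)
    for key, value in declared.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_intent(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical platform defaults; the user declares only scale.min.
defaults = {"scale": {"min": 0, "max": 5}, "health": {"path": "/health"}}
declared = {"scale": {"min": 1}}
print(merge_intent(defaults, declared))
# {'scale': {'min': 1, 'max': 5}, 'health': {'path': '/health'}}
```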
```yaml
runtime:
  engine: node        # node | bun | deno | python | go | static
  version: "22"
  command: bun start
```
```yaml
build:
  command: bun run build
  output: dist
  cache:
    - node_modules
    - .next/cache
```
```yaml
env:
  required:
    - DATABASE_URL
    - STRIPE_SECRET_KEY
  optional:
    - LOG_LEVEL
  public:
    - NEXT_PUBLIC_API_URL
```
```yaml
regions:
  primary: eu-west-1
  replicas:
    - us-east-1
  edge: true
```
```yaml
scale:
  min: 1
  max: 10
  concurrency: 100
  timeout: 30s
  memory: 512mb
```
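Values such as `30s` and `512mb` are human-readable strings, so implementations need to normalise them before use. A sketch of one way to do that (the accepted suffixes here are an assumption, not defined by the spec):

```python
import re

_DURATION = {"ms": 0.001, "s": 1.0, "m": 60.0, "h": 3600.0}
_MEMORY = {"kb": 1024, "mb": 1024**2, "gb": 1024**3}

def parse_duration(value: str) -> float:
    """'30s' -> 30.0 seconds, '5m' -> 300.0."""
    m = re.fullmatch(r"(\d+)(ms|s|m|h)", value.strip().lower())
    if not m:
        raise ValueError(f"bad duration: {value!r}")
    return int(m.group(1)) * _DURATION[m.group(2)]

def parse_memory(value: str) -> int:
    """'512mb' -> size in bytes."""
    m = re.fullmatch(r"(\d+)(kb|mb|gb)", value.strip().lower())
    if not m:
        raise ValueError(f"bad memory size: {value!r}")
    return int(m.group(1)) * _MEMORY[m.group(2)]

print(parse_duration("30s"))   # 30.0
print(parse_memory("512mb"))   # 536870912
```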
```yaml
health:
  path: /health
  interval: 10s
  timeout: 5s
  unhealthy_threshold: 3
```
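The usual reading of `unhealthy_threshold` is consecutive failures, not total: three failed probes in a row mark the instance unhealthy, and any success resets the streak. A sketch of that logic, where the `probe` callable stands in for an HTTP GET against `path` within `timeout`:

```python
def is_unhealthy(probe, checks: int, unhealthy_threshold: int = 3) -> bool:
    """Run `checks` probes; return True once `unhealthy_threshold`
    consecutive failures are seen. A success resets the streak."""
    failures = 0
    for _ in range(checks):
        if probe():          # e.g. GET /health, success within `timeout`
            failures = 0
        else:
            failures += 1
            if failures >= unhealthy_threshold:
                return True
    return False

# Fail, succeed, fail, fail, fail -> unhealthy on the third consecutive failure.
results = iter([False, True, False, False, False])
print(is_unhealthy(lambda: next(results), checks=5))  # True
```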
```yaml
domains:
  - host: myapp.com
    redirect_www: true
  - host: staging.myapp.com
    env: staging
```
```yaml
persistence:
  databases:
    - id: main
      engine: postgres
      migrations: bun run db:migrate
  cache:
    - id: sessions
      type: redis
```
```yaml
jobs:
  - id: send-digests
    command: bun run jobs/digest.ts
    schedule: "0 9 * * 1"
  - id: process-queue
    command: bun run worker
    trigger: queue
```
```yaml
observability:
  logs:
    level: info
    format: json
  metrics: true
  traces: true
  errors:
    provider: sentry
```
```yaml
deploy:
  strategy: blue-green   # rolling | blue-green | canary
  preview: true
  auto_promote: false
  checks:
    - bun test
    - bun run typecheck
```
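One way a platform might honour `deploy.checks` together with `auto_promote`: run every check, promote only if all pass and `auto_promote` is true, hold for manual promotion otherwise. A sketch, where the `run` callable abstracts actual command execution:

```python
import subprocess

def decide_deploy(checks, auto_promote, run=None):
    """Return 'promote', 'hold', or 'rollback' for a finished build.

    `run` executes one check command and returns True on success;
    by default it shells out, as a platform runner might.
    """
    if run is None:
        run = lambda cmd: subprocess.run(cmd, shell=True).returncode == 0
    if not all(run(cmd) for cmd in checks):
        return "rollback"          # a declared check failed
    return "promote" if auto_promote else "hold"

# Stubbed runner: both checks pass, but auto_promote is false.
print(decide_deploy(["bun test", "bun run typecheck"], auto_promote=False,
                    run=lambda cmd: True))  # hold
```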
Full example
A complete file for a Next.js app with a Postgres database, Redis cache, and a background worker.
```markdown
---
production:
  runtime:
    engine: bun
    version: "1"
    command: bun start
  build:
    command: bun run build
    output: .next
    cache:
      - node_modules
      - .next/cache
  env:
    required:
      - DATABASE_URL
      - RESEND_API_KEY
    optional:
      - LOG_LEVEL
    public:
      - NEXT_PUBLIC_APP_URL
  regions:
    primary: eu-west-1
    edge: true
  scale:
    min: 1
    max: 20
    concurrency: 50
    timeout: 30s
    memory: 1gb
  persistence:
    databases:
      - id: main
        engine: postgres
        version: "16"
        migrations: bun run db:migrate
    cache:
      - id: sessions
        type: redis
  jobs:
    - id: process-queue
      command: bun run worker
      trigger: queue
  health:
    path: /api/health
    interval: 15s
  deploy:
    strategy: blue-green
    preview: true
    auto_promote: false
    checks:
      - bun test
      - bun run typecheck
  observability:
    logs:
      level: info
      format: json
    traces: true
---

# Production notes

This app serves EU customers. Keep the primary region in eu-west-1.

The worker must not be scaled below 1; the queue will back up.

Blue-green deploys required; we cannot afford cold-start latency on rollout.
```
For platforms
Fields express user intent. Platforms implement them according to their own capabilities. Exact behaviour will differ; that is expected and fine. Here is why it is worth doing.
A platform that reads production.md can configure a deployment from the file on first push. Zero-config deploys for users who have it. A concrete DX win you can ship as a feature.
When deploy.checks and health are declared, rollbacks become the execution of stated user intent. Platforms are protected; the policy is in writing and in the repo.
AI-assisted deploy agents can read the prose section for the reasoning behind config values. The structured fields tell the platform what; the prose tells it why.
Implementation conventions
Parse the YAML front matter before starting any deploy pipeline. Treat fields as intent — where a field cannot be honoured, fall back gracefully and surface the reason in deploy logs.
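Splitting the file is simple: the YAML front matter sits between the first two `---` delimiters, and everything after the second is prose. A minimal sketch (a real implementation would hand the first part to a YAML parser such as PyYAML, an assumed dependency, not part of the spec):

```python
def split_production_md(text: str):
    """Return (front_matter_yaml, prose) from a production.md document."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return None, text          # no front matter: the whole file is prose
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            return "\n".join(lines[1:i]), "\n".join(lines[i + 1:]).strip()
    return None, text              # unterminated front matter

doc = "---\nproduction:\n  scale:\n    min: 1\n---\n# Notes\nKeep min at 1."
yaml_part, prose = split_production_md(doc)
print(yaml_part)  # production: / scale: / min: 1
print(prose)      # "# Notes" and the line after it
```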
Use sensible defaults for any omitted field. Never fail because a field is absent.
Warn, never silently ignore, when a declared field is not supported. Surface it in deploy logs.
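Both conventions together: fill in defaults for omitted fields, and emit a warning for any declared field the platform does not implement. A sketch, with a hypothetical supported-field set and hypothetical defaults:

```python
SUPPORTED = {"runtime", "build", "env", "scale", "health", "deploy"}  # hypothetical
DEFAULTS = {"scale": {"min": 1, "max": 1}}                            # hypothetical

def resolve(declared: dict):
    """Apply defaults for omitted fields; collect warnings for unsupported ones."""
    warnings = [
        f"field '{key}' is not supported on this platform and was ignored"
        for key in declared
        if key not in SUPPORTED
    ]
    config = {**DEFAULTS, **{k: v for k, v in declared.items() if k in SUPPORTED}}
    return config, warnings

config, warnings = resolve({"health": {"path": "/healthz"},
                            "regions": {"primary": "eu-west-1"}})
print(warnings)  # one warning about 'regions', surfaced in deploy logs
```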
Extend via namespaced keys rather than adding top-level fields. Use platform: { vercel: { ... } } for proprietary extensions.
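For example, a platform-specific setting stays under its namespace instead of becoming a new top-level field (the keys under `vercel:` here are illustrative only, not real platform options):

```yaml
production:
  runtime:
    engine: node
  platform:
    vercel:
      # proprietary extensions live under the platform's namespace
      edge_config: my-config   # hypothetical key, for illustration only
```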
Defer to Dockerfile when one is present. production.md is not a container spec; treat runtime as advisory when a Dockerfile exists.
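A sketch of that precedence check, assuming the platform can see the repo's file list (the `buildpack:` naming is illustrative):

```python
def build_strategy(repo_files: set, production: dict) -> str:
    """Prefer a Dockerfile when present; treat `runtime` as advisory."""
    if "Dockerfile" in repo_files:
        return "dockerfile"    # the container spec wins; runtime is advisory
    engine = production.get("runtime", {}).get("engine", "node")  # assumed default
    return f"buildpack:{engine}"

print(build_strategy({"Dockerfile", "package.json"}, {"runtime": {"engine": "bun"}}))
# dockerfile
print(build_strategy({"package.json"}, {"runtime": {"engine": "bun"}}))
# buildpack:bun
```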
Read the prose section. The freeform text beneath the front matter carries intent that structured fields cannot express. AI-assisted deploy pipelines should use it as context when making decisions.
Publish your supported fields. Document which fields you implement and how, so users know what to expect on your platform.
Open source
production.md is an open specification. Anyone can use it, implement it, and contribute improvements. The spec stays portable by design: no platform lock-in, no proprietary gatekeeping.
The specification text and examples are released under CC0 1.0.
Propose field updates, semantics, and implementation guidance in GitHub discussions and pull requests.
Maintained with community input, including contributions from prodwise.com.