AI-Powered Content Pipeline v2
Status: ADR-010 — Documentation, Release Notes, Changelogs & Content Strategy
Updated: February 2026

Vision
k0rdent publishes four categories of content — API docs, changelog/release notes, product guides, and blog posts — using AI to draft, humans to review, and automated quality gates to enforce standards. Everything ships publicly as part of our build-in-public strategy to establish market leadership in AI infrastructure. Non-negotiable: A human reviews every piece of published content. AI accelerates — it doesn’t replace editorial judgment.

- Vision
- Four Content Domains
- Building Blocks
- Tooling Stack
- Demo Script Workflow
- Publishing Cadence
- Launch Weeks
- “AI Building AI” Content Strategy
- Quality Standards
- Implementation Roadmap
- Open Questions
- Summary
Four Content Domains
The pipeline targets four content domains today, with more planned as the system matures. Each has its own trigger, source material, AI workflow, and human review requirements.

| # | Domain | Key Output | Auto-merge? |
|---|---|---|---|
| 1 | API Reference Documentation | Endpoint docs, code examples, migration guides | Additive changes only |
| 2 | Changelog & Release Notes | Technical changelog + customer release notes + Slack post | Never |
| 3 | Product Documentation & Guides | Feature walkthroughs, getting started guides, concept explainers | Never |
| 4 | Blog Posts & Public Communications | Draft posts, announcement posts, technical deep-dives | Never |
1. API Reference Documentation
Our Hono + Zod stack generates OpenAPI 3.1 specs. The pipeline lints, validates, enriches, and publishes automatically. AI produces:

- Human-readable endpoint descriptions
- Multi-language code examples (cURL, TypeScript, Python)
- Common usage patterns
- Migration guides for breaking changes

Source material:
- OpenAPI spec (auto-generated)
- Spec diffs between versions
- Route handler source code
- Existing published docs
Pipeline
2. Changelog & Release Notes
Two tiers from the same sources: a developer-facing technical changelog (Keep a Changelog format) and a customer-facing product release notes post (benefit-oriented) plus a Slack announcement. AI produces:- Structured changelog entries
- Narrative release notes organized by Atlas/Arc audience
- Concise Slack posts
Source material:

- Changesets (.changeset/*.md files)
- Conventional commits
- Merged PRs + labels
- Sprint demo scripts
- Optional user-provided highlights
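
A changeset file (the first source above) is a short markdown file whose YAML frontmatter maps packages to semver bumps; the package name and summary here are illustrative:

```md
---
"@k0rdent/api": minor
---

Add GPU pool management endpoints to the Atlas API.
```

Running the release turns these files into version bumps and raw changelog entries, which the generators then rewrite into narrative.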
Pipeline
3. Product Documentation & Guides
Highest editorial attention of any domain. AI drafts from internal material; humans shape the voice and structure. AI produces:

- Feature walkthroughs (from demo scripts)
- Getting started guides
- Troubleshooting docs
- Architecture overviews
- Concept explainers

Source material:
- Sprint demo scripts
- Internal design docs and ADRs
- System requirements
- API specs
- Meeting transcripts
- User-provided notes
Pipeline
4. Blog Posts & Public Communications
The most creative domain. AI scaffolds the structure and initial draft; the author provides voice, narrative, and editorial judgment. AI produces:

- Draft posts from outlines
- “Building in public” updates
- Technical deep-dives
- Announcement posts
- “How we use AI to build AI” content

Source material:
- Sprint work
- ADRs and engineering decisions
- Industry context
- Author outlines
- Meeting transcripts
Pipeline
Demo Script Pipeline
Building Blocks
Complex pipelines are built from simple, composable pieces. Each block does one thing and can be developed, tested, and used independently.

Sources (Input Blocks)
These extract and normalize content from wherever it lives. Each produces a standard SourceDocument that any pipeline can consume.
| Block | What It Does | Input | Output |
|---|---|---|---|
| changeset-reader | Reads .changeset/*.md files | .changeset/ directory | Structured changes with semver + summary |
| commit-parser | Parses conventional commits | Git log range | Grouped changes by type/scope |
| pr-collector | Fetches merged PRs from GitHub | Repo + date range | PR titles, descriptions, labels, linked issues |
| spec-exporter | Exports OpenAPI spec from Hono routes | App build | openapi.json |
| spec-differ | Diffs two OpenAPI specs | Old spec + new spec | Breaking/additive/deprecation changes |
| demo-script-parser | Extracts features from demo scripts | Demo script markdown | Structured feature sections + narratives |
| transcript-processor | Extracts decisions from meeting transcripts | Raw transcript text | Decisions, action items, key quotes |
| doc-indexer | Reads existing published docs | Mintlify content dir | Indexed doc sections for context |
| issue-collector | Pulls sprint issues from GitHub Project | Project board ID | Categorized issues for outline generation |
| user-input | Accepts freeform notes or context | Text from author | Prioritized source material |
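
As a hedged sketch of the shared SourceDocument contract, here is one possible shape plus a minimal changeset-reader style parser; the field names and parsing rules are assumptions, not the real types:

```typescript
// Hedged sketch: one possible SourceDocument shape and a minimal
// changeset-reader style parser. Field names are illustrative.

interface SourceDocument {
  kind: string; // e.g. "changeset", "commit", "pr"
  title: string;
  body: string;
  meta: Record<string, string>;
}

// A .changeset/*.md file is YAML-ish frontmatter (package -> bump type)
// followed by a human-written summary.
function parseChangeset(raw: string): SourceDocument {
  const match = raw.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) throw new Error("not a changeset file");
  const [, frontmatter, summary] = match;
  const meta: Record<string, string> = {};
  for (const line of frontmatter.split("\n")) {
    const m = line.match(/^"?([^":]+)"?:\s*(\w+)/);
    if (m) meta[m[1]] = m[2]; // package name -> patch | minor | major
  }
  const body = summary.trim();
  return { kind: "changeset", title: body.split("\n")[0], body, meta };
}
```

Because every source block emits the same shape, any generator can consume any mix of sources without special cases.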
Generators (Transform Blocks)
These take source documents + prompt templates and produce draft content via AI.

| Block | What It Does | Model | Sources It Consumes |
|---|---|---|---|
| changelog-generator | Produces Keep a Changelog entries | Haiku | changesets, commits, PRs |
| release-notes-generator | Produces benefit-oriented narrative | Sonnet | changesets, commits, PRs, demo script, user input |
| slack-announcement-generator | Produces concise Slack post | Haiku | release notes output |
| api-enricher | Adds descriptions + examples to endpoints | Sonnet | spec, spec-diff, route handler code |
| migration-guide-generator | Documents breaking changes | Sonnet | spec-diff, existing docs |
| feature-guide-generator | Transforms demo narration into user guide | Sonnet | demo script, API spec, related docs |
| blog-drafter | Produces structured blog post draft | Opus/Sonnet | varies by topic type |
| demo-outline-generator | Creates sprint demo outline from issues | Sonnet | sprint issues, previous demo scripts |
| demo-script-generator | Produces timed demo script from outline | Sonnet | outline, previous scripts as examples |
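
The deterministic half of a generator is prompt assembly. A minimal sketch of how changelog-generator might build its prompt before the model call; function and field names are assumptions:

```typescript
// Hedged sketch of changelog-generator's deterministic half: grouping
// parsed changes and assembling the prompt. The returned string would
// become the user message of an Anthropic Messages API call.

type Change = { type: "added" | "changed" | "fixed" | "removed"; summary: string };

function buildChangelogPrompt(version: string, changes: Change[]): string {
  // Group change summaries under their Keep a Changelog category.
  const groups = new Map<string, string[]>();
  for (const c of changes) {
    groups.set(c.type, [...(groups.get(c.type) ?? []), c.summary]);
  }
  const sections = [...groups.entries()]
    .map(([type, items]) => `${type}:\n${items.map((i) => `- ${i}`).join("\n")}`)
    .join("\n\n");
  return [
    `Write a Keep a Changelog entry for version ${version}.`,
    `Use only the changes listed below; do not invent features.`,
    "",
    sections,
  ].join("\n");
}
```

Keeping prompt assembly pure makes it easy to snapshot-test independently of the model.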
Quality Gates (Validation Blocks)
These check generated content before it reaches a human.

| Block | What It Checks | Tool |
|---|---|---|
| spectral-lint | OpenAPI spec validity + style rules | Spectral |
| vale-lint | Prose quality, terminology, banned words | Vale |
| mdx-validate | Valid MDX syntax, frontmatter complete | remark/rehype |
| link-check | No broken internal or external links | markdown-link-check |
| jargon-filter | No internal tech names in customer content | Custom rules |
| spec-conformance | Code examples match actual API schema | Custom validator |
| claim-grounding | Release note claims map to real commits | Custom checker |
| heading-hierarchy | No skipped heading levels | markdownlint |
| ai-slop-detector | Catches overused AI phrases | Vale custom rules |
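
Custom gates like jargon-filter can be small pure functions. A sketch; the banned list mirrors the Vale rules in Quality Standards, while the matching logic is an assumption:

```typescript
// Hedged sketch of the jargon-filter gate: flag internal tech names in
// customer-facing content. Matching logic is illustrative.

const BANNED_INTERNAL = ["Drizzle", "Trigger.dev", "RLS", "BetterAuth", "Hono", "Zod"];

function jargonViolations(text: string): string[] {
  // Word-boundary, case-insensitive match; escape the dot in Trigger.dev.
  return BANNED_INTERNAL.filter((term) =>
    new RegExp(`\\b${term.replace(".", "\\.")}\\b`, "i").test(text),
  );
}
```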
Orchestration (Workflow Blocks)
These compose sources, generators, and quality gates into end-to-end pipelines.

| Block | What It Orchestrates | Trigger |
|---|---|---|
| api-docs-pipeline | spec-exporter → spectral-lint → spec-differ → api-enricher → quality gates → PR | PR merge with API changes |
| release-pipeline | changeset-reader + commit-parser + pr-collector + demo-script-parser → changelog-generator + release-notes-generator + slack-announcement-generator → quality gates → PR | Sprint end / release tag |
| guide-pipeline | demo-script-parser + user-input → feature-guide-generator → quality gates → PR | Manual trigger or feature ship |
| blog-pipeline | Various sources → blog-drafter → quality gates → PR | Manual trigger |
| demo-pipeline | issue-collector → demo-outline-generator → (human edit) → demo-script-generator → (human edit) → final script | Sprint approaching end |
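
In code, these orchestrations reduce to composing the three block kinds: sources fan in, a generator transforms, gates pass or reject. A hedged sketch of the pattern, with illustrative types rather than the real pipeline API:

```typescript
// Hedged sketch of pipeline orchestration. Types and names are
// illustrative, not the real pipeline API.

type Doc = { title: string; body: string };
type Source = () => Doc[];
type Generator = (docs: Doc[]) => Doc;
type Gate = (doc: Doc) => string[]; // empty array means the gate passed

function runPipeline(sources: Source[], generate: Generator, gates: Gate[]): Doc {
  const inputs = sources.flatMap((s) => s());
  const draft = generate(inputs);
  const violations = gates.flatMap((g) => g(draft));
  if (violations.length > 0) {
    throw new Error(`quality gates failed: ${violations.join("; ")}`);
  }
  return draft; // in the real pipeline, this would open a PR for review
}
```

Because each stage is a plain function, any block can also run alone from a CLI script or a Trigger.dev task.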
How Blocks Compose
Each block is a function or script that can run independently.

Tooling Stack
What We Have
| Tool | Role | Status |
|---|---|---|
| Mintlify | Docs hosting, blog, API reference, AI search, llms.txt | ADR-010 selected |
| GitHub Actions | CI/CD triggers, quality gates | In use |
| Anthropic Claude API | Primary AI generation | Available |
| OpenAI API | Alternative generation, workflows, assistants | Available |
| Cursor | Interactive drafting, iteration | In use daily |
| Hono + Zod | OpenAPI 3.1 spec generation | In use |
| Conventional commits | Structured commit messages | Enforced |
| Changesets | Monorepo versioning + changelog | Adopting |
| GitHub Projects | Sprint tracking, issue management | In use |
| Trigger.dev | Durable workflow orchestration | In use |
What We Add
| Tool | Role | Why |
|---|---|---|
| Spectral | OpenAPI linting + style enforcement | Industry standard, custom rulesets, VS Code extension for real-time linting, OWASP security rules |
| Vale | Prose linting with custom k0rdent rules | Same concept as Spectral but for written content |
| oasdiff | OpenAPI spec diffing | Detects breaking vs additive changes between versions |
OpenAPI Quality with Spectral
Spectral lints and validates our OpenAPI spec against both the standard and our custom API style rules. This runs in CI on every PR that touches API routes, and locally via the VS Code extension.

Orchestration Options
We’re not limited to GitHub Actions. Pipelines can run as:

| Orchestrator | Best For | How |
|---|---|---|
| GitHub Actions | CI/CD triggers (PR merge, tag push) | Workflow YAML files |
| Trigger.dev tasks | Long-running generation, durable workflows | TypeScript task definitions |
| CLI scripts | Local development, manual runs, iteration | pnpm pipeline ... commands |
| Claude tool use | Interactive drafting in Cursor or Claude Desktop | MCP tools or direct API |
| OpenAI Assistants/Workflows | Alternative generation, specialized agents | Assistants API with file search |
| Custom services | Scheduled jobs, webhook handlers, Slack integrations | Hono API routes or standalone |
AI Model Selection
| Task | Model | Rationale |
|---|---|---|
| Technical changelog | Claude Haiku | Fast structured transformation |
| Slack announcements | Claude Haiku | Lightweight summarization |
| Release notes | Claude Sonnet | Speed + narrative quality |
| API doc enrichment | Claude Sonnet | Code comprehension |
| Feature guides | Claude Sonnet | Structured long-form |
| Architecture docs | Claude Opus | Complex synthesis |
| Blog post drafts | Claude Opus or Sonnet | Depends on depth |
| Demo outline from issues | Claude Sonnet | Understanding sprint context |
| Demo script generation | Claude Sonnet | Follows structured template |
| Quality evaluation | Claude Haiku | Fast pass/fail checks |
| Transcript preprocessing | Claude Haiku | Extract structure from noise |
| Alternative voice | OpenAI GPT-4o | Variety, second opinion |
Demo Script Workflow
The demo script has its own pipeline because it feeds everything else. This is our existing workflow, enhanced with AI at each step:

Step 1: Generate Outline
Manual path: Author writes short bullet points of what shipped and what to demo.

AI-assisted path: AI pulls issues from the GitHub Project board for the current sprint, categorizes by feature area, and produces a structured outline. The author cleans up, reorders, and adds emphasis.

Step 2: Generate Script
AI takes the outline + past sprint demo scripts as few-shot examples and generates a timed script following the template:

- Opening (25-30 sec): Context, what this sprint was about
- Primary features (60-90 sec each): Screen directions, narration, key moments
- Secondary features (20-30 sec each): Quick hits
- What’s next (30-45 sec): Next sprint priorities
- Closing (20-25 sec): Recap, reinforcement of core talking points
Step 3: Author Edit Pass (Iterative)
This is where human judgment matters most:

- Intro: Tune the opening message — what are the core talking points that frame everything? What narrative thread connects the demos?
- Feature sections: For each demo section, add/remove/update content. Ensure screen directions match actual UI. Adjust narration to sound natural when spoken.
- Conclusion: Quick recap that reinforces the original talking points. Brief look at what’s next. Thank you or closing messaging.
Step 4: Production Notes + Record
The final script includes a pre-recording checklist, key moments to nail, and recording tips. The author records the demo video.

Step 5: Script Feeds Other Pipelines
The finished demo script becomes high-quality source material for:

- Release notes pipeline: Demo narration is already benefit-oriented language
- Feature guide pipeline: “Here’s what I’m showing you” → “Here’s how you do it”
- Blog post pipeline: Sprint summary or feature deep-dive
Publishing Cadence
Internal (Team + Stakeholders)
| Content | Cadence | Channel | Audience |
|---|---|---|---|
| Sprint demo video | Bi-weekly (sprint end) | Slack + stakeholder meeting | Team, leadership, partners |
| Technical changelog | Every release | CHANGELOG.md in repo | Engineering team |
| Slack announcement | Bi-weekly | #k0rdent-updates channel | Broader org |
| Architecture decisions | As made | ADR in repo + Slack thread | Engineering team |
| Sprint retro notes | Bi-weekly | Internal docs | Team |
Public (Docs Site + Blog + Community)
| Content | Cadence | Channel | Audience |
|---|---|---|---|
| API reference | Continuous (on every API change) | docs.k0rdent.com/api | Developers |
| Product docs/guides | Sprint-aligned updates | docs.k0rdent.com/docs | Operators, customers |
| Product release notes | Bi-weekly | docs.k0rdent.com/blog | Everyone |
| Technical deep-dive | Monthly | docs.k0rdent.com/blog | Engineers, evaluators |
| “Building in public” update | Milestone-driven | Blog + social | Industry, community |
| “AI building AI” post | Monthly | Blog + social + dev communities | Developers, AI practitioners |
Internal → Public Flow
Not everything internal becomes public. Internal content is source material. The pipeline transforms it, removes internal details, and produces customer-appropriate output. An ADR about choosing BetterAuth becomes a blog post about “why we chose stateful sessions for enterprise security.” A sprint demo script about RLS policies becomes a feature guide about “how data isolation protects your organization.”

Launch Weeks
Launch weeks are a proven strategy for concentrated market attention. Supabase (15 launch weeks and counting), Resend, Vercel, and Cloudflare have all used them to establish category leadership. k0rdent should do the same.

Key Takeaways for k0rdent
From studying these examples:

- Features ship early, not on launch day. Build behind feature flags, test with early users, flip the flag on announcement day. This is already how we work.
- Visual features get the most attention. Prioritize announcements that can be shown, not just described. Atlas and Arc UIs are inherently visual — lean into this.
- Daily content plan per announcement. Each day needs: blog post, social thread (self-contained, not just a link), demo video, and optionally an email to waitlist.
- Main Stage + Build Stage. Not everything is a headline announcement. Bundle smaller improvements into a “Build Stage” or “bonus announcements” track alongside the main features.
- Art direction matters. A cohesive visual theme across the week makes it feel like an event, not just a sequence of blog posts. Resend commissions custom illustrations. Supabase creates themed micro-sites.
- List on launchweek.dev. Free visibility in the dev tools community. Submit our launch week when ready.
- Run a retrospective. Track impressions, signups, doc traffic, and GitHub activity. Compare to baseline. Feed learnings into the next launch week.
What a Launch Week Looks Like
One week. One major announcement per day (Main Stage). Smaller surprise releases throughout (Build Stage). Momentum builds across the week, with each day’s announcement reinforcing a narrative. Features are shipped behind flags and tested beforehand — launch day is the announcement, not the deploy.

Example: k0rdent Launch Week 1 — “From Rack to AI”

Main Stage (one per day, with blog + demo video + social thread):

| Day | Announcement | Content |
|---|---|---|
| Monday | “k0rdent is building in public” | Blog: vision, what we’re building, why. Public docs site goes live. |
| Tuesday | “Atlas: The operator console” | Blog: deep-dive on Atlas capabilities. Demo video. Feature guides. |
| Wednesday | “Arc: Self-service AI infrastructure” | Blog: customer experience. Demo video. Getting started guide. |
| Thursday | “API-first: build on k0rdent” | Blog: API philosophy. Interactive API reference launch. Code examples. |
| Friday | “How we use AI to build AI infrastructure” | Blog: our AI tooling, pipeline, process. Open source what we can. |

Build Stage (smaller surprise releases throughout the week):
- New Spectral-powered API linting rules (open sourced)
- CLI improvements and developer experience updates
- Documentation search powered by AI
- Community contribution guidelines
Launch Week Planning
Launch Week Cadence
- Launch Week 1: Public docs launch + initial product showcase
- Launch Week 2: ~3 months later. Major feature milestone (e.g., Stacks MVP, one-click deployments)
- Ongoing: Quarterly consideration for launch weeks when there’s enough to announce
Post-Launch Week
Following the Supabase “Top 10” format, publish a wrap-up post after launch week:

- Recap all announcements (Main Stage + Build Stage)
- Highlight community reception and metrics
- Thank contributors and early testers
- Tease what’s next

The wrap-up also doubles as a single shareable link for anyone who missed the week.
References
Study these before planning our first launch week:

- Resend: Launch Week Behind the Scenes — The best tactical breakdown. Covers product prioritization (large-impact, visual, requestable features), art direction, feature flags for soft-launching before the public reveal, daily content plan (blog + social + video + email), waitlist with double opt-in, and post-launch week metrics tracking. Key insight: they build features behind flags weeks before launch, get real user feedback, then flip the flag on launch day with confidence. Resend saw a 45% increase in impressions over their previous launch week with this approach. They also run a retro afterwards to improve the next one.
- launchweek.dev — Community-maintained directory tracking every dev tool launch week in the industry. Useful for timing (avoid colliding with bigger launches), format inspiration, and understanding the landscape. Notably powered by Mintlify. Key quotes: “Launch weeks have been great for both aligning the team and getting traction within the community” (Rory Wilding, Supabase COO). Companies of all sizes participate — from solo makers to 100-person teams. We should list ours here when the time comes.
- Supabase Launch Week — The gold standard format. Supabase pioneered this and is on Launch Week 15. Their structure: “Main Stage” (5 major announcements, one per day) plus “Build Stage” (surprise smaller releases throughout the week) plus a community hackathon plus worldwide community meetups. Read Ant Wilson’s (CTO) advice: ship features to prod a week early, not on launch day itself.
- Supabase Launch Week 15: Top 10 — Wrap-up post format showing how to recap and amplify a launch week after it ends. Good template for our own post-launch-week content.
“AI Building AI” Content Strategy
This is a unique angle. We’re using AI tools to build AI infrastructure products. Sharing how we do this builds credibility, attracts talent, and positions k0rdent as a thought leader.

Content Ideas
| Topic | Format | Source Material |
|---|---|---|
| How we use Claude to generate sprint demo scripts | Blog post | This workflow, demo script skill, before/after examples |
| Our AI-powered docs pipeline | Blog post + open source | Pipeline architecture, prompt library, quality results |
| Cursor + Claude for enterprise UI development | Blog post | Daily workflow, .cursorrules, AGENTS.md approach |
| How we evaluate AI-generated content quality | Blog post | Quality framework, Vale rules, benchmark methodology |
| Building OpenAPI-first with AI assistance | Blog post | Hono/Zod → spec → AI enrichment → published docs |
| AI code review: what works and what doesn’t | Blog post | ADR-005 decisions, tool evaluations |
| Designing for AI: how we structure code for AI comprehension | Blog post | Monorepo patterns, type safety, naming conventions |
| Our prompt engineering for technical writing | Blog post | Prompt library, golden examples, iteration process |
Monthly Cadence
One “AI building AI” post per month. Rotate through topics. Always include concrete examples and honest assessments — what worked, what didn’t, what we’d do differently.

Quality Standards
OpenAPI Quality (Spectral)
| Rule Category | Examples | Severity |
|---|---|---|
| Structural validity | Valid OpenAPI 3.1 syntax | Error |
| Operation descriptions | Every endpoint has a description | Error |
| Parameter descriptions | Every parameter documented | Warning |
| Response envelope | Uses { success, data, meta } pattern | Error |
| Field naming | camelCase for all properties | Error |
| Security | OWASP ruleset compliance | Warning |
| Examples | Response examples present and valid | Warning |
| k0rdent conventions | Prefixed IDs, ISO timestamps | Warning |
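
A few of these rules, expressed as a Spectral ruleset sketch. The custom rule name and the exact JSONPath are assumptions; extends: spectral:oas, the operation-description rule, and the casing function are standard Spectral features:

```yaml
# .spectral.yaml — illustrative k0rdent ruleset
extends: ["spectral:oas"]
rules:
  # Built-in rule: every operation must have a description.
  operation-description: error
  # Custom rule: all schema property names use camelCase.
  k0rdent-camel-case-properties:
    description: All schema properties use camelCase
    severity: error
    given: $.components.schemas[*].properties.*~
    then:
      function: casing
      functionOptions:
        type: camel
```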
Prose Quality (Vale)
| Rule | Examples | Applies To |
|---|---|---|
| Terminology | “bare metal server” not “baremetal” | All |
| Banned internal tech | No Drizzle, Trigger.dev, RLS, BetterAuth, Hono, Zod | Customer-facing |
| Minimizers | No “simply”, “just”, “easy” | All |
| AI slop | No “dive into”, “realm”, “landscape”, “robust” | All |
| Active voice | Required in procedural docs | Guides |
| Sentence length | Flag sentences > 30 words | All |
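
Vale rules are small YAML files; the minimizer rule above might look like this (style path and message wording are assumptions):

```yaml
# styles/k0rdent/Minimizers.yml — illustrative Vale rule
extends: existence
message: "Avoid '%s'; it minimizes real effort."
level: warning
ignorecase: true
tokens:
  - simply
  - just
  - easy
```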
Content Quality Gates
| Check | Tool | Severity | Auto-merge eligible? |
|---|---|---|---|
| Valid MDX syntax | remark/rehype | Error | Blocks merge |
| Frontmatter complete | Custom | Error | Blocks merge |
| No broken links | markdown-link-check | Error | Blocks merge |
| OpenAPI spec valid | Spectral | Error | Blocks merge |
| Prose quality | Vale | Warning | Flags for review |
| Terminology consistency | Vale | Warning | Flags for review |
| Code examples parse | Custom | Error | Blocks merge |
| API spec conformance | Custom | Error | Blocks merge |
| Claim grounding | Custom | Warning | Flags for review |
Human Review Requirements
| Content Type | Auto-merge if QC passes? | Required Reviewers |
|---|---|---|
| API reference (additive) | Yes | None |
| API reference (breaking) | No | Engineering lead |
| Technical changelog | Yes (after calibration) | None |
| Product release notes | No | Product + Engineering lead |
| Feature guides (new) | No | Product + Engineering |
| Feature guides (update) | No | Engineering lead |
| Blog posts | No | Author + Editorial |
| Demo scripts | No | Author (always hand-edited) |
Implementation Roadmap
Phase 1: Foundation (Weeks 1-3)
Goal: API docs linted and published, style infrastructure in place.

- Set up Mintlify with k0rdent branding and navigation structure
- Configure Mintlify OpenAPI integration with Hono-generated spec
- Set up Spectral with k0rdent custom ruleset (API linting)
- Set up Vale with k0rdent custom rules (prose linting)
- Write style guide and glossary manually (foundation for all AI generation)
- Set up changesets in monorepo (@changesets/cli + config)
- CI: Spectral lint on every PR touching API routes
- CI: Export OpenAPI spec on merge → update Mintlify
- Write 1 golden example per content type (by hand)
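
The Spectral CI gate from this phase might be wired roughly like this; workflow paths and script names are assumptions:

```yaml
# .github/workflows/api-lint.yml — illustrative sketch
name: api-lint
on:
  pull_request:
    paths:
      - "apps/api/**"
jobs:
  spectral:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      # Export the OpenAPI spec from the Hono app, then lint it.
      - run: pnpm build && pnpm export:openapi # assumed script names
      - run: npx @stoplight/spectral-cli lint openapi.json --ruleset .spectral.yaml
```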
Phase 2: Changelog & Release Notes (Weeks 4-6)
Goal: Every sprint produces draft release notes automatically.

- Integrate changesets with release workflow
- Build release notes generator (changesets + commits + PRs + demo script → Claude → narrative)
- Build Slack announcement generator
- Quality gate pipeline for generated content
- Run for Sprint 5 as first real test
- Iterate on prompt quality based on human edit rate
Phase 3: Product Docs & Guides (Weeks 7-10)
Goal: Sprint demo scripts produce feature guide drafts.

- Build demo script → feature guide generator
- Build doc gap scanner (compare published docs vs shipped features)
- AI enrichment for API docs (examples, patterns, migration guides)
- Interactive mode: iterate on drafts in Cursor
- Tune demo script Claude Skill based on actual usage
- Build demo outline generator from GitHub Project issues
Phase 4: Blog + Launch Week Prep (Weeks 11-14)
Goal: Regular publishing cadence, launch week content ready.

- Blog post pipeline (outline + sources → AI draft → author edit)
- Write first “AI building AI” blog post
- Set up Mintlify analytics
- Plan Launch Week 1 theme and daily topics
- Create launch week content backlog
- Establish bi-weekly publishing cadence
- Configure RSS/Atom feed for changelog and blog
Open Questions
- Mintlify vs Fumadocs: Stay with Mintlify (managed, AI-native) or self-host with Fumadocs (full control, Next.js native)? Can decide after Phase 1 experience.
- Changesets + conventional commits: Both can coexist. Changesets for versioning and structured changelog. Conventional commits for commit hygiene and as additional signal for AI generation. Confirm this is the intended setup.
- Demo script skill quality: Test the existing Claude Desktop skill. Key inputs: outline, past scripts as examples, timing constraints, voice guide. Iterate based on output quality.
- Launch Week 1 timing: When is there enough to announce across 4-5 days? Likely after Stacks MVP + public docs + Arc onboarding are ready.
- Open source the pipeline? The docs pipeline itself could be open sourced as a “how we do it” artifact. Good for credibility but adds maintenance burden. Decide after it’s battle-tested.
- API playground: Interactive try-it-out in API reference? Mintlify supports this natively. Requires auth integration.