
AI-Powered Content Pipeline v2

Status: ADR-010 — Documentation, Release Notes, Changelogs & Content Strategy
Updated: February 2026

Vision

k0rdent publishes four categories of content — API docs, changelog/release notes, product guides, and blog posts — using AI to draft, humans to review, and automated quality gates to enforce standards. Everything ships publicly as part of our build-in-public strategy to establish market leadership in AI infrastructure. Non-negotiable: A human reviews every piece of published content. AI accelerates — it doesn’t replace editorial judgment.

Four Content Domains

The pipeline targets four content domains today, with more planned as the system matures. Each has its own trigger, source material, AI workflow, and human review requirements.
| # | Domain | Key Output | Auto-merge? |
| --- | --- | --- | --- |
| 1 | API Reference Documentation | Endpoint docs, code examples, migration guides | Additive changes only |
| 2 | Changelog & Release Notes | Technical changelog + customer release notes + Slack post | Never |
| 3 | Product Documentation & Guides | Feature walkthroughs, getting started guides, concept explainers | Never |
| 4 | Blog Posts & Public Communications | Draft posts, announcement posts, technical deep-dives | Never |

1. API Reference Documentation

Our Hono + Zod stack generates OpenAPI 3.1 specs. The pipeline lints, validates, enriches, and publishes automatically. AI produces:
  • Human-readable endpoint descriptions
  • Multi-language code examples (cURL, TypeScript, Python)
  • Common usage patterns
  • Migration guides for breaking changes
Source material:
  • OpenAPI spec (auto-generated)
  • Spec diffs between versions
  • Route handler source code
  • Existing published docs
Human review: Additive changes can auto-merge if quality gates pass. Breaking changes always require manual review.

Pipeline

(pipeline diagram)

2. Changelog & Release Notes

Two tiers from the same sources: a developer-facing technical changelog (Keep a Changelog format) and a customer-facing product release notes post (benefit-oriented) plus a Slack announcement. AI produces:
  • Structured changelog entries
  • Narrative release notes organized by Atlas/Arc audience
  • Concise Slack posts
Source material:
  • Changesets (.changeset/*.md files)
  • Conventional commits
  • Merged PRs + labels
  • Sprint demo scripts
  • Optional user-provided highlights
Human review: Always for product release notes. Technical changelog can be lighter-touch after calibration.
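For context, a changeset file is markdown with a frontmatter block mapping package names to semver bumps, followed by a human-written summary. A minimal sketch of what the changeset-reader block consumes (type and function names here are illustrative, not the real implementation):

```typescript
// Parse one .changeset/*.md file into a structured change.
// ChangesetEntry and parseChangeset are hypothetical names for this sketch.
interface ChangesetEntry {
  bumps: Record<string, "patch" | "minor" | "major">; // package -> bump level
  summary: string;                                    // prose after the frontmatter
}

function parseChangeset(raw: string): ChangesetEntry {
  const match = raw.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!match) throw new Error("not a changeset file");
  const [, frontmatter, body] = match;
  const bumps: ChangesetEntry["bumps"] = {};
  for (const line of frontmatter.split("\n")) {
    // Frontmatter lines look like: "@k0rdent/api": minor
    const m = line.match(/^"?([^":]+)"?:\s*(patch|minor|major)\s*$/);
    if (m) bumps[m[1]] = m[2] as "patch" | "minor" | "major";
  }
  return { bumps, summary: body.trim() };
}
```

For example, `parseChangeset('---\n"@k0rdent/api": minor\n---\n\nAdd pool filtering.')` yields a `minor` bump for `@k0rdent/api` with the summary text as prose.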

Pipeline

(pipeline diagram)

3. Product Documentation & Guides

Highest editorial attention of any domain. AI drafts from internal material; humans shape the voice and structure. AI produces:
  • Feature walkthroughs (from demo scripts)
  • Getting started guides
  • Troubleshooting docs
  • Architecture overviews
  • Concept explainers
Source material:
  • Sprint demo scripts
  • Internal design docs and ADRs
  • System requirements
  • API specs
  • Meeting transcripts
  • User-provided notes
Human review: Always required. Product docs never auto-merge.

Pipeline

(pipeline diagram)

4. Blog Posts & Public Communications

The most creative domain. AI scaffolds the structure and initial draft; the author provides voice, narrative, and editorial judgment. AI produces:
  • Draft posts from outlines
  • “Building in public” updates
  • Technical deep-dives
  • Announcement posts
  • “How we use AI to build AI” content
Source material:
  • Sprint work
  • ADRs and engineering decisions
  • Industry context
  • Author outlines
  • Meeting transcripts
Human review: Always. Blog posts are the most public-facing content and never auto-merge.

Pipeline

(pipeline diagram)

Demo Script Pipeline

(demo script pipeline diagram)

Building Blocks

Complex pipelines are built from simple, composable pieces. Each block does one thing and can be developed, tested, and used independently.

Sources (Input Blocks)

These extract and normalize content from wherever it lives. Each produces a standard SourceDocument that any pipeline can consume.
| Block | What It Does | Input | Output |
| --- | --- | --- | --- |
| changeset-reader | Reads .changeset/*.md files | .changeset/ directory | Structured changes with semver + summary |
| commit-parser | Parses conventional commits | Git log range | Grouped changes by type/scope |
| pr-collector | Fetches merged PRs from GitHub | Repo + date range | PR titles, descriptions, labels, linked issues |
| spec-exporter | Exports OpenAPI spec from Hono routes | App build | openapi.json |
| spec-differ | Diffs two OpenAPI specs | Old spec + new spec | Breaking/additive/deprecation changes |
| demo-script-parser | Extracts features from demo scripts | Demo script markdown | Structured feature sections + narratives |
| transcript-processor | Extracts decisions from meeting transcripts | Raw transcript text | Decisions, action items, key quotes |
| doc-indexer | Reads existing published docs | Mintlify content dir | Indexed doc sections for context |
| issue-collector | Pulls sprint issues from GitHub Project | Project board ID | Categorized issues for outline generation |
| user-input | Accepts freeform notes or context | Text from author | Prioritized source material |
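A sketch of what that standard SourceDocument shape could look like in TypeScript (field names are assumptions for illustration; the real contract may differ):

```typescript
// Hypothetical shape for the normalized SourceDocument every source block
// emits. Field names are illustrative, not the actual contract.
interface SourceDocument {
  source: string;                      // which block produced it, e.g. "commit-parser"
  kind: "changeset" | "commit" | "pr" | "spec" | "demo-script"
      | "transcript" | "doc" | "issue" | "user-input";
  title: string;
  body: string;                        // normalized markdown or plain text
  metadata: Record<string, unknown>;   // block-specific extras (labels, bump level, ...)
}

// Any pipeline can consume a mixed bag of documents without caring
// where each one came from.
function summarizeSources(docs: SourceDocument[]): string {
  return docs.map((d) => `[${d.kind}] ${d.title}`).join("\n");
}
```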

Generators (Transform Blocks)

These take source documents + prompt templates and produce draft content via AI.
| Block | What It Does | Model | Sources It Consumes |
| --- | --- | --- | --- |
| changelog-generator | Produces Keep a Changelog entries | Haiku | changesets, commits, PRs |
| release-notes-generator | Produces benefit-oriented narrative | Sonnet | changesets, commits, PRs, demo script, user input |
| slack-announcement-generator | Produces concise Slack post | Haiku | release notes output |
| api-enricher | Adds descriptions + examples to endpoints | Sonnet | spec, spec-diff, route handler code |
| migration-guide-generator | Documents breaking changes | Sonnet | spec-diff, existing docs |
| feature-guide-generator | Transforms demo narration into user guide | Sonnet | demo script, API spec, related docs |
| blog-drafter | Produces structured blog post draft | Opus/Sonnet | varies by topic type |
| demo-outline-generator | Creates sprint demo outline from issues | Sonnet | sprint issues, previous demo scripts |
| demo-script-generator | Produces timed demo script from outline | Sonnet | outline, previous scripts as examples |

Quality Gates (Validation Blocks)

These check generated content before it reaches a human.
| Block | What It Checks | Tool |
| --- | --- | --- |
| spectral-lint | OpenAPI spec validity + style rules | Spectral |
| vale-lint | Prose quality, terminology, banned words | Vale |
| mdx-validate | Valid MDX syntax, frontmatter complete | remark/rehype |
| link-check | No broken internal or external links | markdown-link-check |
| jargon-filter | No internal tech names in customer content | Custom rules |
| spec-conformance | Code examples match actual API schema | Custom validator |
| claim-grounding | Release note claims map to real commits | Custom checker |
| heading-hierarchy | No skipped heading levels | markdownlint |
| ai-slop-detector | Catches overused AI phrases | Vale custom rules |

Orchestration (Workflow Blocks)

These compose sources, generators, and quality gates into end-to-end pipelines.
| Block | What It Orchestrates | Trigger |
| --- | --- | --- |
| api-docs-pipeline | spec-exporter → spectral-lint → spec-differ → api-enricher → quality gates → PR | PR merge with API changes |
| release-pipeline | changeset-reader + commit-parser + pr-collector + demo-script-parser → changelog-generator + release-notes-generator + slack-announcement-generator → quality gates → PR | Sprint end / release tag |
| guide-pipeline | demo-script-parser + user-input → feature-guide-generator → quality gates → PR | Manual trigger or feature ship |
| blog-pipeline | Various sources → blog-drafter → quality gates → PR | Manual trigger |
| demo-pipeline | issue-collector → demo-outline-generator → (human edit) → demo-script-generator → (human edit) → final script | Sprint approaching end |

How Blocks Compose

Each block is a function or script that can run independently:
# Run a single source block
pnpm pipeline source:commits --from sprint-4 --to sprint-5

# Run a single generator
pnpm pipeline generate:release-notes --sprint 5 --dry-run

# Run a full pipeline
pnpm pipeline run:release --sprint 5

# Run quality checks on any content
pnpm pipeline check content/blog/sprint-5-release.mdx
Blocks can also run as services, Trigger.dev tasks, Claude tool use, OpenAI assistants, or plain TypeScript functions. The interface is the same — input source documents, output content or validation results.
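That shared interface can be sketched as a single function type (a simplified stand-in; names like `Block` and `runPipeline` are illustrative, and the real SourceDocument carries more fields):

```typescript
// Every block is an async function from source documents to either
// generated content or a validation result.
type SourceDocument = { source: string; title: string; body: string };
type BlockResult =
  | { kind: "content"; markdown: string }
  | { kind: "validation"; pass: boolean; messages: string[] };

type Block = (inputs: SourceDocument[]) => Promise<BlockResult>;

// Composing blocks is then just sequencing functions: run a generator,
// feed its draft through each quality gate, and stop on a hard failure.
async function runPipeline(generator: Block, gates: Block[], sources: SourceDocument[]) {
  const draft = await generator(sources);
  if (draft.kind !== "content") throw new Error("generator must produce content");
  for (const gate of gates) {
    const result = await gate([{ source: "draft", title: "draft", body: draft.markdown }]);
    if (result.kind === "validation" && !result.pass) {
      return { draft, blocked: result.messages };
    }
  }
  return { draft, blocked: null as string[] | null };
}
```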

Tooling Stack

What We Have

| Tool | Role | Status |
| --- | --- | --- |
| Mintlify | Docs hosting, blog, API reference, AI search, llms.txt | ADR-010 selected |
| GitHub Actions | CI/CD triggers, quality gates | In use |
| Anthropic Claude API | Primary AI generation | Available |
| OpenAI API | Alternative generation, workflows, assistants | Available |
| Cursor | Interactive drafting, iteration | In use daily |
| Hono + Zod | OpenAPI 3.1 spec generation | In use |
| Conventional commits | Structured commit messages | Enforced |
| Changesets | Monorepo versioning + changelog | Switching to |
| GitHub Projects | Sprint tracking, issue management | In use |
| Trigger.dev | Durable workflow orchestration | In use |

What We Add

| Tool | Role | Why |
| --- | --- | --- |
| Spectral | OpenAPI linting + style enforcement | Industry standard, custom rulesets, VS Code extension for real-time linting, OWASP security rules |
| Vale | Prose linting with custom k0rdent rules | Same concept as Spectral but for written content |
| oasdiff | OpenAPI spec diffing | Detects breaking vs additive changes between versions |

OpenAPI Quality with Spectral

Spectral lints and validates our OpenAPI spec against both the standard and our custom API style rules. This runs in CI on every PR that touches API routes and locally via VS Code extension.
# .spectral.yaml
extends:
  - "spectral:oas"  # Built-in OpenAPI rules
  - "https://unpkg.com/@stoplight/spectral-owasp-ruleset/dist/ruleset.mjs"  # OWASP security

rules:
  # k0rdent API standards
  k0rdent-response-envelope:
    description: "All responses must use { success, data|error, meta } envelope"
    given: "$.paths[*][*].responses[*].content['application/json'].schema"
    then:
      function: schema
      functionOptions:
        schema:
          required: ["properties"]
          properties:
            properties:
              required: ["success", "meta"]

  k0rdent-resource-ids:
    description: "Resource IDs must use prefixed format (srv_, cls_, org_, etc.)"
    given: "$.paths[*][*].responses[*].content['application/json']..examples..id"
    severity: warn
    then:
      function: pattern
      functionOptions:
        match: "^(srv|cls|org|stk|run|pool|key|evt|usr|prj)_"

  k0rdent-camelcase-fields:
    description: "All JSON fields must be camelCase"
    given: "$.paths[*][*]..properties[*]~"
    then:
      function: casing
      functionOptions:
        type: camel

  operation-description:
    description: "Every operation must have a description"
    given: "$.paths[*][*]"
    then:
      field: description
      function: truthy

  parameter-description:
    description: "Every parameter must have a description"
    given: "$.paths[*][*].parameters[*]"
    then:
      field: description
      function: truthy
Run in CI and locally:
# CI: Lint on every PR touching API routes
spectral lint openapi.json --ruleset .spectral.yaml

# Local: VS Code extension provides real-time feedback while developing routes

Orchestration Options

We’re not limited to GitHub Actions. Pipelines can run as:
| Orchestrator | Best For | How |
| --- | --- | --- |
| GitHub Actions | CI/CD triggers (PR merge, tag push) | Workflow YAML files |
| Trigger.dev tasks | Long-running generation, durable workflows | TypeScript task definitions |
| CLI scripts | Local development, manual runs, iteration | pnpm pipeline ... commands |
| Claude tool use | Interactive drafting in Cursor or Claude Desktop | MCP tools or direct API |
| OpenAI Assistants/Workflows | Alternative generation, specialized agents | Assistants API with file search |
| Custom services | Scheduled jobs, webhook handlers, Slack integrations | Hono API routes or standalone |

AI Model Selection

| Task | Model | Rationale |
| --- | --- | --- |
| Technical changelog | Claude Haiku | Fast structured transformation |
| Slack announcements | Claude Haiku | Lightweight summarization |
| Release notes | Claude Sonnet | Speed + narrative quality |
| API doc enrichment | Claude Sonnet | Code comprehension |
| Feature guides | Claude Sonnet | Structured long-form |
| Architecture docs | Claude Opus | Complex synthesis |
| Blog post drafts | Claude Opus or Sonnet | Depends on depth |
| Demo outline from issues | Claude Sonnet | Understanding sprint context |
| Demo script generation | Claude Sonnet | Follows structured template |
| Quality evaluation | Claude Haiku | Fast pass/fail checks |
| Transcript preprocessing | Claude Haiku | Extract structure from noise |
| Alternative voice | OpenAI GPT-4o | Variety, second opinion |

Demo Script Workflow

The demo script has its own pipeline because it feeds everything else. The workflow below is the author's existing process, enhanced with AI at each step:

Step 1: Generate Outline

Manual path: Author writes short bullet points of what shipped and what to demo.

AI-assisted path: AI pulls issues from the GitHub Project board for the current sprint, categorizes by feature area, and produces a structured outline. Author cleans up, reorders, adds emphasis.
Sprint 5 Issues → AI organizes by:
  - Primary features (60-90 sec each, max 3)
  - Secondary features (20-30 sec each)
  - Architectural/process updates
  - What's next
→ Author reorders based on narrative flow

Step 2: Generate Script

AI takes the outline + past sprint demo scripts as few-shot examples and generates a timed script following the template:
  • Opening (25-30 sec): Context, what this sprint was about
  • Primary features (60-90 sec each): Screen directions, narration, key moments
  • Secondary features (20-30 sec each): Quick hits
  • What’s next (30-45 sec): Next sprint priorities
  • Closing (20-25 sec): Recap, reinforcement of core talking points
The Claude Skill in Claude Desktop can handle this generation step. If the skill needs tuning, the key inputs are: outline, past scripts as examples, timing constraints, and the k0rdent voice/tone guide.
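Those key inputs can be assembled into a single prompt. A hedged sketch of that assembly step (none of these names come from the actual skill; the prompt wording and structure are assumptions for illustration):

```typescript
// Build the demo-script generation prompt from outline, past scripts,
// and the voice guide. All names and wording here are hypothetical.
interface DemoPromptInputs {
  outline: string;
  pastScripts: string[];   // previous sprint scripts, used as few-shot examples
  voiceGuide: string;      // k0rdent voice/tone guide
}

function buildDemoScriptPrompt({ outline, pastScripts, voiceGuide }: DemoPromptInputs): string {
  const examples = pastScripts
    .map((s, i) => `<example_script index="${i + 1}">\n${s}\n</example_script>`)
    .join("\n\n");
  return [
    "You are drafting a timed sprint demo script.",
    "Timing template: opening 25-30s, primary features 60-90s each,",
    "secondary features 20-30s each, what's next 30-45s, closing 20-25s.",
    `Voice guide:\n${voiceGuide}`,
    `Past scripts to imitate:\n${examples}`,
    `Outline for this sprint:\n${outline}`,
  ].join("\n\n");
}
```

The resulting string would be sent as the user message of a Claude API call; keeping the builder pure makes it easy to test and iterate on without burning tokens.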

Step 3: Author Edit Pass (Iterative)

This is where the human judgment matters most:
  1. Intro: Tune the opening message — what are the core talking points that frame everything? What narrative thread connects the demos?
  2. Feature sections: For each demo section, add/remove/update content. Ensure screen directions match actual UI. Adjust narration to sound natural when spoken.
  3. Conclusion: Quick recap that reinforces the original talking points. Brief look at what’s next. Thank you or closing messaging.
The author can iterate with AI on specific sections (“make the cluster creation section more concise”, “add a transition between the filtering demo and the multi-tenancy section”).

Step 4: Production Notes + Record

Final script includes a pre-recording checklist, key moments to nail, and recording tips. Author records the demo video.

Step 5: Script Feeds Other Pipelines

The finished demo script becomes high-quality source material for:
  • Release notes pipeline: Demo narration is already benefit-oriented language
  • Feature guide pipeline: “Here’s what I’m showing you” → “Here’s how you do it”
  • Blog post pipeline: Sprint summary or feature deep-dive

Publishing Cadence

Internal (Team + Stakeholders)

| Content | Cadence | Channel | Audience |
| --- | --- | --- | --- |
| Sprint demo video | Bi-weekly (sprint end) | Slack + stakeholder meeting | Team, leadership, partners |
| Technical changelog | Every release | CHANGELOG.md in repo | Engineering team |
| Slack announcement | Bi-weekly | #k0rdent-updates channel | Broader org |
| Architecture decisions | As made | ADR in repo + Slack thread | Engineering team |
| Sprint retro notes | Bi-weekly | Internal docs | Team |

Public (Docs Site + Blog + Community)

| Content | Cadence | Channel | Audience |
| --- | --- | --- | --- |
| API reference | Continuous (on every API change) | docs.k0rdent.com/api | Developers |
| Product docs/guides | Sprint-aligned updates | docs.k0rdent.com/docs | Operators, customers |
| Product release notes | Bi-weekly | docs.k0rdent.com/blog | Everyone |
| Technical deep-dive | Monthly | docs.k0rdent.com/blog | Engineers, evaluators |
| “Building in public” update | Milestone-driven | Blog + social | Industry, community |
| “AI building AI” post | Monthly | Blog + social + dev communities | Developers, AI practitioners |

Internal → Public Flow

Not everything internal becomes public. Internal content is source material. The pipeline transforms it, removes internal details, and produces customer-appropriate output. An ADR about choosing BetterAuth becomes a blog post about “why we chose stateful sessions for enterprise security.” A sprint demo script about RLS policies becomes a feature guide about “how data isolation protects your organization.”

Launch Weeks

Launch weeks are a proven strategy for concentrated market attention. Supabase (15 launch weeks and counting), Resend, Vercel, and Cloudflare have all used them to establish category leadership. k0rdent should do the same.

Key Takeaways for k0rdent

From studying these examples:
  1. Features ship early, not on launch day. Build behind feature flags, test with early users, flip the flag on announcement day. This is already how we work with feature flags.
  2. Visual features get the most attention. Prioritize announcements that can be shown, not just described. Atlas and Arc UIs are inherently visual — lean into this.
  3. Daily content plan per announcement. Each day needs: blog post, social thread (self-contained, not just a link), demo video, and optionally an email to waitlist.
  4. Main Stage + Build Stage. Not everything is a headline announcement. Bundle smaller improvements into a “Build Stage” or “bonus announcements” track alongside the main features.
  5. Art direction matters. A cohesive visual theme across the week makes it feel like an event, not just a sequence of blog posts. Resend commissions custom illustrations. Supabase creates themed micro-sites.
  6. List on launchweek.dev. Free visibility in the dev tools community. Submit our launch week when ready.
  7. Run a retrospective. Track impressions, signups, doc traffic, and GitHub activity. Compare to baseline. Feed learnings into the next launch week.

What a Launch Week Looks Like

One week. One major announcement per day (Main Stage). Smaller surprise releases throughout (Build Stage). Each day's announcement reinforces a narrative, building momentum across the week. Features are shipped behind flags and tested beforehand: launch day is the announcement, not the deploy.

Example: k0rdent Launch Week 1 — “From Rack to AI”

Main Stage (one per day, with blog + demo video + social thread):

| Day | Announcement | Content |
| --- | --- | --- |
| Monday | “k0rdent is building in public” | Blog: vision, what we’re building, why. Public docs site goes live. |
| Tuesday | “Atlas: The operator console” | Blog: deep-dive on Atlas capabilities. Demo video. Feature guides. |
| Wednesday | “Arc: Self-service AI infrastructure” | Blog: customer experience. Demo video. Getting started guide. |
| Thursday | “API-first: build on k0rdent” | Blog: API philosophy. Interactive API reference launch. Code examples. |
| Friday | “How we use AI to build AI infrastructure” | Blog: our AI tooling, pipeline, process. Open source what we can. |
Build Stage (smaller announcements dropped alongside main stage):
  • New Spectral-powered API linting rules (open sourced)
  • CLI improvements and developer experience updates
  • Documentation search powered by AI
  • Community contribution guidelines

Launch Week Planning

Launch Week Cadence

  • Launch Week 1: Public docs launch + initial product showcase
  • Launch Week 2: ~3 months later. Major feature milestone (e.g., Stacks MVP, one-click deployments)
  • Ongoing: Quarterly consideration for launch weeks when there’s enough to announce
Not every quarter needs a launch week. Only when there’s a genuine narrative across 4-5 announcements that build on each other. Resend calls their overall approach the “Heartbeat Framework” — launch weeks are the peaks, steady content is the heartbeat between them.

Post-Launch Week

Following the Supabase “Top 10” format, publish a wrap-up post after launch week:
  • Recap all announcements (main stage + build stage)
  • Highlight community reception and metrics
  • Thank contributors and early testers
  • Tease what’s next
  • This also serves as a single shareable link for anyone who missed the week

References

Study these before planning our first launch week:
  • Resend: Launch Week Behind the Scenes — The best tactical breakdown. Covers product prioritization (large-impact, visual, requestable features), art direction, feature flags for soft-launching before the public reveal, daily content plan (blog + social + video + email), waitlist with double opt-in, and post-launch week metrics tracking. Key insight: they build features behind flags weeks before launch, get real user feedback, then flip the flag on launch day with confidence. Resend saw a 45% increase in impressions over their previous launch week with this approach. They also run a retro afterwards to improve the next one.
  • launchweek.dev — Community-maintained directory tracking every dev tool launch week in the industry. Useful for timing (avoid colliding with bigger launches), format inspiration, and understanding the landscape. Notably powered by Mintlify. Key quotes: “Launch weeks have been great for both aligning the team and getting traction within the community” (Rory Wilding, Supabase COO). Companies of all sizes participate — from solo makers to 100-person teams. We should list ours here when the time comes.
  • Supabase Launch Week — The gold standard format. Supabase pioneered this and is on Launch Week 15. Their structure: “Main Stage” (5 major announcements, one per day) plus “Build Stage” (surprise smaller releases throughout the week) plus a community hackathon plus worldwide community meetups. Read Ant Wilson’s (CTO) advice: ship features to prod a week early, not on launch day itself.
  • Supabase Launch Week 15: Top 10 — Wrap-up post format showing how to recap and amplify a launch week after it ends. Good template for our own post-launch-week content.

“AI Building AI” Content Strategy

This is a unique angle. We’re using AI tools to build AI infrastructure products. Sharing how we do this builds credibility, attracts talent, and positions k0rdent as a thought leader.

Content Ideas

| Topic | Format | Source Material |
| --- | --- | --- |
| How we use Claude to generate sprint demo scripts | Blog post | This workflow, demo script skill, before/after examples |
| Our AI-powered docs pipeline | Blog post + open source | Pipeline architecture, prompt library, quality results |
| Cursor + Claude for enterprise UI development | Blog post | Daily workflow, .cursorrules, AGENTS.md approach |
| How we evaluate AI-generated content quality | Blog post | Quality framework, Vale rules, benchmark methodology |
| Building OpenAPI-first with AI assistance | Blog post | Hono/Zod → spec → AI enrichment → published docs |
| AI code review: what works and what doesn’t | Blog post | ADR-005 decisions, tool evaluations |
| Designing for AI: how we structure code for AI comprehension | Blog post | Monorepo patterns, type safety, naming conventions |
| Our prompt engineering for technical writing | Blog post | Prompt library, golden examples, iteration process |

Monthly Cadence

One “AI building AI” post per month. Rotate through topics. Always include concrete examples and honest assessments — what worked, what didn’t, what we’d do differently.

Quality Standards

OpenAPI Quality (Spectral)

| Rule Category | Examples | Severity |
| --- | --- | --- |
| Structural validity | Valid OpenAPI 3.1 syntax | Error |
| Operation descriptions | Every endpoint has a description | Error |
| Parameter descriptions | Every parameter documented | Warning |
| Response envelope | Uses { success, data, meta } pattern | Error |
| Field naming | camelCase for all properties | Error |
| Security | OWASP ruleset compliance | Warning |
| Examples | Response examples present and valid | Warning |
| k0rdent conventions | Prefixed IDs, ISO timestamps | Warning |

Prose Quality (Vale)

| Rule | Examples | Applies To |
| --- | --- | --- |
| Terminology | “bare metal server” not “baremetal” | All |
| Banned internal tech | No Drizzle, Trigger.dev, RLS, BetterAuth, Hono, Zod | Customer-facing |
| Minimizers | No “simply”, “just”, “easy” | All |
| AI slop | No “dive into”, “realm”, “landscape”, “robust” | All |
| Active voice | Required in procedural docs | Guides |
| Sentence length | Flag sentences > 30 words | All |
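The minimizer and terminology rows above translate directly into small Vale style rules. A hedged sketch (file path, rule names, and messages are assumptions; the actual k0rdent style package may differ):

```yaml
# styles/k0rdent/Minimizers.yml (assumed path): flag words that trivialize work
extends: existence
message: "Avoid '%s'; it can read as dismissive of the reader's effort."
level: error
ignorecase: true
tokens:
  - simply
  - just
  - easy
```

A companion rule (e.g. a hypothetical `styles/k0rdent/Terminology.yml` with `extends: substitution` and `swap: {baremetal: bare metal server}`) covers the preferred-term row the same way.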

Content Quality Gates

| Check | Tool | Severity | Auto-merge eligible? |
| --- | --- | --- | --- |
| Valid MDX syntax | remark/rehype | Error | Blocks merge |
| Frontmatter complete | Custom | Error | Blocks merge |
| No broken links | markdown-link-check | Error | Blocks merge |
| OpenAPI spec valid | Spectral | Error | Blocks merge |
| Prose quality | Vale | Warning | Flags for review |
| Terminology consistency | Vale | Warning | Flags for review |
| Code examples parse | Custom | Error | Blocks merge |
| API spec conformance | Custom | Error | Blocks merge |
| Claim grounding | Custom | Warning | Flags for review |
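The merge-eligibility rule implied by this table is simple: any failed error-severity gate blocks the merge, while failed warnings only flag the PR for human review. A small sketch (names are illustrative):

```typescript
// Decide merge eligibility from gate results: errors block, warnings flag.
interface GateResult {
  check: string;
  severity: "error" | "warning";
  pass: boolean;
}

function evaluateGates(results: GateResult[]) {
  const failures = results.filter((r) => !r.pass);
  return {
    merge: failures.every((r) => r.severity !== "error"),          // any failed error blocks merge
    flags: failures.filter((r) => r.severity === "warning").map((r) => r.check),
  };
}
```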

Human Review Requirements

| Content Type | Auto-merge if QC passes? | Required Reviewers |
| --- | --- | --- |
| API reference (additive) | Yes | None |
| API reference (breaking) | No | Engineering lead |
| Technical changelog | Yes (after calibration) | None |
| Product release notes | No | Product + Engineering lead |
| Feature guides (new) | No | Product + Engineering |
| Feature guides (update) | No | Engineering lead |
| Blog posts | No | Author + Editorial |
| Demo scripts | No | Author (always hand-edited) |

Implementation Roadmap

Phase 1: Foundation (Weeks 1-3)

Goal: API docs linted and published, style infrastructure in place.
  • Set up Mintlify with k0rdent branding and navigation structure
  • Configure Mintlify OpenAPI integration with Hono-generated spec
  • Set up Spectral with k0rdent custom ruleset (API linting)
  • Set up Vale with k0rdent custom rules (prose linting)
  • Write style guide and glossary manually (foundation for all AI generation)
  • Set up changesets in monorepo (@changesets/cli + config)
  • CI: Spectral lint on every PR touching API routes
  • CI: Export OpenAPI spec on merge → update Mintlify
  • Write 1 golden example per content type (by hand)
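The two CI bullets above could look roughly like the following workflow. This is a sketch: the workflow name, trigger paths, and the spec-export command are assumptions, not the real setup.

```yaml
# .github/workflows/api-quality.yml (sketch; names and paths are assumptions)
name: api-quality
on:
  pull_request:
    paths:
      - "apps/api/**"            # assumed location of API route code
jobs:
  spectral:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
      - run: pnpm install --frozen-lockfile
      - run: pnpm pipeline export:spec    # assumed command producing openapi.json
      - run: npx @stoplight/spectral-cli lint openapi.json --ruleset .spectral.yaml
```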

Phase 2: Changelog & Release Notes (Weeks 4-6)

Goal: Every sprint produces draft release notes automatically.
  • Integrate changesets with release workflow
  • Build release notes generator (changesets + commits + PRs + demo script → Claude → narrative)
  • Build Slack announcement generator
  • Quality gate pipeline for generated content
  • Run for Sprint 5 as first real test
  • Iterate on prompt quality based on human edit rate

Phase 3: Product Docs & Guides (Weeks 7-10)

Goal: Sprint demo scripts produce feature guide drafts.
  • Build demo script → feature guide generator
  • Build doc gap scanner (compare published docs vs shipped features)
  • AI enrichment for API docs (examples, patterns, migration guides)
  • Interactive mode: iterate on drafts in Cursor
  • Tune demo script Claude Skill based on actual usage
  • Build demo outline generator from GitHub Project issues

Phase 4: Blog + Launch Week Prep (Weeks 11-14)

Goal: Regular publishing cadence, launch week content ready.
  • Blog post pipeline (outline + sources → AI draft → author edit)
  • Write first “AI building AI” blog post
  • Set up Mintlify analytics
  • Plan Launch Week 1 theme and daily topics
  • Create launch week content backlog
  • Establish bi-weekly publishing cadence
  • Configure RSS/Atom feed for changelog and blog

Open Questions

  1. Mintlify vs Fumadocs: Stay with Mintlify (managed, AI-native) or self-host with Fumadocs (full control, Next.js native)? Can decide after Phase 1 experience.
  2. Changesets + conventional commits: Both can coexist. Changesets for versioning and structured changelog. Conventional commits for commit hygiene and as additional signal for AI generation. Confirm this is the intended setup.
  3. Demo script skill quality: Test the existing Claude Desktop skill. Key inputs: outline, past scripts as examples, timing constraints, voice guide. Iterate based on output quality.
  4. Launch Week 1 timing: When is there enough to announce across 4-5 days? Likely after Stacks MVP + public docs + Arc onboarding are ready.
  5. Open source the pipeline? The docs pipeline itself could be open sourced as a “how we do it” artifact. Good for credibility but adds maintenance burden. Decide after it’s battle-tested.
  6. API playground: Interactive try-it-out in API reference? Mintlify supports this natively. Requires auth integration.

Summary

  • Building blocks, not monoliths. Every pipeline is composed from source blocks, generator blocks, and quality gate blocks that can be developed, tested, and orchestrated independently — through GitHub Actions, Trigger.dev, CLI scripts, Claude tools, OpenAI workflows, or custom services.
  • Changesets for versioning, Spectral for API quality, Vale for prose quality. Three automated quality layers catch issues before content reaches a human.
  • Internal feeds public. Sprint demos → release notes → feature guides → blog posts. One source, many outputs. The demo script is the keystone.
  • Launch weeks for market leadership. Concentrated attention on major milestones.
  • “AI building AI” as a content angle competitors cannot easily replicate.
  • Human in the loop, always. AI drafts; humans review, iterate, and ship.
  • The quality bar is Stripe/Vercel-level documentation. Start below it, iterate toward it. The style guide and golden examples are the foundation.