Style Guide & Quality Control System
Architecture: Three Layers
The answer to "skill or CI/CD" is both, plus a style guide document that feeds them. Each layer serves a different moment in the writing workflow:
- Style Guide = the source of truth (human-readable, referenced by authors and AI)
- Cursor Skill = the writing assistant (catches issues interactively, suggests improvements)
- CI Gates = the safety net (blocks PRs that violate non-negotiable rules)
Layer 1: Style Guide Document
Create [STYLE_GUIDE.md](STYLE_GUIDE.md) at the repo root. This is the single source of truth that feeds both the skill and the CI rules. It covers:
Voice & Tone
Expand the four bullets in [content-strategy.md](content-strategy.md) into actionable guidance:
- Authoritative but approachable. We know this space deeply. We share what we know without talking down. Write like you’re explaining something to a smart colleague who’s new to the specific topic.
- Direct and concrete. Lead with what matters. Use active voice. Show, don’t tell. “Deploy a cluster in 3 commands” beats “k0rdent provides robust cluster deployment capabilities.”
- Honest and specific. If something has limitations, say so. If a feature is experimental, label it. Credibility compounds.
- Build-in-public energy. We’re not a finished product announcing from on high. We’re building something ambitious and bringing people along for the ride. It’s okay to be excited.
Anti-patterns (with examples)
This is where guardrails live without killing creativity:

| Instead of (AI slop) | Write |
|---|---|
| "dive into" | "look at", "explore", "walk through" |
| "leverage" | "use" |
| "robust" / "seamless" / "cutting-edge" | Be specific about what makes it good |
| "simply" / "just" / "easy" | Remove it. If it were easy, they wouldn't need docs. |
| "In today's rapidly evolving landscape…" | Delete the sentence entirely. Start with the point. |
| "It's important to note that…" | Just say the thing. |
Branding Glossary (the terminology source of truth)
| Correct | Incorrect | Notes |
|---|---|---|
| k0rdent | K0rdent, Kordent, kordent | Always lowercase k, always zero |
| k0rdent AI API | K0rdent AI Api, k0rdent api | Product names are capitalized |
| Atlas | atlas, ATLAS | k0rdent Atlas (operator console) |
| Arc | arc, ARC | k0rdent Arc (self-service portal) |
| Kubernetes | kubernetes, k8s | Always capitalize; avoid k8s in public docs |
| OpenAPI | Open API, openAPI, Openapi | One word; capital O, capital API |
| bare metal | baremetal, bare-metal | Two words, no hyphen |
| API reference | API Reference, api reference | Lowercase “reference” |
| AI infrastructure | AI Infrastructure | Lowercase unless title case heading |
Content-Type-Specific Rules
Brief guidance per content type (API docs: imperative voice for descriptions, real examples, not `foo`/`bar`; guides: second person "you", task-oriented headings; blog: first person plural "we", narrative structure).
Formatting Standards
Heading hierarchy, code block conventions (always specify language), frontmatter requirements, image alt text rules.
Layer 2: Automated CI Quality Gates
Vale (Prose Linting)
Vale is a prose linter that enforces style rules as code. Install it in the project and create custom k0rdent rules. File structure:

**.vale.ini:**
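A minimal sketch of the config (the `StylesPath` location is an assumption; adjust to the repo layout):

```ini
# Custom styles live here, relative to this file
StylesPath = styles

# Surface everything down to suggestions; severity tuning happens per rule
MinAlertLevel = suggestion

[*.{md,mdx}]
BasedOnStyles = k0rdent
```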
**styles/k0rdent/Branding.yml:**
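A sketch using Vale's `substitution` extension point; the swap map below covers a few glossary rows and should grow with the glossary:

```yaml
extends: substitution
message: "Use '%s' instead of '%s'."
level: error
ignorecase: false
# Left side matches the wrong form; right side is the correction
swap:
  "K0rdent|Kordent|kordent": k0rdent
  "kubernetes": Kubernetes
  "baremetal|bare-metal": bare metal
```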
**styles/k0rdent/AISlop.yml:**
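A sketch using Vale's `existence` extension point, seeded from the anti-pattern table above:

```yaml
extends: existence
message: "'%s' reads like AI boilerplate; be specific instead."
level: warning
ignorecase: true
tokens:
  - dive into
  - leverage
  - seamless(ly)?
  - cutting-edge
  - robust
  - it is important to note
  - in today's rapidly evolving
```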
OpenAPI Linting Stack: Vacuum + Spectral + Speakeasy
Three tools, each serving a different context:
- Vacuum (CI) — Already in `.github/workflows/openapi-lint.yml` via `pb33f/vacuum-action@v2`. Fastest linter, stricter defaults than Spectral, Go binary. Fix the current workflow: update `openapi_path` to point at the correct spec files, set `fail_on_error: true`, and add a custom ruleset.
- Spectral (local dev) — VS Code extension for real-time feedback while editing OpenAPI specs. Uses the same ruleset format as Vacuum, so `.spectral.yaml` works for both. Install the Spectral VS Code extension and point it at the shared ruleset.
- Speakeasy Linter (large specs, optional CI) — For specs that grow past ~50k lines, where Vacuum and Spectral slow down. 90+ built-in rules, very low memory. Add as an optional CI step or local tool when needed.
**Shared ruleset (`.spectral.yaml`):**
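A starting-point sketch; the rule names below are Spectral's built-in `spectral:oas` rules, and severities should be tuned to match the severity model later in this doc:

```yaml
extends: ["spectral:oas"]
rules:
  # Every operation needs a human-written description, not just a summary
  operation-description: error
  operation-operationId: error
  # Missing contact info warns rather than blocks
  info-contact: warn
```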
**Updated `.github/workflows/openapi-lint.yml`:**
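The fixed workflow might look like this. The `openapi_path` value is a placeholder for the real spec location, and the input names follow this doc's mention of `openapi_path` and `fail_on_error`; check the action's README for the exact parameter set:

```yaml
name: OpenAPI Lint
on:
  pull_request:
    paths:
      - "openapi/**"
jobs:
  vacuum:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pb33f/vacuum-action@v2
        with:
          openapi_path: openapi/spec.yaml  # placeholder: point at the real spec
          ruleset: .spectral.yaml
          fail_on_error: true
```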
markdownlint
Add [markdownlint-cli2](https://github.com/DavidAnson/markdownlint-cli2) for structural MDX checks.
**.markdownlint-cli2.jsonc:**
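A sketch of the config; the disabled rules and the `ignores` globs are assumptions to revisit against the actual repo:

```jsonc
{
  "config": {
    "default": true,
    "MD013": false,  // line length: leave long lines to prose review
    "MD033": false,  // MDX uses inline JSX/HTML freely
    "MD041": false   // frontmatter precedes the first heading
  },
  "globs": ["**/*.{md,mdx}"],
  "ignores": ["node_modules/**", "**/generated/**"]
}
```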
GitHub Actions Workflow
Create [.github/workflows/content-quality.yml](.github/workflows/content-quality.yml):
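A sketch of the workflow, assuming the official Vale action (`errata-ai/vale-action`); verify its inputs against the action's README:

```yaml
name: Content Quality
on:
  pull_request:
    paths:
      - "**/*.md"
      - "**/*.mdx"
jobs:
  vale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: errata-ai/vale-action@v2
        with:
          fail_on_error: true
  markdownlint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx markdownlint-cli2 "**/*.{md,mdx}"
```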
Severity Model
Not everything blocks a merge. This keeps guardrails from becoming a creativity straitjacket:

| Check | Severity | Blocks merge? |
|---|---|---|
| Branding violations (k0rdent spelling) | Error | Yes |
| Broken links | Error | Yes |
| Invalid MDX syntax | Error | Yes |
| Heading hierarchy | Error | Yes |
| AI slop phrases | Warning | No (flags for review) |
| Sentence length > 30 words | Suggestion | No |
| Minimizer words (“simply”) | Warning | No |
| Internal tech names in public docs | Warning | No |
Layer 3: Cursor Skill
Create a Cursor skill at `.cursor/skills/content-quality/SKILL.md` that authors invoke during writing. This does things CI can't:
- Voice consistency check: Reads the style guide and evaluates a draft against it
- Readability scoring: Flesch-Kincaid, audience-appropriateness
- Structure review: Does the doc follow the right template for its content type?
- Branding scan: Quick check of product name usage
- AI slop detection: More nuanced than regex (catches paraphrased slop too)
- Suggestions, not just flags: “This paragraph reads like AI boilerplate. Consider leading with the specific user benefit instead.”
The skill reads STYLE_GUIDE.md as context and applies it to whatever file the user is editing. It's the "quality check before you push" step.
Trigger phrases: “check content quality”, “review this doc”, “style check”, “is this ready to publish”
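A sketch of what the skill file could contain (the `name`/`description` frontmatter keys follow the common skill convention; adjust to Cursor's current format):

```md
---
name: content-quality
description: Review a draft against STYLE_GUIDE.md for voice, branding, and AI slop.
---

When invoked, read STYLE_GUIDE.md, then evaluate the file the user is editing:

1. Voice: compare against the four principles (authoritative, direct, honest, build-in-public).
2. Branding: check every product name against the glossary.
3. Slop: flag AI boilerplate, including paraphrases the Vale regexes miss.
4. Structure: confirm the doc follows the template for its content type.
5. For each flag, suggest a concrete rewrite, not just a complaint.
```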
Human-in-the-Loop Workflow
Key principle: AI drafts, tools lint, humans decide. The automated checks catch mechanical issues (spelling, branding, broken links, structure). The human review catches judgment calls (is this accurate, is this the right level of detail, does this represent us well).
Package.json Updates
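As a sketch, the convenience scripts could wire the linters above together like this (script names and the vacuum CLI invocation are assumptions):

```json
{
  "scripts": {
    "lint:prose": "vale .",
    "lint:md": "markdownlint-cli2 \"**/*.{md,mdx}\"",
    "lint:api": "vacuum lint -r .spectral.yaml openapi/*.yaml",
    "lint:content": "pnpm lint:prose && pnpm lint:md"
  }
}
```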
Add convenience scripts:
Implementation Order
Iterative; each phase is independently useful:
1. Style guide document — Write `STYLE_GUIDE.md` with voice, branding glossary, anti-patterns, and content-type rules. This is the foundation everything else references.
2. Vale setup — Install Vale, create `.vale.ini` and custom k0rdent rules. Add a `pnpm lint:prose` script. Test locally.
3. markdownlint setup — Add markdownlint-cli2, configure for MDX, exclude generated docs.
4. CI workflow — Create `content-quality.yml` that runs Vale + markdownlint + the existing link check on PRs touching content.
5. OpenAPI lint stack — Fix the Vacuum CI workflow (correct paths, `fail_on_error: true`), create the `.spectral.yaml` shared ruleset, document Spectral VS Code extension setup, note Speakeasy for large specs.
6. Cursor skill — Build the interactive quality-check skill that reads the style guide and evaluates content.
7. Iterate — Add rules as patterns emerge. Tune severity levels based on team feedback.
v2 Roadmap (Next Version)
Things to tackle after v1 is running and calibrated:
AI Prose Linter (Priority)
A non-blocking CI step that runs an LLM over content diffs to catch things regex rules can't:
- Contextual AI slop detection — Catches paraphrased boilerplate, not just exact phrases. "This innovative solution enables teams to…" is slop even if no individual word triggers a Vale rule.
- Voice drift detection — Does this paragraph sound like k0rdent? Compare against golden examples.
- Readability and audience fit — Is this API doc written for the right audience? Is the guide assuming too much prior knowledge?
- Structural coherence — Does the doc have a logical flow? Are transitions abrupt?
- “Would a human write this?” score — An overall confidence score for whether content reads as authentically authored vs. AI-generated.
Delivery surfaces:
- Claude Haiku in CI for fast, cheap pass/fail on diffs
- Cursor skill for interactive deep review (this is v1’s skill, enhanced)
- Trigger.dev task for batch quality scoring across all docs
AI-Powered RateMyOpenAPI
Build a custom "rate my API docs" tool that scores OpenAPI specs on:
- Completeness (descriptions, examples, error schemas)
- Developer experience (can someone use this without guessing?)
- Consistency (naming, patterns, envelope structure)
- k0rdent convention adherence
This could ship as a CLI (`pnpm rate-api`), a Cursor skill, or a fun public-facing web tool. Inspired by RateMyOpenAPI but tailored to k0rdent standards and powered by AI for semantic scoring beyond what Spectral/Vacuum catch.
Other v2 Ideas
- alex integration for inclusive language checking (lightweight, complementary to Vale)
- Claim grounding — Automated check that release note claims map to real commits/PRs
- Doc coverage scoring — What percentage of API endpoints have enriched docs beyond auto-generated?
- Golden example library — Curated before/after examples per content type for AI few-shot prompting