Style Guide & Quality Control System

Architecture: Three Layers

The answer to “skill or CI/CD” is both, plus a style guide document that feeds them. Each layer serves a different moment in the writing workflow:
  • Style Guide = the source of truth (human-readable, referenced by authors and AI)
  • Cursor Skill = the writing assistant (catches issues interactively, suggests improvements)
  • CI Gates = the safety net (blocks PRs that violate non-negotiable rules)

Layer 1: Style Guide Document

Create [STYLE_GUIDE.md](STYLE_GUIDE.md) at the repo root. This is the single source of truth that feeds both the skill and the CI rules. It covers:

Voice & Tone

Expand the four bullets in [content-strategy.md](content-strategy.md) into actionable guidance:
  • Authoritative but approachable. We know this space deeply. We share what we know without talking down. Write like you’re explaining something to a smart colleague who’s new to the specific topic.
  • Direct and concrete. Lead with what matters. Use active voice. Show, don’t tell. “Deploy a cluster in 3 commands” beats “k0rdent provides robust cluster deployment capabilities.”
  • Honest and specific. If something has limitations, say so. If a feature is experimental, label it. Credibility compounds.
  • Build-in-public energy. We’re not a finished product announcing from on high. We’re building something ambitious and bringing people along for the ride. It’s okay to be excited.

Anti-patterns (with examples)

This is where guardrails live without killing creativity:
| Instead of (AI slop) | Write |
| --- | --- |
| “dive into” | “look at”, “explore”, “walk through” |
| “leverage” | “use” |
| “robust” / “seamless” / “cutting-edge” | Be specific about what makes it good |
| “simply” / “just” / “easy” | Remove it. If it were easy, they wouldn’t need docs. |
| “In today’s rapidly evolving landscape…” | Delete the sentence entirely. Start with the point. |
| “It’s important to note that…” | Just say the thing. |

Branding Glossary (the terminology source of truth)

| Correct | Incorrect | Notes |
| --- | --- | --- |
| k0rdent | K0rdent, Kordent, kordent | Always lowercase k, always zero |
| k0rdent AI API | K0rdent AI Api, k0rdent api | Product names are capitalized |
| Atlas | atlas, ATLAS | k0rdent Atlas (operator console) |
| Arc | arc, ARC | k0rdent Arc (self-service portal) |
| Kubernetes | kubernetes, k8s | Always capitalize; avoid k8s in public docs |
| OpenAPI | Open API, openAPI, Openapi | One word, capital O, capital API |
| bare metal | baremetal, bare-metal | Two words, no hyphen |
| API reference | API Reference, api reference | Lowercase “reference” |
| AI infrastructure | AI Infrastructure | Lowercase unless in a title-case heading |
This glossary also becomes the input for Vale substitution rules (automated enforcement).

Content-Type-Specific Rules

Brief guidance per content type:
  • API docs: imperative voice for descriptions; real examples, not foo/bar
  • Guides: second person (“you”), task-oriented headings
  • Blog: first person plural (“we”), narrative structure
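For API docs, the rule looks like this in practice. The endpoint, parameter, and example values below are hypothetical, chosen only to illustrate imperative descriptions and real-looking examples:

```yaml
# Illustrative only — path and field names are not real k0rdent endpoints
paths:
  /clusters/{clusterId}:
    get:
      description: Return the current state of a cluster.  # imperative, not "This endpoint returns…"
      parameters:
        - name: clusterId
          in: path
          required: true
          description: Unique cluster identifier.
          schema:
            type: string
            example: "prod-us-east-1"  # real-looking value, not "foo"
```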

Formatting Standards

Heading hierarchy, code block conventions (always specify language), frontmatter requirements, image alt text rules.
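As a sketch of what those standards might require in a doc page (the exact frontmatter fields depend on the site config, so treat these as illustrative):

```mdx
---
title: Deploy a cluster
description: Create and verify a k0rdent cluster in three commands.
---

## Prerequisites

Install the CLI before you begin.
```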

Layer 2: Automated CI Quality Gates

Vale (Prose Linting)

Vale is a prose linter that enforces style rules as code. Install it in the project and create custom k0rdent rules. File structure:
.vale.ini                          # Vale config
styles/
  k0rdent/
    Branding.yml                   # Product name spellings
    Terminology.yml                # Preferred terms
    AISlop.yml                     # Overused AI phrases
    Minimizers.yml                 # "simply", "just", "easy"
    SentenceLength.yml             # Flag >30 word sentences
    InternalTech.yml               # Internal tool names in public docs
    Hedging.yml                    # Weak language ("might", "perhaps")
  config/
    vocabularies/
      k0rdent/
        accept.txt                 # Known good terms
        reject.txt                 # Banned terms
**.vale.ini:**
StylesPath = styles
MinAlertLevel = suggestion

Vocab = k0rdent

[formats]
# Vale has no native MDX support; treat .mdx as Markdown
mdx = md

[content/docs/**/*.mdx]
BasedOnStyles = Vale, k0rdent

[content/blog/**/*.mdx]
BasedOnStyles = Vale, k0rdent
Example rule — styles/k0rdent/Branding.yml:
extends: substitution
message: "Use '%s' instead of '%s' for correct branding."
level: error
ignorecase: false
swap:
  K0rdent: k0rdent
  Kordent: k0rdent
  kordent: k0rdent  
  "AI Api": "AI API"
  "Open API": "OpenAPI"
  baremetal: "bare metal"
  bare-metal: "bare metal"
Example rule — styles/k0rdent/AISlop.yml:
extends: existence
message: "'%s' is an overused AI phrase. Rewrite to be more specific."
level: warning
ignorecase: true
tokens:
  - dive into
  - deep dive
  - in today's rapidly
  - it's important to note
  - leverage
  - robust
  - seamless
  - cutting-edge
  - game-changer
  - paradigm
  - harness the power
  - revolutionize
  - streamline
  - at the end of the day
  - in the realm of
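Vale stays the source of truth in CI, but the same token list can back a quick local pre-check. A minimal sketch (the token list here is an abbreviated copy, not the canonical rule):

```python
import re

# Abbreviated mirror of styles/k0rdent/AISlop.yml — Vale is authoritative
SLOP_TOKENS = [
    "dive into", "deep dive", "leverage", "robust", "seamless",
    "cutting-edge", "it's important to note",
]

def find_slop(text: str) -> list[tuple[int, str]]:
    """Return (line_number, token) pairs for case-insensitive matches."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for token in SLOP_TOKENS:
            if re.search(re.escape(token), line, re.IGNORECASE):
                hits.append((lineno, token))
    return hits

print(find_slop("We leverage a robust control plane.\nUse three commands."))
# → [(1, 'leverage'), (1, 'robust')]
```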

OpenAPI Linting Stack: Vacuum + Spectral + Speakeasy

Three tools, each serving a different context:
  • Vacuum (CI) — Already in .github/workflows/openapi-lint.yml via pb33f/vacuum-action@v2. Fastest linter, stricter defaults than Spectral, Go binary. Fix the current workflow: update openapi_path to point at the correct spec files, set fail_on_error: true, and add a custom ruleset.
  • Spectral (local dev) — VS Code extension for real-time feedback while editing OpenAPI specs. Uses the same ruleset format as Vacuum, so .spectral.yaml works for both. Install the Spectral VS Code extension and point it at the shared ruleset.
  • Speakeasy Linter (large specs, optional CI) — For specs that grow past ~50k lines where Vacuum/Spectral slow down. 90+ built-in rules, very low memory. Add as an optional CI step or local tool when needed.
Shared ruleset (.spectral.yaml):
extends:
  - "spectral:oas"
rules:
  k0rdent-response-envelope:
    description: "All responses must use { success, data|error, meta } envelope"
    given: "$.paths[*][*].responses[*].content['application/json'].schema"
    severity: error
    then:
      function: schema
      functionOptions:
        schema:
          required: ["properties"]
          properties:
            properties:
              required: ["success", "meta"]
  operation-description:
    description: "Every operation must have a description"
    given: "$.paths[*][*]"
    severity: error
    then:
      field: description
      function: truthy
  parameter-description:
    description: "Every parameter must have a description"
    given: "$.paths[*][*].parameters[*]"
    severity: warn
    then:
      field: description
      function: truthy
  k0rdent-camelcase-fields:
    description: "All JSON fields must be camelCase"
    given: "$.paths[*][*]..properties[*]~"
    severity: error
    then:
      function: casing
      functionOptions:
        type: camel
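For reference, a response schema that satisfies the k0rdent-response-envelope rule would carry success and meta at the top level of its properties. Field details are illustrative:

```yaml
# Illustrative response that passes k0rdent-response-envelope
responses:
  "200":
    content:
      application/json:
        schema:
          type: object
          properties:
            success:
              type: boolean
            data:
              type: object
            meta:
              type: object
```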
Updated openapi-lint.yml:
name: OpenAPI Quality
on:
  pull_request:
    paths:
      - "content/docs/api-docs/**"
      - ".spectral.yaml"
  push:
    branches: [main]
jobs:
  vacuum-lint:
    name: OpenAPI linting (Vacuum)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pb33f/vacuum-action@v2
        with:
          openapi_path: "content/docs/api-docs/bundled/"
          ruleset: ".spectral.yaml"
          fail_on_error: true
          github_token: ${{ secrets.GITHUB_TOKEN }}

markdownlint

Add [markdownlint-cli2](https://github.com/DavidAnson/markdownlint-cli2) for structural MDX checks. **.markdownlint-cli2.jsonc:**
{
  "config": {
    "MD001": true,          // heading-increment (no skipping h2 to h4)
    "MD003": { "style": "atx" },
    "MD013": false,         // line-length (disabled, prettier handles this)
    "MD033": false,         // no-inline-html (MDX needs this)
    "MD041": true,          // first-line-heading
    "MD046": { "style": "fenced" },
    "MD049": { "style": "underscore" },
    "MD050": { "style": "asterisk" }
  },
  "globs": ["content/docs/**/*.mdx"],
  "ignores": ["content/docs/(generated)/**"]
}

GitHub Actions Workflow

Create [.github/workflows/content-quality.yml](.github/workflows/content-quality.yml):
name: Content Quality

on:
  pull_request:
    paths:
      - "content/**/*.mdx"
      - "content/**/*.md"
      - "styles/**"
      - ".vale.ini"

jobs:
  prose-lint:
    name: Prose quality (Vale)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: errata-ai/vale-action@v2
        with:
          files: content/
          reporter: github-pr-review  # Posts inline PR comments

  structure-lint:
    name: Structure (markdownlint)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: DavidAnson/markdownlint-cli2-action@v19
        with:
          globs: "content/docs/**/*.mdx"

  link-check:
    name: Link validation
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        with:
          version: 10
      - uses: actions/setup-node@v6
        with:
          node-version: "22"
          cache: pnpm
      - run: pnpm install --frozen-lockfile
      - run: pnpm lint

Severity Model

Not everything blocks a merge. This keeps guardrails from becoming a creativity straitjacket:

| Check | Severity | Blocks merge? |
| --- | --- | --- |
| Branding violations (k0rdent spelling) | Error | Yes |
| Broken links | Error | Yes |
| Invalid MDX syntax | Error | Yes |
| Heading hierarchy | Error | Yes |
| AI slop phrases | Warning | No (flags for review) |
| Sentence length > 30 words | Suggestion | No |
| Minimizer words (“simply”) | Warning | No |
| Internal tech names in public docs | Warning | No |
Warnings show up as PR review comments. Authors decide whether to fix or override. This is the “guardrails but flexibility” balance.

Layer 3: Cursor Skill

Create a Cursor skill at .cursor/skills/content-quality/SKILL.md that authors invoke during writing. This does things CI can’t:
  • Voice consistency check: Reads the style guide and evaluates a draft against it
  • Readability scoring: Flesch-Kincaid, audience-appropriateness
  • Structure review: Does the doc follow the right template for its content type?
  • Branding scan: Quick check of product name usage
  • AI slop detection: More nuanced than regex (catches paraphrased slop too)
  • Suggestions, not just flags: “This paragraph reads like AI boilerplate. Consider leading with the specific user benefit instead.”
The skill reads STYLE_GUIDE.md as context and applies it to whatever file the user is editing. It’s the “quality check before you push” step. Trigger phrases: “check content quality”, “review this doc”, “style check”, “is this ready to publish”
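The readability-scoring piece is conventional math the skill can apply directly. A minimal sketch of Flesch Reading Ease, using a crude vowel-group syllable counter (real scorers use a pronunciation dictionary):

```python
import re

def _syllables(word: str) -> int:
    """Crude syllable estimate: count contiguous vowel groups, minimum 1."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

score = flesch_reading_ease("Deploy a cluster in three commands. Verify it with one more.")
```

Scores above ~60 read as plain English; dense API prose tends to land much lower, which is the signal the skill surfaces.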

Human-in-the-Loop Workflow

Key principle: AI drafts, tools lint, humans decide. The automated checks catch mechanical issues (spelling, branding, broken links, structure). The human review catches judgment calls (is this accurate, is this the right level of detail, does this represent us well).

Package.json Updates

Add convenience scripts:
{
  "scripts": {
    "lint:prose": "vale content/",
    "lint:structure": "markdownlint-cli2 'content/docs/**/*.mdx'",
    "lint:all": "pnpm lint && pnpm lint:prose && pnpm lint:structure",
    "quality": "pnpm lint:all"
  }
}

Implementation Order

Iterative; each phase is independently useful:
  1. Style guide document — Write STYLE_GUIDE.md with voice, branding glossary, anti-patterns, and content-type rules. This is the foundation everything else references.
  2. Vale setup — Install Vale, create .vale.ini and custom k0rdent rules. Add pnpm lint:prose script. Test locally.
  3. markdownlint setup — Add markdownlint-cli2, configure for MDX, exclude generated docs.
  4. CI workflow — Create content-quality.yml that runs Vale + markdownlint + existing link check on PRs touching content.
  5. OpenAPI lint stack — Fix Vacuum CI workflow (correct paths, fail_on_error: true), create .spectral.yaml shared ruleset, document Spectral VS Code extension setup, note Speakeasy for large specs.
  6. Cursor skill — Build the interactive quality check skill that reads the style guide and evaluates content.
  7. Iterate — Add rules as patterns emerge. Tune severity levels based on team feedback.

v2 Roadmap (Next Version)

Things to tackle after v1 is running and calibrated:

AI Prose Linter (Priority)

A non-blocking CI step that runs an LLM over content diffs to catch things regex rules can’t:
  • Contextual AI slop detection — Catches paraphrased boilerplate, not just exact phrases. “This innovative solution enables teams to…” is slop even if no individual word triggers a Vale rule.
  • Voice drift detection — Does this paragraph sound like k0rdent? Compare against golden examples.
  • Readability and audience fit — Is this API doc written for the right audience? Is the guide assuming too much prior knowledge?
  • Structural coherence — Does the doc have a logical flow? Are transitions abrupt?
  • “Would a human write this?” score — An overall confidence score for whether content reads as authentically authored vs. AI-generated.
Implementation options:
  • Claude Haiku in CI for fast, cheap pass/fail on diffs
  • Cursor skill for interactive deep review (this is v1’s skill, enhanced)
  • Trigger.dev task for batch quality scoring across all docs
Posts as a non-blocking PR comment with a quality score and specific suggestions. Never blocks merge — it advises.

AI-Powered RateMyOpenAPI

Build a custom “rate my API docs” tool that scores OpenAPI specs on:
  • Completeness (descriptions, examples, error schemas)
  • Developer experience (can someone use this without guessing?)
  • Consistency (naming, patterns, envelope structure)
  • k0rdent convention adherence
Could be a CLI tool (pnpm rate-api), a Cursor skill, or a fun public-facing web tool. Inspired by RateMyOpenAPI but tailored to k0rdent standards and powered by AI for semantic scoring beyond what Spectral/Vacuum catch.
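One axis of the completeness score is mechanical enough to sketch now: the fraction of operations carrying a description. This is an illustrative starting point, not the scoring design; the example spec is hypothetical:

```python
def completeness_score(spec: dict) -> float:
    """Fraction of operations in an OpenAPI spec dict that have a description."""
    HTTP_METHODS = {"get", "put", "post", "delete", "patch", "options", "head"}
    ops = described = 0
    for path_item in spec.get("paths", {}).values():
        for method, op in path_item.items():
            if method not in HTTP_METHODS:  # skip parameters, summary, etc.
                continue
            ops += 1
            if op.get("description"):
                described += 1
    return described / ops if ops else 0.0

spec = {
    "paths": {
        "/clusters": {
            "get": {"description": "List clusters."},
            "post": {},
        }
    }
}
print(completeness_score(spec))
# → 0.5
```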

Other v2 Ideas

  • alex integration for inclusive language checking (lightweight, complementary to Vale)
  • Claim grounding — Automated check that release note claims map to real commits/PRs
  • Doc coverage scoring — What percentage of API endpoints have enriched docs beyond auto-generated?
  • Golden example library — Curated before/after examples per content type for AI few-shot prompting