
Architecture: @libar-dev/delivery-process

Code-Driven Documentation Generator with Codec-Based Transformation Pipeline

This document describes the architecture of the @libar-dev/delivery-process package, a documentation generator that extracts patterns from TypeScript and Gherkin sources, transforms them through a unified pipeline, and renders them as markdown via typed codecs.


  1. Executive Summary
  2. Configuration Architecture
  3. Four-Stage Pipeline
  4. Unified Transformation Architecture
  5. Codec Architecture
  6. Available Codecs
  7. Progressive Disclosure
  8. Source Systems
  9. Key Design Patterns
  10. Data Flow Diagrams
  11. Workflow Integration
  12. Programmatic Usage
  13. Extending the System
  14. Quick Reference

The @libar-dev/delivery-process package generates LLM-optimized documentation from dual sources:

  • TypeScript code with configurable JSDoc annotations (e.g., @docs-* or @libar-docs-*)
  • Gherkin feature files with matching tags

The tag prefix is configurable via presets or custom configuration (see Configuration Architecture).

| Principle | Description |
|---|---|
| Single Source of Truth | Code + .feature files are authoritative; docs are generated projections |
| Single-Pass Transformation | All derived views computed in O(n) time, not redundant O(n) per section |
| Codec-Based Rendering | Zod 4 codecs transform MasterDataset → RenderableDocument → Markdown |
| Schema-First Validation | Zod schemas define types; runtime validation at all boundaries |
| Single Read Model | MasterDataset is the sole read model for all consumers — codecs, validators, query API (ADR-006) |
| Result Monad | Explicit error handling via Result<T, E> instead of exceptions |
Four-Stage Pipeline
┌─────────────┐ ┌─────────────┐ ┌─────────────────┐ ┌─────────────┐
│ SCANNER │ → │ EXTRACTOR │ → │ TRANSFORMER │ → │ CODEC │
│ │ │ │ │ │ │ │
│ TypeScript │ │ ExtractedP- │ │ MasterDataset │ │ Renderable │
│ Gherkin │ │ attern[] │ │ (pre-computed │ │ Document │
│ Files │ │ │ │ views) │ │ → Markdown │
└─────────────┘ └─────────────┘ └─────────────────┘ └─────────────┘
┌─────────────┐
│ CONFIG │ defineConfig() → resolveProjectConfig() → ResolvedConfig
└─────────────┘

The package supports configurable tag prefixes via the Configuration API.

// delivery-process.config.ts
import { defineConfig } from '@libar-dev/delivery-process/config';

export default defineConfig({
  preset: 'libar-generic',
  sources: { typescript: ['src/**/*.ts'], features: ['specs/*.feature'] },
  output: { directory: 'docs-generated', overwrite: true },
});
// Resolved to: ResolvedConfig { instance, project, isDefault, configPath }
| Stage | Configuration Input | Effect |
|---|---|---|
| Scanner | regexBuilders.hasFileOptIn() | Detects files with configured opt-in marker |
| Scanner | regexBuilders.directivePattern | Matches tags with configured prefix |
| Extractor | registry.categories | Maps tags to category names |
| Transformer | registry | Builds MasterDataset with category indexes |
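To make the scanner's two configuration hooks concrete, here is a minimal sketch of prefix-driven regex building. It is an illustration only: the real RegexBuilders live in the package's config layer, and the exact API shape and regex details here are assumptions.

```typescript
// Sketch only — the real RegexBuilders API may differ.
interface RegexBuilders {
  hasFileOptIn(content: string): boolean; // bare opt-in marker present?
  directivePattern: RegExp;               // prefixed tags, e.g. @libar-docs-status
}

function makeRegexBuilders(tagPrefix: string): RegexBuilders {
  // Escape regex metacharacters in the configured prefix
  const escaped = tagPrefix.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  // Opt-in marker: the bare prefix NOT followed by a hyphenated suffix
  const optIn = new RegExp(`${escaped}(?!-)\\b`);
  return {
    hasFileOptIn: (content) => optIn.test(content),
    // Directive: prefix plus hyphenated tag name, capturing the tag
    directivePattern: new RegExp(`${escaped}-([\\w-]+)`, 'g'),
  };
}

const builders = makeRegexBuilders('@libar-docs');
const optedIn = builders.hasFileOptIn('/** @libar-docs */');
const match = builders.directivePattern.exec('@libar-docs-status active'); // captures 'status'
```

Swapping the prefix argument (e.g., for a custom preset) changes both patterns without touching scanner code.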
defineConfig(userConfig)
┌──────────────────────────────────────────┐
│ 1. loadProjectConfig() discovers file │
│ and validates via Zod schema │
└──────────────────────────────────────────┘
┌──────────────────────────────────────────┐
│ 2. resolveProjectConfig() │
│ - Select preset (or use default) │
│ - Apply tagPrefix/fileOptInTag/cats │
│ - Build registry + RegexBuilders │
│ - Merge stubs into TypeScript sources │
│ - Apply output defaults │
│ - Resolve generator overrides │
└──────────────────────────────────────────┘
ResolvedConfig { instance, project, isDefault, configPath }
| File | Purpose |
|---|---|
| src/config/define-config.ts | defineConfig() identity function for type-safe authoring |
| src/config/project-config.ts | DeliveryProcessProjectConfig, ResolvedConfig types |
| src/config/project-config-schema.ts | Zod validation schema, isProjectConfig() type guard |
| src/config/resolve-config.ts | resolveProjectConfig() — defaults + taxonomy resolution |
| src/config/merge-sources.ts | mergeSourcesForGenerator() — per-generator sources |
| src/config/config-loader.ts | loadProjectConfig() — file discovery + loading |
| src/config/factory.ts | createDeliveryProcess() — taxonomy factory (internal) |
| src/config/presets.ts | GENERIC_PRESET, LIBAR_GENERIC_PRESET, DDD_ES_CQRS_PRESET |

See: CONFIGURATION.md for usage examples and API reference.


The pipeline has two entry points. The orchestrator (src/generators/orchestrator.ts) runs all 10 steps end-to-end for documentation generation. The shared pipeline factory buildMasterDataset() (src/generators/pipeline/build-pipeline.ts) runs steps 1-8 and returns a Result<PipelineResult, PipelineError> for CLI consumers like process-api and validate-patterns (see Pipeline Factory).

Purpose: Discover source files and parse them into structured AST representations.

| Scanner Type | Input | Output | Key File |
|---|---|---|---|
| TypeScript | .ts files with @libar-docs | ScannedFile[] | src/scanner/pattern-scanner.ts |
| Gherkin | .feature files | ScannedGherkinFile[] | src/scanner/gherkin-scanner.ts |

TypeScript Scanning Flow:

findFilesToScan() → hasFileOptIn() → parseFileDirectives()
(glob patterns) (@libar-docs check) (AST extraction)

Gherkin Scanning Flow:

findFeatureFiles() → parseFeatureFile() → extractPatternTags()
(glob patterns) (Cucumber parser) (tag extraction)

Purpose: Convert scanned files into normalized ExtractedPattern objects.

Key Files:

  • src/extractor/doc-extractor.ts:extractPatterns() - Pattern extraction
  • src/extractor/shape-extractor.ts - Shape extraction (3 modes)

Shape Extraction Modes:

| Mode | Trigger | Behavior |
|---|---|---|
| Explicit names | @libar-docs-extract-shapes Foo, Bar | Extracts named declarations only |
| Wildcard auto-discovery | @libar-docs-extract-shapes * | Extracts all exported declarations from file |
| Declaration-level | @libar-docs-shape on individual declaration | Extracts tagged declarations (exported or not) |

Shapes now include params, returns, and throws fields (parsed from @param/@returns/@throws JSDoc tags on function shapes), and an optional group field from the @libar-docs-shape tag value. ExportInfo includes an optional signature field for function/const/class declarations.
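A rough sketch of how @param/@returns/@throws JSDoc lines could be folded into those shape fields — this is not the package's actual parser, just a simplified illustration of the extraction:

```typescript
// Simplified sketch of JSDoc tag extraction into shape fields.
interface FunctionShapeDoc {
  params: { name: string; description: string }[];
  returns?: string;
  throws: string[];
}

function parseJsDocTags(jsdoc: string): FunctionShapeDoc {
  const doc: FunctionShapeDoc = { params: [], throws: [] };
  for (const line of jsdoc.split('\n')) {
    // "@param name - description" (hyphen optional)
    const param = line.match(/@param\s+(\w+)\s+-?\s*(.*)/);
    if (param) doc.params.push({ name: param[1], description: param[2] });
    const ret = line.match(/@returns\s+(.*)/);
    if (ret) doc.returns = ret[1];
    const thr = line.match(/@throws\s+(.*)/);
    if (thr) doc.throws.push(thr[1]);
  }
  return doc;
}

const doc = parseJsDocTags(`
 * @param input - Glob patterns to scan
 * @returns Result with scanned files
 * @throws ScanError on unreadable files
`);
```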

interface ExtractedPattern {
  id: string;             // pattern-{8-char-hex}
  name: string;
  category: string;
  directive: DocDirective;
  code: string;
  source: SourceInfo;     // { file, lines: [start, end] }

  // Metadata from annotations
  patternName?: string;
  status?: PatternStatus; // roadmap|active|completed|deferred
  phase?: number;
  quarter?: string;       // Q1-2025
  release?: string;       // v0.1.0 or vNEXT
  useCases?: string[];
  uses?: string[];
  usedBy?: string[];
  dependsOn?: string[];
  enables?: string[];
  // ... 30+ additional fields
}

Dual-Source Merging:

After extraction, patterns from both sources are merged with conflict detection. Merge behavior varies by consumer: 'fatal' mode (used by process-api and orchestrator) returns an error if the same pattern name exists in both TypeScript and Gherkin; 'concatenate' mode (used by validate-patterns) falls back to concatenation on conflict, since the validator needs both sources for cross-source matching.
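The two strategies can be sketched as follows — a simplified stand-in for the real merge (which operates on full ExtractedPattern objects), shown here with a pared-down pattern shape:

```typescript
// Sketch of 'fatal' vs 'concatenate' merge behavior (simplified shapes).
type Strategy = 'fatal' | 'concatenate';
interface Pattern { name: string; code: string }
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

function mergePatterns(
  ts: Pattern[],
  gherkin: Pattern[],
  strategy: Strategy
): Result<Pattern[], string> {
  const byName = new Map(ts.map((p) => [p.name, p] as [string, Pattern]));
  for (const g of gherkin) {
    const existing = byName.get(g.name);
    if (!existing) {
      byName.set(g.name, g);
    } else if (strategy === 'fatal') {
      // process-api / orchestrator: duplicate names across sources are an error
      return { ok: false, error: `Pattern '${g.name}' defined in both sources` };
    } else {
      // validate-patterns: keep both bodies for cross-source matching
      byName.set(g.name, { name: g.name, code: existing.code + '\n' + g.code });
    }
  }
  return { ok: true, value: [...byName.values()] };
}
```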

ADR-006 established the Single Read Model Architecture: the MasterDataset is the sole read model for all consumers. The shared pipeline factory extracts the 8-step scan-extract-merge-transform pipeline into a reusable function.

Key File: src/generators/pipeline/build-pipeline.ts

Signature:

function buildMasterDataset(
  options: PipelineOptions
): Promise<Result<PipelineResult, PipelineError>>;

PipelineOptions:

| Field | Type | Description |
|---|---|---|
| input | readonly string[] | TypeScript source glob patterns |
| features | readonly string[] | Gherkin feature glob patterns |
| baseDir | string | Base directory for glob resolution |
| mergeConflictStrategy | 'fatal' \| 'concatenate' | How to handle duplicate pattern names across sources |
| exclude | readonly string[] (optional) | Glob patterns to exclude from scanning |
| workflowPath | string (optional) | Custom workflow config JSON path |
| contextInferenceRules | readonly ContextInferenceRule[] (optional) | Custom context inference rules |
| includeValidation | boolean (optional) | When false, skip validation pass (default true) |
| failOnScanErrors | boolean (optional) | When true, return error on scan failures (default false) |

PipelineResult:

| Field | Type | Description |
|---|---|---|
| dataset | RuntimeMasterDataset | The fully-computed read model |
| validation | ValidationSummary | Schema validation results for all patterns |
| warnings | readonly PipelineWarning[] | Structured non-fatal warnings |
| scanMetadata | ScanMetadata | Aggregate scan counts for reporting |

PipelineWarning:

| Field | Type | Description |
|---|---|---|
| type | 'scan' \| 'extraction' \| 'gherkin-parse' | Warning category |
| message | string | Human-readable description |
| count | number (optional) | Number of affected items |
| details | readonly PipelineWarningDetail[] (optional) | File-level diagnostics |

ScanMetadata:

| Field | Type | Description |
|---|---|---|
| scannedFileCount | number | Total files successfully scanned |
| scanErrorCount | number | Files that failed to scan |
| skippedDirectiveCount | number | Invalid directives skipped |
| gherkinErrorCount | number | Feature files that failed to parse |

PipelineError:

| Field | Type | Description |
|---|---|---|
| step | string | Pipeline step that failed (e.g., 'config', 'merge') |
| message | string | Human-readable error description |

Consumer Table:

| Consumer | mergeConflictStrategy | Error Handling |
|---|---|---|
| process-api | 'fatal' | Maps to process.exit(1) |
| validate-patterns | 'concatenate' | Falls back to concatenation |
| orchestrator | inline (equivalent to 'fatal') | Inline error reporting |

Consumer Layers (ADR-006):

| Layer | May Import | Examples |
|---|---|---|
| Pipeline Orchestration | scanner/, extractor/, pipeline/ | orchestrator.ts, pipeline setup in CLI entry points |
| Feature Consumption | MasterDataset, relationshipIndex | codecs, ProcessStateAPI, validators, query handlers |

Named Anti-Patterns (ADR-006):

| Anti-Pattern | Detection Signal |
|---|---|
| Parallel Pipeline | Feature consumer imports from scanner/ or extractor/ |
| Lossy Local Type | Local interface with subset of ExtractedPattern fields + dedicated extraction function |
| Re-derived Relationship | Building Map or Set from pattern.implementsPatterns, uses, or dependsOn in consumer code |

Purpose: Compute all derived views in a single O(n) pass.

Key File: src/generators/pipeline/transform-dataset.ts:transformToMasterDataset()

This is the key innovation of the unified pipeline. Instead of each section calling .filter() repeatedly:

// OLD: Each section filters independently - O(n) per section
const completed = patterns.filter((p) => normalizeStatus(p.status) === 'completed');
const active = patterns.filter((p) => normalizeStatus(p.status) === 'active');
const phase3 = patterns.filter((p) => p.phase === 3);

The transformer computes ALL views upfront:

// NEW: Single-pass transformation - O(n) total
const masterDataset = transformToMasterDataset({ patterns, tagRegistry, workflow });
// Sections access pre-computed views - O(1)
const completed = masterDataset.byStatus.completed;
const phase3 = masterDataset.byPhase.find((p) => p.phaseNumber === 3);

Purpose: Transform MasterDataset into RenderableDocument, then render to markdown.

Key Files:

  • src/renderable/codecs/*.ts - Document codecs
  • src/renderable/render.ts - Markdown renderer
// Codec transforms to universal intermediate format
const doc = PatternsDocumentCodec.decode(masterDataset);
// Renderer produces markdown files
const files = renderDocumentWithFiles(doc, 'PATTERNS.md');

Key File: src/validation-schemas/master-dataset.ts

The MasterDataset is the central data structure containing all pre-computed views:

interface MasterDataset {
  // ─── Raw Data ───────────────────────────────────────────────────────────
  patterns: ExtractedPattern[];
  tagRegistry: TagRegistry;

  // ─── Pre-computed Views (O(1) access) ───────────────────────────────────
  byStatus: {
    completed: ExtractedPattern[]; // status: completed
    active: ExtractedPattern[];    // status: active
    planned: ExtractedPattern[];   // status: roadmap|planned|undefined
  };
  byPhase: Array<{
    phaseNumber: number;
    phaseName?: string;            // From workflow config
    patterns: ExtractedPattern[];
    counts: StatusCounts;          // Pre-computed per-phase counts
  }>;                              // Sorted by phase number ascending
  byQuarter: Record<string, ExtractedPattern[]>; // e.g., "Q4-2024"
  byCategory: Record<string, ExtractedPattern[]>;
  bySource: {
    typescript: ExtractedPattern[]; // From .ts files
    gherkin: ExtractedPattern[];    // From .feature files
    roadmap: ExtractedPattern[];    // Has phase metadata
    prd: ExtractedPattern[];        // Has productArea/userRole/businessValue
  };

  // ─── Aggregate Statistics ───────────────────────────────────────────────
  counts: StatusCounts;  // { completed, active, planned, total }
  phaseCount: number;
  categoryCount: number;

  // ─── Relationship Index (10 fields) ─────────────────────────────────────
  relationshipIndex?: Record<
    string,
    {
      // Forward relationships (from annotations)
      uses: string[];               // @libar-docs-uses
      dependsOn: string[];          // @libar-docs-depends-on
      implementsPatterns: string[]; // @libar-docs-implements
      extendsPattern?: string;      // @libar-docs-extends
      seeAlso: string[];            // @libar-docs-see-also
      apiRef: string[];             // @libar-docs-api-ref

      // Reverse lookups (computed by transformer)
      usedBy: string[];                   // inverse of uses
      enables: string[];                  // inverse of dependsOn
      implementedBy: ImplementationRef[]; // inverse of implementsPatterns (with file paths)
      extendedBy: string[];               // inverse of extendsPattern
    }
  >;

  // ─── Architecture Data (optional) ──────────────────────────────────────
  archIndex?: {
    byRole: Record<string, ExtractedPattern[]>;
    byContext: Record<string, ExtractedPattern[]>;
    byLayer: Record<string, ExtractedPattern[]>;
    byView: Record<string, ExtractedPattern[]>;
    all: ExtractedPattern[];
  };
}

The runtime type extends MasterDataset with non-serializable workflow:

// transform-dataset.ts:50-53
interface RuntimeMasterDataset extends MasterDataset {
  readonly workflow?: LoadedWorkflow; // Contains Maps - not JSON-serializable
}

The transformToMasterDataset() function iterates over patterns exactly once, accumulating all views:

// transform-dataset.ts:98-235 (simplified)
export function transformToMasterDataset(raw: RawDataset): RuntimeMasterDataset {
  // Initialize accumulators
  const byStatus: StatusGroups = { completed: [], active: [], planned: [] };
  const byPhaseMap = new Map<number, ExtractedPattern[]>();
  const byQuarter: Record<string, ExtractedPattern[]> = {};
  const byCategoryMap = new Map<string, ExtractedPattern[]>();
  const bySource: SourceViews = { typescript: [], gherkin: [], roadmap: [], prd: [] };

  // Single pass over all patterns
  for (const pattern of patterns) {
    // Status grouping
    const status = normalizeStatus(pattern.status);
    byStatus[status].push(pattern);

    // Phase grouping (also adds to roadmap)
    if (pattern.phase !== undefined) {
      byPhaseMap.get(pattern.phase)?.push(pattern) ?? byPhaseMap.set(pattern.phase, [pattern]);
      bySource.roadmap.push(pattern);
    }

    // Quarter grouping
    if (pattern.quarter) {
      byQuarter[pattern.quarter] ??= [];
      byQuarter[pattern.quarter].push(pattern);
    }

    // Category grouping
    byCategoryMap.get(pattern.category)?.push(pattern) ?? byCategoryMap.set(pattern.category, [pattern]);

    // Source grouping (typescript vs gherkin)
    // PRD grouping (has productArea/userRole/businessValue)
    // Relationship index building
  }

  // Build sorted phase groups with counts
  const byPhase = Array.from(byPhaseMap.entries())
    .sort(([a], [b]) => a - b)
    .map(([phaseNumber, patterns]) => ({ phaseNumber, patterns, counts: computeCounts(patterns) }));

  return { patterns, tagRegistry, byStatus, byPhase, byQuarter, byCategory, bySource, counts /* ... */ };
}

The delivery-process package uses a codec-based architecture for document generation:

MasterDataset → Codec.decode() → RenderableDocument ─┬→ renderToMarkdown → Markdown Files
├→ renderToClaudeMdModule → Modular Claude.md
└→ renderToClaudeContext → Token-efficient text
| Component | Description |
|---|---|
| MasterDataset | Aggregated view of all extracted patterns with indexes by category, phase, status |
| Codec | Zod 4 codec that transforms MasterDataset into RenderableDocument |
| RenderableDocument | Universal intermediate format with typed section blocks |
| renderToMarkdown | Domain-agnostic markdown renderer for human documentation |
| renderToClaudeMdModule | Modular-claude-md renderer (H3-rooted headings, omits Mermaid/link-outs) |
| renderToClaudeContext | LLM-optimized renderer (~20-40% fewer tokens, omits Mermaid, flattens collapsibles) (legacy) |

The RenderableDocument uses a fixed vocabulary of section blocks:

| Category | Block Types |
|---|---|
| Structural | heading, paragraph, separator |
| Content | table, list, code, mermaid |
| Progressive | collapsible, link-out |

Every codec provides two exports:

// Default codec with standard options
import { PatternsDocumentCodec } from './codecs';
const doc = PatternsDocumentCodec.decode(dataset);
// Factory for custom options
import { createPatternsCodec } from './codecs';
const codec = createPatternsCodec({ generateDetailFiles: false });
const doc = codec.decode(dataset);

Note: Codec options shown below are illustrative. For complete and current options, see the source files in src/renderable/codecs/ and src/generators/types.ts.

Purpose: Pattern registry with category-based organization.

Output Files:

  • PATTERNS.md - Main index with progress summary, navigation, and pattern table
  • patterns/<category>.md - Detail files per category (when progressive disclosure enabled)

Options (PatternsCodecOptions):

| Option | Type | Default | Description |
|---|---|---|---|
| generateDetailFiles | boolean | true | Create category detail files |
| detailLevel | "summary" \| "standard" \| "detailed" | "standard" | Output verbosity |
| includeDependencyGraph | boolean | true | Render Mermaid dependency graph |
| includeUseCases | boolean | true | Show use cases section |
| filterCategories | string[] | [] | Filter to specific categories (empty = all) |
| limits.recentItems | number | 10 | Max recent items in summaries |
| limits.collapseThreshold | number | 5 | Items before collapsing |

Purpose: Product requirements documentation grouped by product area or user role.

Output Files:

  • PRODUCT-REQUIREMENTS.md - Main requirements index
  • requirements/<area-slug>.md - Detail files per product area

Options (RequirementsCodecOptions):

| Option | Type | Default | Description |
|---|---|---|---|
| generateDetailFiles | boolean | true | Create product area detail files |
| groupBy | "product-area" \| "user-role" \| "phase" | "product-area" | Primary grouping |
| filterStatus | NormalizedStatusFilter[] | [] | Filter by status (empty = all) |
| includeScenarioSteps | boolean | true | Show Given/When/Then steps |
| includeBusinessValue | boolean | true | Display business value metadata |
| includeBusinessRules | boolean | true | Show Gherkin Rule: sections |

Purpose: Development roadmap organized by phase with progress tracking.

Output Files:

  • ROADMAP.md - Main roadmap with phase navigation and quarterly timeline
  • phases/phase-<N>-<name>.md - Detail files per phase

Options (RoadmapCodecOptions):

| Option | Type | Default | Description |
|---|---|---|---|
| generateDetailFiles | boolean | true | Create phase detail files |
| filterStatus | NormalizedStatusFilter[] | [] | Filter by status |
| includeProcess | boolean | true | Show quarter, effort, team metadata |
| includeDeliverables | boolean | true | List deliverables per phase |
| filterPhases | number[] | [] | Filter to specific phases |

Purpose: Historical record of completed work organized by quarter.

Output Files:

  • COMPLETED-MILESTONES.md - Summary with completed phases and recent completions
  • milestones/<quarter>.md - Detail files per quarter (e.g., Q1-2026.md)

Purpose: Active development work currently in progress.

Output Files:

  • CURRENT-WORK.md - Summary of active phases and patterns
  • current/phase-<N>-<name>.md - Detail files for active phases

Purpose: Keep a Changelog format changelog grouped by release version.

Output Files:

  • CHANGELOG.md - Changelog with [vNEXT], [v0.1.0] sections

Options (ChangelogCodecOptions):

| Option | Type | Default | Description |
|---|---|---|---|
| includeUnreleased | boolean | true | Include unreleased section |
| includeLinks | boolean | true | Include links |
| categoryMapping | Record<string, string> | {} | Map categories to changelog types |

Purpose: Current session context for AI agents and developers.

Output Files:

  • SESSION-CONTEXT.md - Session status, active work, current phase focus
  • sessions/phase-<N>-<name>.md - Detail files for incomplete phases

Purpose: Aggregate view of all incomplete work across phases.

Output Files:

  • REMAINING-WORK.md - Summary by phase, priority breakdown, next actionable
  • remaining/phase-<N>-<name>.md - Detail files per incomplete phase

Options (RemainingWorkCodecOptions):

| Option | Type | Default | Description |
|---|---|---|---|
| includeIncomplete | boolean | true | Include planned items |
| includeBlocked | boolean | true | Show blocked items analysis |
| includeNextActionable | boolean | true | Next actionable items section |
| maxNextActionable | number | 5 | Max items in next actionable |
| sortBy | "phase" \| "priority" \| "effort" \| "quarter" | "phase" | Sort order |
| groupPlannedBy | "quarter" \| "priority" \| "level" \| "none" | "none" | Group planned items |

Purpose: Pre-planning questions and Definition of Done validation.

Output Files: PLANNING-CHECKLIST.md

Purpose: Implementation plans for coding sessions.

Output Files: SESSION-PLAN.md

Purpose: Retrospective discoveries for roadmap refinement.

Output Files: SESSION-FINDINGS.md

Finding Sources:

  • pattern.discoveredGaps - Gap findings
  • pattern.discoveredImprovements - Improvement suggestions
  • pattern.discoveredRisks / pattern.risk - Risk findings
  • pattern.discoveredLearnings - Learned insights

Purpose: Architecture Decision Records extracted from patterns with @libar-docs-adr tags.

Output Files:

  • DECISIONS.md - ADR index with summary and grouping
  • decisions/<category-slug>.md - Detail files per category

Purpose: PR-scoped view filtered by changed files or release version.

Output Files: working/PR-CHANGES.md

Purpose: Timeline to behavior file coverage report.

Output Files: TRACEABILITY.md

Purpose: Project architecture and status overview.

Output Files: OVERVIEW.md

Purpose: Business rules documentation organized by product area, phase, and feature. Extracts domain constraints from Gherkin Rule: blocks.

Output Files:

  • BUSINESS-RULES.md - Main index with statistics and all rules

Options (BusinessRulesCodecOptions extends BaseCodecOptions):

| Option | Type | Default | Description |
|---|---|---|---|
| groupBy | "domain" \| "phase" \| "domain-then-phase" | "domain-then-phase" | Primary grouping strategy |
| includeCodeExamples | boolean | false | Include code examples from DocStrings |
| includeTables | boolean | true | Include markdown tables from descriptions |
| includeRationale | boolean | true | Include rationale section per rule |
| filterDomains | string[] | [] | Filter by domain categories (empty = all) |
| filterPhases | number[] | [] | Filter by phases (empty = all) |
| onlyWithInvariants | boolean | false | Show only rules with explicit invariants |
| includeSource | boolean | true | Include source feature file link |
| includeVerifiedBy | boolean | true | Include "Verified by" scenario links |
| maxDescriptionLength | number | 150 | Max description length in standard mode |
| excludeSourcePaths | string[] | [] | Exclude patterns by source path prefix |

Purpose: Architecture diagrams (Mermaid) generated from source annotations. Supports component and layered views.

Output Files:

  • ARCHITECTURE.md (generated) - Architecture diagrams with component inventory

Options (ArchitectureCodecOptions extends BaseCodecOptions):

| Option | Type | Default | Description |
|---|---|---|---|
| diagramType | "component" \| "layered" | "component" | Type of diagram to generate |
| includeInventory | boolean | true | Include component inventory table |
| includeLegend | boolean | true | Include legend for arrow styles |
| filterContexts | string[] | [] | Filter to specific contexts (empty = all) |

Purpose: Taxonomy reference documentation with tag definitions, preset comparison, and format type reference.

Output Files:

  • TAXONOMY.md - Main taxonomy reference
  • taxonomy/*.md - Detail files per tag domain

Options (TaxonomyCodecOptions extends BaseCodecOptions):

| Option | Type | Default | Description |
|---|---|---|---|
| includePresets | boolean | true | Include preset comparison table |
| includeFormatTypes | boolean | true | Include format type reference |
| includeArchDiagram | boolean | true | Include architecture diagram |
| groupByDomain | boolean | true | Group metadata tags by domain |

Purpose: Process Guard validation rules reference with FSM diagrams and protection level matrix.

Output Files:

  • VALIDATION-RULES.md - Main validation rules reference
  • validation/*.md - Detail files per rule category

Options (ValidationRulesCodecOptions extends BaseCodecOptions):

| Option | Type | Default | Description |
|---|---|---|---|
| includeFSMDiagram | boolean | true | Include FSM state diagram |
| includeCLIUsage | boolean | true | Include CLI usage section |
| includeEscapeHatches | boolean | true | Include escape hatches section |
| includeProtectionMatrix | boolean | true | Include protection levels matrix |

Purpose: Scoped reference documentation assembling four content layers into a single document.

Output Files:

  • Configured per-instance (e.g., docs/REFERENCE-SAMPLE.md, _claude-md/architecture/reference-sample.md)

4-Layer Composition (in order):

  1. Convention content — Extracted from @libar-docs-convention-tagged patterns (rules, invariants, tables)
  2. Scoped diagrams — Mermaid diagrams filtered by archContext, archLayer, patterns, or archView
  3. TypeScript shapes — API surfaces from shapeSources globs or shapeSelectors (declaration-level filtering)
  4. Behavior content — Gherkin-sourced patterns from behaviorCategories

Diagram Types (via DiagramScope.diagramType):

| Type | Description |
|---|---|
| graph (default) | Flowchart with subgraphs by archContext, custom node shapes |
| sequenceDiagram | Sequence diagram with typed messages between participants |
| stateDiagram-v2 | State diagram with transitions from dependsOn relationships |
| C4Context | C4 context diagram with boundaries, systems, and relationships |
| classDiagram | Class diagram with <<archRole>> stereotypes and typed arrows |

Key Options (ReferenceDocConfig):

| Option | Type | Description |
|---|---|---|
| diagramScope | DiagramScope | Single diagram configuration |
| diagramScopes | DiagramScope[] | Multiple diagrams (takes precedence) |
| shapeSources | string[] | Glob patterns for shape extraction |
| shapeSelectors | ShapeSelector[] | Fine-grained declaration-level shape filtering |
| behaviorCategories | string[] | Category tags for behavior pattern content |
| conventionTags | string[] | Convention tag values to include |

ShapeSelector Variants:

| Variant | Example | Behavior |
|---|---|---|
| { group: string } | { group: "api-types" } | Match shapes by group tag |
| { source, names } | { source: "src/types.ts", names: ["Config"] } | Named shapes from file |
| { source } | { source: "src/**/*.ts" } | All shapes from glob |

Purpose: Assembles documents from multiple child codecs into a single RenderableDocument.

Key Exports:

  • createCompositeCodec(codecs, options) — Factory that decodes each child codec against the same MasterDataset and composes their outputs
  • composeDocuments(documents, options) — Pure document-level composition (concatenates sections, merges additionalFiles with last-wins semantics)

Options (CompositeCodecOptions):

| Option | Type | Default | Description |
|---|---|---|---|
| title | string | | Document title |
| purpose | string | | Document purpose for frontmatter |
| separateSections | boolean | true | Insert separator blocks between codecs |
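The last-wins semantics of additionalFiles merging can be sketched with a pared-down document shape — the real RenderableDocument carries typed section blocks, so this is illustrative only:

```typescript
// Sketch of document-level composition with last-wins additionalFiles merging.
interface Doc {
  sections: string[];
  additionalFiles: Record<string, string>; // path -> content
}

function composeDocs(docs: Doc[], separateSections = true): Doc {
  const out: Doc = { sections: [], additionalFiles: {} };
  docs.forEach((doc, i) => {
    if (separateSections && i > 0) out.sections.push('---'); // separator block
    out.sections.push(...doc.sections);
    // Later codecs overwrite earlier ones on path collision (last wins)
    Object.assign(out.additionalFiles, doc.additionalFiles);
  });
  return out;
}

const merged = composeDocs([
  { sections: ['Patterns'], additionalFiles: { 'patterns/core.md': 'v1' } },
  { sections: ['Roadmap'], additionalFiles: { 'patterns/core.md': 'v2' } },
]);
// merged keeps the 'v2' content for patterns/core.md
```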

Progressive disclosure splits large documents into a main index plus detail files. This improves readability and enables focused navigation.

  1. Main document contains summaries and navigation links
  2. Detail files contain full information for each grouping
  3. link-out blocks in main doc point to detail files
  4. additionalFiles in RenderableDocument specifies detail paths
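The four steps above can be sketched with a simplified document shape (field names here follow the description but are assumptions, not the package's exact types):

```typescript
// Sketch: a main document whose link-out blocks point at declared detail files.
interface LinkOutBlock { type: 'link-out'; label: string; path: string }
interface RenderableDocLite {
  title: string;
  blocks: LinkOutBlock[]; // main doc: summaries + navigation
  additionalFiles: { path: string; content: string }[]; // detail files
}

const doc: RenderableDocLite = {
  title: 'PATTERNS',
  blocks: [{ type: 'link-out', label: 'Core patterns', path: 'patterns/core.md' }],
  additionalFiles: [{ path: 'patterns/core.md', content: '# Core patterns ...' }],
};

// Every link-out in the main doc should resolve to a declared detail file
const declared = new Set(doc.additionalFiles.map((f) => f.path));
const resolved = doc.blocks.every((b) => declared.has(b.path));
```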
| Codec | Split By | Detail Path Pattern |
|---|---|---|
| patterns | Category | patterns/<category>.md |
| roadmap | Phase | phases/phase-<N>-<name>.md |
| milestones | Quarter | milestones/<quarter>.md |
| current | Active Phase | current/phase-<N>-<name>.md |
| requirements | Product Area | requirements/<area-slug>.md |
| session | Incomplete Phase | sessions/phase-<N>-<name>.md |
| remaining | Incomplete Phase | remaining/phase-<N>-<name>.md |
| adrs | Category (≥ threshold) | decisions/<category-slug>.md |
| taxonomy | Tag Domain | taxonomy/<domain>.md |
| validation-rules | Rule Category | validation/<category>.md |
| pr-changes | None | Single file only |

All codecs accept generateDetailFiles: false to produce compact single-file output:

const codec = createPatternsCodec({ generateDetailFiles: false });
// Only produces PATTERNS.md, no patterns/*.md files

The detailLevel option controls output verbosity:

| Value | Behavior |
|---|---|
| "summary" | Minimal output, key metrics only |
| "standard" | Default with all sections |
| "detailed" | Maximum detail, all optional sections |

Key Files:

  • src/scanner/pattern-scanner.ts - File discovery and opt-in detection
  • src/scanner/ast-parser.ts - TypeScript AST parsing

Note: The scanner uses RegexBuilders from configuration to detect tags. The examples below use @libar-docs-* (DDD_ES_CQRS_PRESET). For other prefixes, substitute accordingly.

Annotation Format:

/**
 * @libar-docs                            // Required opt-in (file level)
 * @libar-docs-core @libar-docs-infra     // Category tags
 * @libar-docs-pattern MyPatternName      // Pattern name
 * @libar-docs-status completed           // Status: roadmap|active|completed|deferred
 * @libar-docs-phase 14                   // Roadmap phase number
 * @libar-docs-uses OtherPattern, Another // Dependencies (CSV)
 * @libar-docs-usecase "When doing X"     // Use cases (repeatable)
 * @libar-docs-convention fsm-rules       // Convention tag (CSV, links to decisions)
 * @libar-docs-extract-shapes *           // Auto-shape discovery (wildcard = all exports)
 *
 * ## Pattern Description                 // Markdown description
 *
 * Detailed description of the pattern...
 */

Declaration-Level Shape Tagging:

Individual declarations can be tagged with @libar-docs-shape in their JSDoc, without requiring a file-level @libar-docs-extract-shapes tag:

/**
 * @libar-docs-shape api-types
 * Configuration for the delivery process pipeline.
 */
export interface PipelineConfig { ... }

The optional value (e.g., api-types) sets the shape’s group field, enabling ShapeSelector filtering by group in reference codecs.

Tag Registry: Defines categories, priorities, and metadata formats. Source: src/taxonomy/ TypeScript modules.

Key Files:

  • src/scanner/gherkin-scanner.ts - Feature file discovery
  • src/scanner/gherkin-ast-parser.ts - Cucumber Gherkin parsing

Annotation Format:

@libar-docs-pattern:MyPattern @libar-docs-phase:15 @libar-docs-status:roadmap
@libar-docs-quarter:Q1-2025 @libar-docs-effort:2w @libar-docs-team:platform
@libar-docs-depends-on:OtherPattern @libar-docs-enables:NextPattern
@libar-docs-product-area:Generators @libar-docs-user-role:Developer
@libar-docs-release:v0.1.0
Feature: My Pattern Implementation

  Background:
    Given the following deliverables:
      | Deliverable         | Status    |
      | Core implementation | completed |
      | Tests               | active    |

  @acceptance-criteria
  Scenario: Basic usage
    When user does X
    Then Y happens

Data-Driven Tag Extraction:

The Gherkin parser uses a data-driven approach — a TAG_LOOKUP map is built from buildRegistry().metadataTags at module load. For each tag, the registry definition provides: format (number/enum/csv/flag/value/quoted-value), optional transforms (hyphenToSpace, padAdr, stripQuotes), and the target metadataKey. Adding new Gherkin tags requires only a registry definition — no parser code changes.
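A condensed sketch of that lookup-driven parse — the real machinery (full format set, transforms like hyphenToSpace and padAdr) is richer, and the shapes below are simplified assumptions:

```typescript
// Sketch: registry-driven Gherkin tag extraction via a TAG_LOOKUP map.
type TagFormat = 'number' | 'enum' | 'value';
interface TagDef { tag: string; format: TagFormat; metadataKey: string }

const metadataTags: TagDef[] = [
  { tag: 'phase', format: 'number', metadataKey: 'phase' },
  { tag: 'status', format: 'enum', metadataKey: 'status' },
  { tag: 'quarter', format: 'value', metadataKey: 'quarter' },
];
// Built once at module load from the registry definitions
const TAG_LOOKUP = new Map(metadataTags.map((d) => [d.tag, d] as [string, TagDef]));

function applyGherkinTag(raw: string, prefix = '@libar-docs-'): [string, unknown] | undefined {
  if (!raw.startsWith(prefix)) return undefined;
  const [name, value] = raw.slice(prefix.length).split(':');
  const def = TAG_LOOKUP.get(name);
  if (!def || value === undefined) return undefined;
  // The registry's format decides how the raw value is coerced
  return [def.metadataKey, def.format === 'number' ? Number(value) : value];
}

const parsed = applyGherkinTag('@libar-docs-phase:15');
```

Adding a new tag means appending one entry to metadataTags; the parse loop never changes.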

Tag Mapping:

| Gherkin Tag | ExtractedPattern Field |
|---|---|
| @libar-docs-pattern:Name | patternName |
| @libar-docs-phase:N | phase |
| @libar-docs-status:* | status |
| @libar-docs-quarter:* | quarter |
| @libar-docs-release:* | release |
| @libar-docs-depends-on:* | dependsOn |
| @libar-docs-product-area:* | productArea |
| @libar-docs-convention:* | convention |
| @libar-docs-discovered-gap:* | discoveredGaps |

All codecs normalize status to three canonical values:

| Input Status | Normalized To |
|---|---|
| "completed" | "completed" |
| "active" | "active" |
| "roadmap", "deferred", or undefined | "planned" |

All operations return Result<T, E> for explicit error handling:

// types/result.ts
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

// Usage
const result = await scanPatterns(options);
if (result.ok) {
  const { files } = result.value;
} else {
  console.error(result.error); // Explicit error handling
}

Benefits:

  • No exception swallowing
  • Partial success scenarios supported
  • Type-safe error handling at boundaries

Types are defined as Zod schemas first, TypeScript types inferred:

// src/validation-schemas/extracted-pattern.ts
export const ExtractedPatternSchema = z
  .object({
    id: PatternIdSchema,
    name: z.string().min(1),
    category: CategoryNameSchema,
    status: PatternStatusSchema.optional(),
    phase: z.number().int().positive().optional(),
    // ... 30+ fields
  })
  .strict();

export type ExtractedPattern = z.infer<typeof ExtractedPatternSchema>;

Benefits:

  • Runtime validation at all boundaries
  • Type inference from schemas (single source of truth)
  • Codec support for transformations

Data-driven configuration for pattern categorization:

```jsonc
// Generated from TypeScript taxonomy (src/taxonomy/)
{
  "categories": [
    { "tag": "core", "domain": "Core", "priority": 1, "description": "Core patterns" },
    { "tag": "scanner", "domain": "Scanner", "priority": 10, "aliases": ["scan"] },
    { "tag": "generator", "domain": "Generator", "priority": 20, "aliases": ["gen"] }
  ],
  "metadataTags": [
    { "tag": "status", "format": "enum", "values": ["roadmap", "active", "completed", "deferred"] },
    { "tag": "phase", "format": "number" },
    { "tag": "release", "format": "value" },
    { "tag": "usecase", "format": "quoted-value", "repeatable": true }
  ]
}
```

Category Inference Algorithm:

  1. Extract tag parts (e.g., `@libar-docs-core-utils` → `["core", "utils"]`)
  2. Find matching categories in the registry (including aliases)
  3. Select the highest-priority match (lowest number)
  4. Fall back to `"uncategorized"` when nothing matches
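The steps above can be sketched as follows (the registry entries and the `inferCategory` helper are illustrative, not the package's actual implementation):

```typescript
interface CategoryDef {
  tag: string;
  priority: number; // lower number = higher priority
  aliases?: string[];
}

// Illustrative registry entries — not the package's real taxonomy.
const categories: CategoryDef[] = [
  { tag: 'core', priority: 1 },
  { tag: 'scanner', priority: 10, aliases: ['scan'] },
];

function inferCategory(fullTag: string, prefix = '@libar-docs-'): string {
  // 1. Extract tag parts: "@libar-docs-core-utils" → ["core", "utils"]
  const parts = fullTag.slice(prefix.length).split('-');
  // 2. Find matching categories (including aliases)
  const matches = categories.filter((c) =>
    parts.some((p) => p === c.tag || c.aliases?.includes(p)),
  );
  // 3. Select highest priority (lowest number); 4. fall back if none match
  if (matches.length === 0) return 'uncategorized';
  return matches.sort((a, b) => a.priority - b.priority)[0].tag;
}

console.log(inferCategory('@libar-docs-core-utils')); // "core"
```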

┌─────────────────────────────────────────────────────────────────────────────────┐
│ ORCHESTRATOR │
│ │
│ ┌─────────────────────────────────────────────────────────────────────────────┐│
│ │ Step 1: Load Tag Registry ││
│ │ buildRegistry() → TagRegistry ││
│ └─────────────────────────────────────────────────────────────────────────────┘│
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────────────┐│
│ │ Step 2-3: Scan TypeScript Sources ││
│ │ scanPatterns() → extractPatterns() → ExtractedPattern[] ││
│ └─────────────────────────────────────────────────────────────────────────────┘│
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────────────┐│
│ │ Step 4-5: Scan Gherkin Sources ││
│ │ scanGherkinFiles() → extractPatternsFromGherkin() ││
│ └─────────────────────────────────────────────────────────────────────────────┘│
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────────────┐│
│ │ Step 6: Merge Patterns (with conflict detection) ││
│ │ mergePatterns(tsPatterns, gherkinPatterns) ││
│ └─────────────────────────────────────────────────────────────────────────────┘│
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────────────┐│
│ │ Step 7: Compute Hierarchy Children ││
│ │ computeHierarchyChildren() → patterns with children[] populated ││
│ └─────────────────────────────────────────────────────────────────────────────┘│
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────────────┐│
│ │ Step 8: Transform to MasterDataset (SINGLE PASS) ││
│ │ transformToMasterDataset({ patterns, tagRegistry, workflow }) ││
│ │ ││
│ │ Computes: byStatus, byPhase, byQuarter, byCategory, bySource, ││
│ │ counts, phaseCount, categoryCount, relationshipIndex ││
│ └─────────────────────────────────────────────────────────────────────────────┘│
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────────────┐│
│ │ Step 9: Run Codecs ││
│ │ for each generator: ││
│ │ doc = Codec.decode(masterDataset) ││
│ │ files = renderDocumentWithFiles(doc, outputPath) ││
│ └─────────────────────────────────────────────────────────────────────────────┘│
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────────────┐│
│ │ Step 10: Write Output Files ││
│ │ fs.writeFile() for each OutputFile ││
│ └─────────────────────────────────────────────────────────────────────────────┘│
│ │
└─────────────────────────────────────────────────────────────────────────────────┘

Steps 1-8 are also available via buildMasterDataset() from src/generators/pipeline/build-pipeline.ts. The orchestrator adds Steps 9-10 (codec execution and file writing).

buildMasterDataset(options)
Steps 1-8 (scan → extract → merge → transform)
Result<PipelineResult, PipelineError>
├── process-api CLI (mergeConflictStrategy: 'fatal')
│ └── query handlers consume dataset
├── validate-patterns CLI (mergeConflictStrategy: 'concatenate')
│ └── cross-source validation via relationshipIndex
└── orchestrator (inline pipeline, adds Steps 9-10)
├── Step 9: Codec execution → RenderableDocument[]
└── Step 10: File writing → OutputFile[]
┌─────────────────────────────────────┐
│ MasterDataset │
│ │
│ patterns: ExtractedPattern[] │
│ tagRegistry: TagRegistry │
└─────────────────┬───────────────────┘
┌───────────────────────────────┼───────────────────────────────┐
│ │ │
▼ ▼ ▼
┌─────────────────────┐ ┌─────────────────────┐ ┌─────────────────────┐
│ byStatus │ │ byPhase │ │ byQuarter │
│ │ │ │ │ │
│ .completed[] │ │ [0] phaseNumber: 1 │ │ "Q4-2024": [...] │
│ .active[] │ │ patterns[] │ │ "Q1-2025": [...] │
│ .planned[] │ │ counts │ │ "Q2-2025": [...] │
└─────────────────────┘ │ │ └─────────────────────┘
│ [1] phaseNumber: 14 │
┌─────────────────│ patterns[] │───────────────────┐
│ │ counts │ │
▼ └─────────────────────┘ ▼
┌─────────────────────┐ ┌─────────────────────┐
│ byCategory │ │ bySource │
│ │ │ │
│ "core": [...] │ │ .typescript[] │
│ "scanner": [...] │ │ .gherkin[] │
│ "generator": [...] │ │ .roadmap[] │
└─────────────────────┘ │ .prd[] │
└─────────────────────┘
│ │
└───────────────────────┬───────────────────────────────┘
┌─────────────────────────────┐
│ Aggregate Statistics │
│ │
│ counts: { completed: 45, │
│ active: 12, │
│ planned: 38, │
│ total: 95 } │
│ │
│ phaseCount: 15 │
│ categoryCount: 9 │
└─────────────────────────────┘
┌─────────────────────────────┐
│ MasterDataset │
└──────────────┬──────────────┘
┌──────────────────────────┼──────────────────────────┐
│ │ │
▼ ▼ ▼
┌───────────────────┐ ┌───────────────────┐ ┌───────────────────┐
│ PatternsCodec │ │ RoadmapCodec │ │ SessionCodec │
│ .decode() │ │ .decode() │ │ .decode() │
└─────────┬─────────┘ └─────────┬─────────┘ └─────────┬─────────┘
│ │ │
▼ ▼ ▼
┌───────────────────┐ ┌───────────────────┐ ┌───────────────────┐
│RenderableDocument │ │RenderableDocument │ │RenderableDocument │
│ │ │ │ │ │
│ title: "Patterns" │ │ title: "Roadmap" │ │ title: "Session" │
│ sections: [ │ │ sections: [ │ │ sections: [ │
│ heading(...), │ │ heading(...), │ │ heading(...), │
│ table(...), │ │ list(...), │ │ paragraph(...), │
│ link-out(...) │ │ mermaid(...) │ │ collapsible() │
│ ] │ │ ] │ │ ] │
│ │ │ │ │ │
│ additionalFiles: │ │ additionalFiles: │ │ additionalFiles: │
│ { "patterns/ │ │ { "phases/ │ │ { "sessions/ │
│ core.md": ... }│ │ phase-14.md" } │ │ phase-15.md" } │
└───────────────────┘ └───────────────────┘ └───────────────────┘
│ │ │
└───────────────────────┼───────────────────────┘
┌─────────────────────────────┐
│ renderToMarkdown() │
│ │
│ Traverses blocks: │
│ heading → ## Title │
│ table → | col | col | │
│ list → - item │
│ code → ```lang │
│ mermaid → ```mermaid │
│ link-out → [See ...](path)│
└─────────────────────────────┘
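The block-to-markdown mapping above can be sketched as a small traversal (a toy stand-in with three block kinds — not the package's actual renderer, which also handles tables, code, mermaid, and link-out blocks):

```typescript
// Illustrative subset of the renderable block vocabulary.
type Block =
  | { kind: 'heading'; level: number; text: string }
  | { kind: 'paragraph'; text: string }
  | { kind: 'list'; items: string[] };

// Traverse blocks and emit the corresponding markdown for each kind.
function renderBlocks(blocks: Block[]): string {
  return blocks
    .map((b) => {
      switch (b.kind) {
        case 'heading':
          return `${'#'.repeat(b.level)} ${b.text}`;
        case 'paragraph':
          return b.text;
        case 'list':
          return b.items.map((item) => `- ${item}`).join('\n');
      }
    })
    .join('\n\n');
}

console.log(renderBlocks([
  { kind: 'heading', level: 2, text: 'Patterns' },
  { kind: 'list', items: ['core', 'scanner'] },
]));
```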

Use planning codecs to prepare for implementation:

```typescript
import { createSessionPlanCodec, createPlanningChecklistCodec } from '@libar-dev/delivery-process';

// Generate planning documents
const planCodec = createSessionPlanCodec({
  statusFilter: ['planned'],
  includeAcceptanceCriteria: true,
});
const checklistCodec = createPlanningChecklistCodec({
  forActivePhases: false,
  forNextActionable: true,
});
```

Output documents:

  • SESSION-PLAN.md - What to implement
  • PLANNING-CHECKLIST.md - Pre-flight verification

Use session context and PR changes for active development:

```typescript
import { createSessionContextCodec, createPrChangesCodec } from '@libar-dev/delivery-process';

// Current session context
const sessionCodec = createSessionContextCodec({
  includeAcceptanceCriteria: true,
  includeDependencies: true,
});

// PR-scoped changes
const prCodec = createPrChangesCodec({
  changedFiles: getChangedFiles(), // from git
  includeReviewChecklist: true,
});
```

Output documents:

  • SESSION-CONTEXT.md - Current focus and blocked items
  • working/PR-CHANGES.md - PR review context

Use milestone and changelog codecs for release documentation:

```typescript
import { createMilestonesCodec, createChangelogCodec } from '@libar-dev/delivery-process';

// Quarter-filtered milestones
const milestonesCodec = createMilestonesCodec({
  filterQuarters: ['Q1-2026'],
});

// Changelog with release tagging
const changelogCodec = createChangelogCodec({
  includeUnreleased: false,
});
```

Output documents:

  • COMPLETED-MILESTONES.md - What shipped
  • CHANGELOG.md - Release notes

For AI agents or session handoffs:

```typescript
import {
  createSessionContextCodec,
  createRemainingWorkCodec,
  createCurrentWorkCodec,
} from '@libar-dev/delivery-process';

// Full session context bundle
const sessionCodec = createSessionContextCodec({
  includeHandoffContext: true,
  includeRelatedPatterns: true,
});
const remainingCodec = createRemainingWorkCodec({
  includeNextActionable: true,
  maxNextActionable: 10,
  groupPlannedBy: 'priority',
});
const currentCodec = createCurrentWorkCodec({
  includeDeliverables: true,
  includeProcess: true,
});
```

Output documents:

  • SESSION-CONTEXT.md - Where we are
  • REMAINING-WORK.md - What’s left
  • CURRENT-WORK.md - What’s in progress

```typescript
import { createPatternsCodec, type MasterDataset } from '@libar-dev/delivery-process';
import { renderToMarkdown } from '@libar-dev/delivery-process/renderable';

// Create custom codec
const codec = createPatternsCodec({
  filterCategories: ['core'],
  generateDetailFiles: false,
});

// Transform dataset
const document = codec.decode(masterDataset);

// Render to markdown
const markdown = renderToMarkdown(document);
```

```typescript
import { generateDocument, type DocumentType } from '@libar-dev/delivery-process/renderable';

// Generate with default options
const files = generateDocument('patterns', masterDataset);

// files is OutputFile[]
for (const file of files) {
  console.log(`${file.path}: ${file.content.length} bytes`);
}
```

The RenderableDocument includes detail files in additionalFiles:

```typescript
const document = PatternsDocumentCodec.decode(dataset);

// Main content
console.log(document.title); // "Pattern Registry"
console.log(document.sections.length);

// Detail files (for progressive disclosure)
if (document.additionalFiles) {
  for (const [path, subDoc] of Object.entries(document.additionalFiles)) {
    console.log(`Detail file: ${path}`);
    console.log(`  Title: ${subDoc.title}`);
  }
}
```

```typescript
import { z } from 'zod';
import { MasterDatasetSchema, type MasterDataset } from '../validation-schemas/master-dataset';
import { type RenderableDocument, document, heading, paragraph } from '../renderable/schema';
import { RenderableDocumentOutputSchema } from '../renderable/codecs/shared-schema';

// Define options
interface MyCodecOptions {
  includeCustomSection?: boolean;
}

// Create factory
export function createMyCodec(options?: MyCodecOptions) {
  const opts = { includeCustomSection: true, ...options };
  return z.codec(MasterDatasetSchema, RenderableDocumentOutputSchema, {
    decode: (dataset: MasterDataset): RenderableDocument => {
      const sections = [
        heading(2, 'Summary'),
        paragraph(`Total patterns: ${dataset.counts.total}`),
      ];
      if (opts.includeCustomSection) {
        sections.push(heading(2, 'Custom Section'));
        sections.push(paragraph('Custom content here'));
      }
      return document('My Custom Document', sections, {
        purpose: 'Custom document purpose',
      });
    },
    encode: () => {
      throw new Error('MyCodec is decode-only');
    },
  });
}
```
```typescript
import { generatorRegistry } from '@libar-dev/delivery-process/generators';
import { createCodecGenerator } from '@libar-dev/delivery-process/generators/codec-based';

// Register if using existing document type
generatorRegistry.register(createCodecGenerator('my-patterns', 'patterns'));

// Or create a custom generator class for a new codec
// (DocumentGenerator and renderDocumentWithFiles imports omitted for brevity)
class MyCustomGenerator implements DocumentGenerator {
  readonly name = 'my-custom';
  readonly description = 'My custom generator';

  generate(patterns, context) {
    const codec = createMyCodec();
    const doc = codec.decode(context.masterDataset);
    const files = renderDocumentWithFiles(doc, 'MY-CUSTOM.md');
    return Promise.resolve({ files });
  }
}

generatorRegistry.register(new MyCustomGenerator());
```

| Codec | Generator Name | CLI Flag |
| --- | --- | --- |
| `PatternsDocumentCodec` | `patterns` | `-g patterns` |
| `RoadmapDocumentCodec` | `roadmap` | `-g roadmap` |
| `CompletedMilestonesCodec` | `milestones` | `-g milestones` |
| `CurrentWorkCodec` | `current` | `-g current` |
| `RequirementsDocumentCodec` | `requirements` | `-g requirements` |
| `SessionContextCodec` | `session` | `-g session` |
| `RemainingWorkCodec` | `remaining` | `-g remaining` |
| `PrChangesCodec` | `pr-changes` | `-g pr-changes` |
| `AdrDocumentCodec` | `adrs` | `-g adrs` |
| `PlanningChecklistCodec` | `planning-checklist` | `-g planning-checklist` |
| `SessionPlanCodec` | `session-plan` | `-g session-plan` |
| `SessionFindingsCodec` | `session-findings` | `-g session-findings` |
| `ChangelogCodec` | `changelog` | `-g changelog` |
| `TraceabilityCodec` | `traceability` | `-g traceability` |
| `OverviewCodec` | `overview-rdm` | `-g overview-rdm` |
| `BusinessRulesCodec` | `business-rules` | `-g business-rules` |
| `ArchitectureDocumentCodec` | `architecture` | `-g architecture` |
| `TaxonomyDocumentCodec` | `taxonomy` | `-g taxonomy` |
| `ValidationRulesCodec` | `validation-rules` | `-g validation-rules` |
| `ReferenceCodec` | `reference-sample` | `-g reference-sample` |
| `DecisionDocGenerator` | `doc-from-decision` | `-g doc-from-decision` |
```sh
# Single generator
generate-docs -i "src/**/*.ts" -g patterns -o docs

# Multiple generators
generate-docs -i "src/**/*.ts" -g patterns -g roadmap -g session -o docs

# List available generators
generate-docs --list-generators
```
```typescript
// Status filters
filterStatus: ['completed'];         // Historical only
filterStatus: ['active', 'planned']; // Future work
filterStatus: [];                    // All (default)

// Phase filters
filterPhases: [14, 15, 16]; // Specific phases
filterPhases: [];           // All (default)

// Category filters
filterCategories: ['core', 'ddd']; // Specific categories
filterCategories: [];              // All (default)

// Quarter filters
filterQuarters: ['Q1-2026']; // Specific quarter
filterQuarters: [];          // All (default)
```

```typescript
// Compact single-file output
{ generateDetailFiles: false, detailLevel: "summary" }

// Standard with progressive disclosure
{ generateDetailFiles: true, detailLevel: "standard" }

// Maximum detail
{ generateDetailFiles: true, detailLevel: "detailed" }
```


| Component | File | Purpose |
| --- | --- | --- |
| MasterDataset Schema | `src/validation-schemas/master-dataset.ts` | Central data structure |
| `transformToMasterDataset` | `src/generators/pipeline/transform-dataset.ts` | Single-pass transformation |
| Document Codecs | `src/renderable/codecs/*.ts` | Zod 4 codec implementations |
| Reference Codec | `src/renderable/codecs/reference.ts` | Scoped reference documents |
| Composite Codec | `src/renderable/codecs/composite.ts` | Multi-codec assembly |
| Convention Extractor | `src/renderable/codecs/convention-extractor.ts` | Convention content extraction |
| Shape Matcher | `src/renderable/codecs/shape-matcher.ts` | Declaration-level filtering |
| Markdown Renderer | `src/renderable/render.ts` | Block → Markdown |
| Claude Context Renderer | `src/renderable/render.ts` | LLM-optimized rendering |
| Orchestrator | `src/generators/orchestrator.ts` | Pipeline coordination |
| TypeScript Scanner | `src/scanner/pattern-scanner.ts` | TS AST parsing |
| Gherkin Scanner | `src/scanner/gherkin-scanner.ts` | Feature file parsing |
| Pipeline Factory | `src/generators/pipeline/build-pipeline.ts` | Shared 8-step pipeline for CLI consumers |
| Business Rules Query | `src/api/rules-query.ts` | Rules domain query (from Gherkin `Rule:` blocks) |
| Business Rules Codec | `src/renderable/codecs/business-rules.ts` | Business rules from Gherkin `Rule:` blocks |
| Architecture Codec | `src/renderable/codecs/architecture.ts` | Architecture diagrams from annotations |
| Taxonomy Codec | `src/renderable/codecs/taxonomy.ts` | Taxonomy reference documentation |
| Validation Rules Codec | `src/renderable/codecs/validation-rules.ts` | Process Guard validation rules reference |
| Decision Doc Generator | `src/generators/built-in/decision-doc-generator.ts` | ADR/PDR decision documents |
| Shape Extractor | `src/extractor/shape-extractor.ts` | Shape extraction from TS |