donkeycar-rl-autoresearch/.harness/templates/EXECUTION-BOARD-TEMPLATE.md


Execution Board Template

The execution board is the contract for a stream. Copy this file into .harness/<stream-name>/execution-board.md. The entire board must be written BEFORE any code is committed. Plan-then-implement is non-negotiable.


Execution Board — [STREAM NAME]

Feature: [One-line description of what this stream builds]
Created: YYYY-MM-DD
Branch: feat/<stream-name>
IMPLEMENTATION_PLAN tasks: [e.g., 58]
Status: 🔴 Not started | 🟡 Planning | 🟠 In progress | 🟢 Complete
Design reference: [path to design doc, or N/A]


🎯 Goal

[2-4 sentences. What does this stream accomplish? What user-facing outcome does it produce? How does it fit the larger product vision?]


⚠️ Dependencies

[Other streams or tasks that must be complete before this one can start. If none: "None — can start immediately."]


📦 Packets

Packet [XX-01] — [Name]

IMPLEMENTATION_PLAN task: [N]
Status: ⬜ Not started | 🔄 In progress | ✅ Done
Est. effort: [N sessions]
Depends on: [XX-00 or "none"]

Goal: [One sentence]

Steps:

  1. [Concrete step]
  2. [...]

Files created/modified:

  • src/... — [description]
  • src/.../__tests__/... — [test file]

Known-answer tests (mandatory for calculation modules):

test('[what is being verified]', () => {
  // Source: [official reference — URL, standard, specification]
  expect(fn(input)).toBeCloseTo(expected, precision);
});

Acceptance criteria:

  • [Programmatically verifiable criterion]
  • Known-answer test passes
  • Full test suite green (count ≥ baseline)
  • TypeScript: clean (npx tsc --noEmit outputs nothing)
  • Documentation updated (if user-facing): user guide, feature overview, or design doc reflects this change

Validation evidence: .harness/<stream>/validation/<xx-01>-validation.md


Packet [XX-02] — [Name]

[Repeat above structure for each packet]


🔢 Dependency Order

[XX-01] → [XX-02] → [XX-04 (UI/integration)]
[XX-01] → [XX-03] → [XX-04 (UI/integration)]

[Which packets can run in parallel? Which must be sequential?]


🏁 Stream Completion Criteria

  • All packets complete with validation evidence written
  • All known-answer tests pass (list them here explicitly)
  • Full test suite green
  • TypeScript: clean
  • Regression baseline saved: .harness/regression-baselines/<stream>-baseline.json
  • Branch merged to main via --no-ff merge commit
  • Process eval written: .harness/<stream>/process-eval.md
  • IMPLEMENTATION_PLAN tasks marked [x]
  • EXECUTION_MASTER.md (or project equivalent) updated
  • Documentation updated: any user-facing feature changes are reflected in the relevant user guide, features overview, database schema doc, or design doc (whichever applies)

📋 Mandatory Commit Trailer Format

Every implementation commit in this stream:

feat(<stream-name>): <XX-NN> — <description>

Agent: <model-name>
Tests: <before> → <after>
Tests-Added: <N>
TypeScript: clean
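A hypothetical filled-in trailer (stream name, packet ID, model name, and test counts are placeholder values for illustration):

```
feat(lane-keeping): LK-03 — add steering controller known-answer tests

Agent: example-model
Tests: 112 → 118
Tests-Added: 6
TypeScript: clean
```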

🔍 Pre-Coding Checklist

Before writing any implementation code:

  • This execution board is fully written (all packets defined)
  • Branch created from latest main
  • Baseline test count verified
  • No open schema migrations from other active streams (if relevant)
  • Design reference doc has been read