Documents all 25 trial results, the known models, what is confirmed vs
unknown, and the 6 pending verification tests agreed with the user.
Agent: pi
Tests: 102 passed
Tests-Added: 0
TypeScript: N/A
RESULTS:
T20 (champion): ✅ Generated Road only (1/10 tracks)
T08: ✅ Generated Road only (1/10 tracks)
T18: ❌ All tracks crash (0/10) — even the new Generated Road layout!
Robo Racing League: best unseen result (116 steps) — visual similarity to generated_road?
Thunderhill: not available in this simulator version
KEY FINDING: the models' CNNs are overfit to generated_road's visual features.
All unseen tracks crash within 40-116 steps (vs 2200+ steps on the trained track).
This is the expected Phase 2→3 transition point.
WAVE 3 STRATEGY (documented in RESEARCH_LOG.md):
Stage 1: generated_road ↔ generated_track (same geometry, different visuals)
Stage 2: + mountain_track (different geometry)
Stage 3: all tracks rotation (true generalization)
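The three-stage rotation above could be driven by a small per-episode track picker. A minimal sketch; the stage contents come from the plan, but the function names and rotation scheme are illustrative assumptions:

```python
import random

def stage_tracks(stage, all_tracks):
    """Track pool for each Wave 3 curriculum stage (hypothetical helper)."""
    if stage == 1:
        return ["generated_road", "generated_track"]   # same geometry, new visuals
    if stage == 2:
        return ["generated_road", "generated_track", "mountain_track"]
    return list(all_tracks)                            # stage 3: full rotation

def pick_track(stage, all_tracks, rng=random):
    """Choose one track for the next training episode."""
    return rng.choice(stage_tracks(stage, all_tracks))
```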
Also fixed: multitrack_eval.py now lists only valid scene names
(thunderhill removed; not in this simulator version)
Agent: pi/claude-sonnet
Tests: 53/53 passing
TypeScript: N/A
PHASE 2 MILESTONE DOCUMENTED:
All 3 top models complete the full track with distinct driving styles:
- Trial 20 (n_steer=3): Right lane, stable steering — CHAMPION ✅
- Trial 8 (n_steer=4): Left/center lane, oscillating (still completes!)
- Trial 18 (n_steer=3): Right shoulder, very accurate line following
Key finding: fewer steering bins (n_steer=3) = better driving (counterintuitive)
CTE symmetry explains left/right preference: random NN init determines which side
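One possible mechanism for the n_steer=3 advantage, assuming the discrete steering values are evenly spaced over the steering range (an assumption; the real action mapping may differ):

```python
import numpy as np

def steering_values(n_steer, max_steer=1.0):
    # Assumed scheme: evenly spaced bins over [-max_steer, +max_steer]
    return np.linspace(-max_steer, max_steer, n_steer)

# n_steer=3 includes an exact 0.0 "straight" bin; n_steer=4 does not,
# so holding a straight line means alternating between roughly +/-0.33,
# which would be consistent with Trial 8's oscillation.
```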
BEHAVIORAL REWARD WRAPPERS (agent/behavioral_wrappers.py):
- LanePositionWrapper: target a specific CTE offset (control left/right preference)
- AntiOscillationWrapper: penalise rapid steering changes (fix Model 2 oscillation)
- AsymmetricCTEWrapper: enforce right-lane rule (penalise left-of-centre more)
- CombinedBehavioralWrapper: all three combined in one wrapper
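To illustrate the wrapper pattern, a stand-alone sketch of the anti-oscillation idea. The real implementations in agent/behavioral_wrappers.py wrap the gym env; the parameter name and penalty scale here are assumptions:

```python
class AntiOscillationWrapper:
    """Reward-shaping sketch: penalise rapid steering changes between steps.

    Hypothetical stand-in for agent/behavioral_wrappers.py; assumes
    steering is action[0] and a (obs, reward, done, info) step tuple.
    """
    def __init__(self, env, penalty_scale=0.5):
        self.env = env
        self.penalty_scale = penalty_scale
        self._last_steer = 0.0

    def step(self, action):
        steer = float(action[0])
        obs, reward, done, info = self.env.step(action)
        # Subtract a penalty proportional to the steering delta
        reward -= self.penalty_scale * abs(steer - self._last_steer)
        self._last_steer = steer
        return obs, reward, done, info
```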
ENHANCED EVALUATOR (agent/evaluate_champion.py):
- Full metrics: reward, lap time, oscillation score, CTE distribution, lane position
- --compare flag: runs all top Phase 2 models side by side with comparison table
- Saves eval summary to outerloop-results/eval_summary.jsonl
- Detects lap completion events from sim info dict
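Two of the metrics above are easy to sketch from per-step traces; these definitions are illustrative, and the evaluator's exact formulas may differ:

```python
import statistics

def oscillation_score(steering_trace):
    """Mean absolute step-to-step steering change (assumed definition)."""
    if len(steering_trace) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(steering_trace, steering_trace[1:])]
    return sum(diffs) / len(diffs)

def cte_summary(cte_trace):
    """Simple CTE distribution summary: mean and population stdev."""
    return {"mean": statistics.mean(cte_trace),
            "stdev": statistics.pstdev(cte_trace)}
```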
IMPLEMENTATION PLAN updated: Wave 3 streams defined
RESEARCH LOG updated: Phase 2 milestone, behavioral analysis, next steps
Champion updated to Trial 20 (Phase 2)
Agent: pi/claude-sonnet
Tests: 53/53 passing (+13 behavioral wrapper tests)
Tests-Added: +13
TypeScript: N/A
ROOT CAUSE:
donkey_sim.py calc_reward() uses forward_vel = dot(heading, velocity).
A spinning car ALWAYS has forward_vel > 0 (always moving 'forward' relative
to its own heading), so it earned positive reward indefinitely while circling.
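A minimal 2-D sketch of why dot(heading, velocity) cannot catch circling: for a car driving a circle, heading and velocity rotate together, so the dot product stays at +speed on every step. The geometry below is illustrative, not the simulator's actual code:

```python
import math

def forward_vel(heading_deg, velocity):
    # Dot product of the unit heading vector with the velocity vector
    hx = math.cos(math.radians(heading_deg))
    hy = math.sin(math.radians(heading_deg))
    return hx * velocity[0] + hy * velocity[1]

# Circling at constant speed: velocity always points along the heading,
# so forward_vel stays positive at every point of the circle.
speed = 2.0
for theta in (0.0, 90.0, 180.0, 270.0):
    v = (speed * math.cos(math.radians(theta)),
         speed * math.sin(math.radians(theta)))
    assert forward_vel(theta, v) > 0.0
```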
WHY v3 WAS INSUFFICIENT:
v3 applied efficiency only to the speed BONUS: original × (1 + speed×eff×scale)
But 'original' from sim was still exploitable: CTE≈0 while spinning → original=1.0/step
Efficiency killed the speed bonus but not the base reward.
47k-step run: spinning = 1.0/step × 47k = 47k reward (never crashes in circle)
v4 FIX — base × efficiency × speed:
reward = (1 - abs(cte)/max_cte) × efficiency × (1 + speed_scale × speed)
Completely ignores sim's bogus forward_vel reward.
Spinning (eff≈0): reward ≈ 0 regardless of CTE or speed.
ALL three terms must be high to earn reward — cannot be gamed.
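The v4 formula translates directly into code; the max_cte and speed_scale values below, and the clamp of the base term at 0, are illustrative assumptions:

```python
def reward_v4(cte, efficiency, speed, max_cte=8.0, speed_scale=0.5):
    # base x efficiency x speed: all three terms must be high to earn reward
    base = max(0.0, 1.0 - abs(cte) / max_cte)
    return base * efficiency * (1.0 + speed_scale * speed)

# Spinning at the track centre: perfect CTE, but efficiency ~ 0 -> reward ~ 0
assert reward_v4(cte=0.0, efficiency=0.0, speed=2.0) == 0.0
# Driving forward: all three terms high -> solidly positive reward
assert reward_v4(cte=0.5, efficiency=1.0, speed=2.0) > 1.0
```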
Key new test: test_circling_at_zero_cte_gives_near_zero_reward
Worst-case exploit (CTE=0 spinning) → avg reward < 0.15 (was 1.0 in v3)
forward_beats_circling_by_3x confirmed.
Also: updated the Phase 2 autoresearch timesteps test and the research log.
Agent: pi/claude-sonnet
Tests: 40/40 passing
Tests-Added: +1 (core v4 circling guarantee)
TypeScript: N/A