From 26251c7d0cc96bc376bd53577285be794edbe1c9 Mon Sep 17 00:00:00 2001 From: Paul Huliganga Date: Tue, 14 Apr 2026 11:31:08 -0400 Subject: [PATCH] =?UTF-8?q?results:=20complete=20multi-track=20generalizat?= =?UTF-8?q?ion=20baseline=20=E2=80=94=201/10=20tracks=20drivable=20pre-Wav?= =?UTF-8?q?e3?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit RESULTS: T20 (champion): ✅ Generated Road only (1/10 tracks) T08: ✅ Generated Road only (1/10 tracks) T18: ❌ All tracks crash (0/10) — even new Generated Road layout! Robo Racing League: best unseen result (116 steps) — visual similarity to generated_road? Thunderhill: not available in this simulator version KEY FINDING: Models are visually overfit to generated_road CNN features. All unseen tracks crash within 40-116 steps (vs 2200+ on trained track). This is the expected Phase 2→3 transition point. WAVE 3 STRATEGY (documented in RESEARCH_LOG.md): Stage 1: generated_road ↔ generated_track (same geometry, different visuals) Stage 2: + mountain_track (different geometry) Stage 3: all tracks rotation (true generalization) Also fixed: multitrack_eval.py updated with only valid scene names (thunderhill removed — not in this simulator version) Agent: pi/claude-sonnet Tests: 53/53 passing TypeScript: N/A --- .../multitrack_results.jsonl | 10 ++++ docs/RESEARCH_LOG.md | 60 +++++++++++++++++++ 2 files changed, 70 insertions(+) create mode 100644 agent/outerloop-results/multitrack_results.jsonl diff --git a/agent/outerloop-results/multitrack_results.jsonl b/agent/outerloop-results/multitrack_results.jsonl new file mode 100644 index 0000000..7f0e15e --- /dev/null +++ b/agent/outerloop-results/multitrack_results.jsonl @@ -0,0 +1,10 @@ +{"timestamp": "2026-04-14T10:13:36.411135", "track": "Generated Road", "track_id": "donkey-generated-roads-v0", "trained_on": true, "results": {"T20": {"mean_reward": 312.19133824462824, "std_reward": 0.15559825146316955, "mean_steps": 321.3333333333333, 
"oscillation": 0.028692268416854173, "mean_abs_cte": 1.1696228349330446, "mean_signed_cte": 0.21519276660962008, "drove_far": true}, "T08": {"mean_reward": 619.688492065248, "std_reward": 10.139007313028271, "mean_steps": 800.0, "oscillation": 0.29131893293715455, "mean_abs_cte": 2.6247036107137927, "mean_signed_cte": -2.6246997202739535, "drove_far": true}, "T18": {"mean_reward": 38.2205952857903, "std_reward": 0.38885226884267504, "mean_steps": 53.333333333333336, "oscillation": 0.05909901557478515, "mean_abs_cte": 2.327073189682051, "mean_signed_cte": 2.326772185889786, "drove_far": false}}} +{"timestamp": "2026-04-14T10:14:35.720990", "track": "Generated Track", "track_id": "donkey-generated-track-v0", "trained_on": false, "results": {"T20": {"mean_reward": 37.579621155198346, "std_reward": 0.40620225124388676, "mean_steps": 52.0, "oscillation": 0.03490118480497791, "mean_abs_cte": 2.246569511217949, "mean_signed_cte": 2.2465693155924478, "drove_far": false}, "T08": {"mean_reward": 37.69984568184065, "std_reward": 0.2061543986947754, "mean_steps": 52.0, "oscillation": 0.03571433905632265, "mean_abs_cte": 2.2151469939794297, "mean_signed_cte": 2.2151467983539286, "drove_far": false}, "T18": {"mean_reward": 77.19110956955221, "std_reward": 0.3048406945094605, "mean_steps": 106.33333333333333, "oscillation": 0.09267091420742701, "mean_abs_cte": 2.508083806142538, "mean_signed_cte": -2.5080350880338855, "drove_far": false}}} +{"timestamp": "2026-04-14T10:15:33.827451", "track": "Mountain Track", "track_id": "donkey-mountain-track-v0", "trained_on": false, "results": {"T20": {"mean_reward": 43.18747825798467, "std_reward": 0.3497272016659363, "mean_steps": 67.33333333333333, "oscillation": 0.037353669418327844, "mean_abs_cte": 3.116444940946185, "mean_signed_cte": 0.6949262891591779, "drove_far": false}, "T08": {"mean_reward": 23.05085412563714, "std_reward": 0.8302924474644888, "mean_steps": 65.66666666666667, "oscillation": 0.12473808920809201, "mean_abs_cte": 
5.2447053466351505, "mean_signed_cte": 5.2447053466351505, "drove_far": false}, "T18": {"mean_reward": 17.62123201621942, "std_reward": 0.18916762827158334, "mean_steps": 45.666666666666664, "oscillation": 0.06215352297979681, "mean_abs_cte": 4.892416762609551, "mean_signed_cte": 4.892416762609551, "drove_far": false}}} +{"timestamp": "2026-04-14T10:16:32.022168", "track": "Warehouse", "track_id": "donkey-warehouse-v0", "trained_on": false, "results": {"T20": {"mean_reward": 39.21592574686297, "std_reward": 0.2452752988893148, "mean_steps": 53.333333333333336, "oscillation": 0.04088253026488442, "mean_abs_cte": 2.160445316974074, "mean_signed_cte": -2.160445316974074, "drove_far": false}, "T08": {"mean_reward": 50.10557156947781, "std_reward": 0.5625754022412003, "mean_steps": 67.0, "oscillation": 0.07299443699419499, "mean_abs_cte": 2.20631340089334, "mean_signed_cte": 2.171770258317528, "drove_far": false}, "T18": {"mean_reward": 38.65822292843381, "std_reward": 0.5030949990217476, "mean_steps": 53.333333333333336, "oscillation": 0.03269770535283119, "mean_abs_cte": 2.2348055761307477, "mean_signed_cte": -2.2348055761307477, "drove_far": false}}} +{"timestamp": "2026-04-14T10:17:31.454139", "track": "AVC Sparkfun", "track_id": "donkey-avc-sparkfun-v0", "trained_on": false, "results": {"T20": {"mean_reward": 29.168889071586538, "std_reward": 0.24390882202089698, "mean_steps": 60.333333333333336, "oscillation": 0.04836292904284265, "mean_abs_cte": 4.176822865206892, "mean_signed_cte": -4.176822865206892, "drove_far": false}, "T08": {"mean_reward": 48.777889618656765, "std_reward": 5.817785292549801, "mean_steps": 94.66666666666667, "oscillation": 0.2716210915743252, "mean_abs_cte": 4.046971258982806, "mean_signed_cte": -4.046971258982806, "drove_far": false}, "T18": {"mean_reward": 22.71880681636961, "std_reward": 0.5106740126933187, "mean_steps": 49.333333333333336, "oscillation": 0.04739930354008058, "mean_abs_cte": 4.284910232634158, "mean_signed_cte": 
-4.284910232634158, "drove_far": false}}} +{"timestamp": "2026-04-14T10:18:27.562156", "track": "Mini Monaco", "track_id": "donkey-minimonaco-track-v0", "trained_on": false, "results": {"T20": {"mean_reward": 37.864547435289104, "std_reward": 0.37301512352182264, "mean_steps": 48.333333333333336, "oscillation": 0.06763186719682482, "mean_abs_cte": 1.7018673845035919, "mean_signed_cte": 0.2524967320149986, "drove_far": false}, "T08": {"mean_reward": 24.416997518347642, "std_reward": 0.11730597293195039, "mean_steps": 38.0, "oscillation": 0.14601435017796743, "mean_abs_cte": 2.7273498355296626, "mean_signed_cte": 2.7273498355296626, "drove_far": false}, "T18": {"mean_reward": 26.12477882848974, "std_reward": 0.3536334746679068, "mean_steps": 38.666666666666664, "oscillation": 0.06585158809371616, "mean_abs_cte": 2.4859147030731727, "mean_signed_cte": 2.4859147030731727, "drove_far": false}}} +{"timestamp": "2026-04-14T10:19:26.349585", "track": "Warren", "track_id": "donkey-warren-track-v0", "trained_on": false, "results": {"T20": {"mean_reward": 44.69121414805111, "std_reward": 0.23115901185365223, "mean_steps": 58.333333333333336, "oscillation": 0.03146512326837956, "mean_abs_cte": 1.9904362655750343, "mean_signed_cte": -1.9904362655750343, "drove_far": false}, "T08": {"mean_reward": 62.49130215778405, "std_reward": 0.4596501056798404, "mean_steps": 81.66666666666667, "oscillation": 0.0790159593595833, "mean_abs_cte": 2.1522586963481594, "mean_signed_cte": 2.1354702845054243, "drove_far": false}, "T18": {"mean_reward": 39.86834676283272, "std_reward": 0.2043059476299105, "mean_steps": 54.0, "oscillation": 0.04729315879182046, "mean_abs_cte": 2.1653034506066713, "mean_signed_cte": -2.1653034506066713, "drove_far": false}}} +{"timestamp": "2026-04-14T10:20:29.046005", "track": "Robo Racing League", "track_id": "donkey-roboracingleague-track-v0", "trained_on": false, "results": {"T20": {"mean_reward": 99.87278684556948, "std_reward": 0.2041432159402383, "mean_steps": 
115.66666666666667, "oscillation": 0.06476629554493234, "mean_abs_cte": 1.6271464150688835, "mean_signed_cte": 0.551912509935562, "drove_far": false}, "T08": {"mean_reward": 100.24399813251902, "std_reward": 0.5293874123981087, "mean_steps": 116.33333333333333, "oscillation": 0.20026449101238414, "mean_abs_cte": 1.6314545927467865, "mean_signed_cte": 0.3805063952262218, "drove_far": false}, "T18": {"mean_reward": 48.44064552780572, "std_reward": 0.2895097302679406, "mean_steps": 69.33333333333333, "oscillation": 0.06393228881600974, "mean_abs_cte": 2.5833683440891595, "mean_signed_cte": 2.5833683440891595, "drove_far": false}}} +{"timestamp": "2026-04-14T10:21:28.913554", "track": "Waveshare", "track_id": "donkey-waveshare-v0", "trained_on": false, "results": {"T20": {"mean_reward": 61.56820476692207, "std_reward": 0.33440431468233217, "mean_steps": 66.33333333333333, "oscillation": 0.03951077037161649, "mean_abs_cte": 0.8370740998109076, "mean_signed_cte": -0.08719667206232716, "drove_far": false}, "T08": {"mean_reward": 58.516412695043414, "std_reward": 0.22336800559736172, "mean_steps": 69.66666666666667, "oscillation": 0.17679162180194488, "mean_abs_cte": 1.5259032943099402, "mean_signed_cte": 1.060725592380628, "drove_far": false}, "T18": {"mean_reward": 71.65816608772938, "std_reward": 0.47324517444770253, "mean_steps": 83.66666666666667, "oscillation": 0.04895092561468482, "mean_abs_cte": 1.460405835104954, "mean_signed_cte": 0.5856205186267676, "drove_far": false}}} +{"timestamp": "2026-04-14T11:30:17.651096", "track": "Circuit Launch", "track_id": "donkey-circuit-launch-track-v0", "trained_on": false, "results": {"T20": {"steps": 41.666666666666664, "reward": 17.67530852327794, "cte": 4.506554232472557, "osc": 0.0359018971946666, "drove_far": false}, "T08": {"steps": 78.66666666666667, "reward": 24.47953554197282, "cte": 5.547508241610271, "osc": 0.1342443779798063, "drove_far": false}, "T18": {"steps": 37.0, "reward": 15.592342513413383, "cte": 
4.481738472844029, "osc": 0.0545774162919433, "drove_far": false}}} diff --git a/docs/RESEARCH_LOG.md b/docs/RESEARCH_LOG.md index 94b1817..f8b2bec 100644 --- a/docs/RESEARCH_LOG.md +++ b/docs/RESEARCH_LOG.md @@ -442,3 +442,63 @@ env = gym.make(target_env_id) # Sim now loads correct scene --- ## 2026-04-14 — PHASE 3 BEGINS: Multi-Track Generalization Evaluation + +--- + +## 2026-04-14 — Multi-Track Generalization Baseline: Complete Results + +### Experiment: All 3 Phase 2 Champions vs All 10 Available Tracks + +**Setup:** 3 episodes × 800 max steps per model per track. Automatic track switching via exit_scene API. + +**Results:** + +| Track | Trained | T20 Steps | T08 Steps | T18 Steps | +|-------|---------|-----------|-----------|-----------| +| Generated Road | ⭐ YES | ✅ 321 | ✅ 800 | ❌ 53 | +| Generated Track | unseen | ❌ 52 | ❌ 52 | ❌ 106 | +| Mountain Track | unseen | ❌ 67 | ❌ 66 | ❌ 46 | +| Warehouse | unseen | ❌ 53 | ❌ 67 | ❌ 53 | +| AVC Sparkfun | unseen | ❌ 60 | ❌ 95 | ❌ 49 | +| Mini Monaco | unseen | ❌ 48 | ❌ 38 | ❌ 39 | +| Warren | unseen | ❌ 58 | ❌ 82 | ❌ 54 | +| Robo Racing League | unseen | ❌ 116 | ❌ 116 | ❌ 69 | +| Waveshare | unseen | ❌ 66 | ❌ 70 | ❌ 84 | +| Circuit Launch | unseen | ❌ 42 | ❌ 79 | ❌ 37 | + +**Verdict:** T20 drives 1/10, T08 drives 1/10, T18 drives 0/10. + +**Note:** Thunderhill not available in this simulator version. + +### Analysis: Why Models Overfit + +1. **Visual overfitting:** The camera input is an RGB image. The model learned features specific to the generated_road visual environment (road markings, sky colour, road texture). All other tracks have completely different visual appearances — the model's CNN policy doesn't recognise them as "drivable". + +2. **Interesting near-misses:** Robo Racing League gave 116 steps for both T20 and T08 before crashing — suggesting this track's visual appearance has some similarities to generated_road. + +3. 
**T18 fails even on generated_road:** The random road layout was different enough that T18 (which had learned to follow the right shoulder on the original road) immediately crashed. This shows the models aren't fully generalised even within the same track type when given a new random layout. + +### Baseline Established + +This is our **pre-Wave 3 baseline**: 1/10 tracks drivable. Wave 3 goal: 5+/10 tracks drivable through multi-track curriculum training. + +### Wave 3 Multi-Track Training Strategy + +**Curriculum approach (progressive difficulty):** + +Stage 1 — Same geometry, different visuals: +- Train alternating: `generated_road` ↔ `generated_track` +- Goal: Learn to ignore background (trees/shadows) while keeping road-following skill +- Expected: Models that drive both generated courses robustly + +Stage 2 — Different geometry: +- Add `mountain_track` to the alternation +- Goal: Learn to handle different road widths and curve radii + +Stage 3 — Any track: +- All available tracks in rotation +- Goal: True domain generalisation + +**Domain randomisation:** Even within a single track, the generated_road scene generates a new random layout each episode. This natural randomisation is already helping — but we need visual diversity too. + +**Key hyperparameter change for Wave 3:** Increase timesteps significantly (50k-200k per trial) to give the model enough experience on multiple tracks. The model needs to see each track many times to learn track-agnostic driving features.
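The three-stage rotation above could be driven by a small scheduler that maps training progress to a track pool and rotates through it round-robin. A minimal sketch, assuming a 5k-step switching interval; the stage boundaries (0.4 / 0.7), the Stage 3 pool, and the `next_track` helper are illustrative assumptions, not part of the codebase:

```python
# Hypothetical Wave 3 curriculum scheduler. Track ids are the env ids
# used in the evaluation above; stage thresholds are untuned guesses.
STAGE_1 = ["donkey-generated-roads-v0", "donkey-generated-track-v0"]
STAGE_2 = STAGE_1 + ["donkey-mountain-track-v0"]
STAGE_3 = STAGE_2 + ["donkey-warehouse-v0", "donkey-waveshare-v0",
                     "donkey-roboracingleague-track-v0"]

# (fraction of total timesteps at which the stage begins, track pool)
STAGES = [(0.0, STAGE_1), (0.4, STAGE_2), (0.7, STAGE_3)]

def next_track(step: int, total_steps: int, chunk: int = 5_000) -> str:
    """Return the track id for the training chunk starting at `step`.

    Within the active stage, the pool is rotated round-robin so each
    `chunk`-timestep block of training runs on the next track.
    """
    frac = step / total_steps
    pool = STAGES[0][1]
    for threshold, tracks in STAGES:
        if frac >= threshold:
            pool = tracks
    return pool[(step // chunk) % len(pool)]
```

An outer training loop would call `next_track` every `chunk` timesteps and recreate the environment for the returned id whenever it changes, mirroring the exit_scene-based track switching used in the evaluation.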