donkeycar-rl-autoresearch/agent/outerloop-results
Paul Huliganga 7b8830f0cb milestone: Phase 1 complete — genuine driving confirmed; launch Phase 2 corner learning
PHASE 1 MILESTONE:
- Champion model drives the track for 599 steps (mean_reward=1022.78, std=0.45)
- Path efficiency 96-100% throughout — genuine forward motion confirmed
- Navigates first right-hand curve successfully
- Fails at S-curve (right->left) at step ~560: speed too high for tight corners
- Root cause: only 4787 training timesteps — the model never sees the S-curve often enough to learn it
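
The path-efficiency numbers above can be sketched with the usual definition (net displacement divided by total distance traveled); the repo's actual metric may be computed differently, so treat this helper as a hypothetical illustration:

```python
import numpy as np

def path_efficiency(positions):
    """Net displacement / total distance traveled over a trajectory of
    (x, y) positions. Values near 1.0 indicate genuine forward motion;
    circular driving collapses toward 0.0. Hypothetical helper -- the
    repo's own metric may differ in detail."""
    positions = np.asarray(positions, dtype=float)
    # Per-step segment lengths along the driven path
    segment_lengths = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    traveled = segment_lengths.sum()
    if traveled == 0:
        return 0.0
    # Straight-line distance between start and end points
    net = np.linalg.norm(positions[-1] - positions[0])
    return net / traveled
```

A straight run scores 1.0; a lap that returns to its start scores 0.0, which is how the earlier circular-driving exploit would be caught.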

PHASE 2 CONFIG (corner learning):
- timesteps: 10,000-50,000 (10x more; the model must experience the S-curve many times)
- learning_rate: 0.00005-0.002 (tightened around Phase 1 winning region)
- eval_episodes: 5 (more reliable corner stats)
- JOB_TIMEOUT: 3600s (50k steps on CPU needs time)
- Results: autoresearch_results_phase2.jsonl (clean separation from Phase 1)
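
The Phase 2 settings above could be expressed as a search-space config like the following; the key names and structure are assumptions, not the sweep runner's actual schema:

```python
# Hypothetical sketch of the Phase 2 search space described above.
# Key names are illustrative; the repo's sweep runner may use a
# different config format.
PHASE2_CONFIG = {
    "timesteps": (10_000, 50_000),   # ~10x Phase 1's 4787 steps
    "learning_rate": (5e-5, 2e-3),   # tightened around Phase 1 winners
    "eval_episodes": 5,              # more reliable corner statistics
    "job_timeout_s": 3600,           # 50k steps on CPU needs time
    "results_path": "autoresearch_results_phase2.jsonl",
}
```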

Research documentation:
- Phase 1 milestone added to docs/RESEARCH_LOG.md
- Full trajectory analysis: start -> first corner -> S-curve crash position logged
- Reward shaping v3 path efficiency victory documented
- evaluate_champion.py added for visual + diagnostic evaluation

Agent: pi/claude-sonnet
Tests: 40/40 passing
Tests-Added: 0
TypeScript: N/A
2026-04-13 19:33:06 -04:00
model-000 Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
model-001 Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
model-002 Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
model-003 Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
autoresearch_log.txt AUTORESEARCH: 300 total trials complete - best mean_reward=141.85 at n_steer=8, n_throttle=5, lr=0.00202 2026-04-13 01:56:06 -04:00
autoresearch_phase1_log.txt milestone: Phase 1 complete — genuine driving confirmed; launch Phase 2 corner learning 2026-04-13 19:33:06 -04:00
autoresearch_phase1_log_CORRUPTED_circular_driving.txt fix: path-efficiency reward (v3) defeats circular driving exploit 2026-04-13 13:36:17 -04:00
autoresearch_phase1_log_CORRUPTED_reward_hacking.txt fix: hack-proof reward shaping + reward hacking detection + research log 2026-04-13 12:27:48 -04:00
autoresearch_results.jsonl AUTORESEARCH: 300 total trials complete - best mean_reward=141.85 at n_steer=8, n_throttle=5, lr=0.00202 2026-04-13 01:56:06 -04:00
autoresearch_results_phase1.jsonl autoresearch: phase1 trial 50 results 2026-04-13 19:17:56 -04:00
autoresearch_results_phase1_CORRUPTED_circular_driving.jsonl fix: path-efficiency reward (v3) defeats circular driving exploit 2026-04-13 13:36:17 -04:00
autoresearch_results_phase1_CORRUPTED_reward_hacking.jsonl fix: hack-proof reward shaping + reward hacking detection + research log 2026-04-13 12:27:48 -04:00
clean_sweep_results.jsonl AUTORESEARCH: Full Karpathy-style GP+UCB meta-controller, clean base data, fixed all paths, ready to run 2026-04-13 00:52:00 -04:00
nohup_outerloop.log Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
outer_monitor.log Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
sweep_results.jsonl Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00