donkeycar-rl-autoresearch/agent/outerloop-results
Paul Huliganga c8a495dd22 fix: reward v4 — full sim bypass kills circular driving at root
ROOT CAUSE:
  donkey_sim.py calc_reward() uses forward_vel = dot(heading, velocity).
  A spinning car ALWAYS has forward_vel > 0 (always moving 'forward' relative
  to its own heading), so it earned positive reward indefinitely while circling.
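A minimal sketch of the exploit (illustrative, not the actual donkey_sim.py code): for a car driving a perfect circle, heading and velocity are both tangent to the circle, so their dot product never drops below the car's speed.

```python
import math

# Sketch: car on a unit circle at constant speed v (all names hypothetical).
# Heading is the unit tangent; velocity is the same tangent scaled by v, so
# dot(heading, velocity) == v at every point on the circle.
def forward_vel(theta, v=1.0):
    heading = (-math.sin(theta), math.cos(theta))           # unit tangent
    velocity = (-v * math.sin(theta), v * math.cos(theta))  # tangent * speed
    return heading[0] * velocity[0] + heading[1] * velocity[1]

samples = [forward_vel(t * 2 * math.pi / 100) for t in range(100)]
print(min(samples) > 0)  # True: the sim always sees 'forward' progress
```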

v3 WAS INSUFFICIENT:
  v3 applied efficiency only to the speed BONUS: original × (1 + speed×eff×scale)
  But 'original' from sim was still exploitable: CTE≈0 while spinning → original=1.0/step
  Efficiency killed the speed bonus but not the base reward.
  47k-step run: spinning = 1.0/step × 47k = 47k reward (never crashes in circle)
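To make the failure mode concrete, a sketch of the v3 shaping as described above (reconstructed from this message; names are illustrative):

```python
# v3 (reconstructed sketch): efficiency gates only the speed BONUS, so the
# sim's base reward ('original') survives untouched when efficiency collapses.
def reward_v3(original, speed, efficiency, scale=1.0):
    return original * (1 + speed * efficiency * scale)

# Spin exploit: CTE ~ 0 keeps original at 1.0/step; efficiency ~ 0 only
# removes the bonus term, leaving 1.0 reward per step indefinitely.
print(reward_v3(original=1.0, speed=2.0, efficiency=0.0))  # 1.0
```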

v4 FIX — base × efficiency × speed:
  reward = (1 - abs(cte)/max_cte) × efficiency × (1 + speed_scale × speed)
  Completely ignores sim's bogus forward_vel reward.
  Spinning (eff≈0): reward ≈ 0 regardless of CTE or speed.
  ALL three terms must be high to earn reward — cannot be gamed.
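The v4 formula above can be sketched directly (parameter names are illustrative, not the repo's actual function signature):

```python
# v4 (sketch): reward is a PRODUCT of on-track, efficiency, and speed terms,
# so zeroing any one factor zeroes the whole reward.
def reward_v4(cte, max_cte, efficiency, speed, speed_scale=0.5):
    base = 1.0 - abs(cte) / max_cte  # on-track term, 1.0 at the center line
    return base * efficiency * (1 + speed_scale * speed)

print(reward_v4(cte=0.0, max_cte=5.0, efficiency=0.0, speed=2.0))  # 0.0: spin
print(reward_v4(cte=0.0, max_cte=5.0, efficiency=1.0, speed=2.0))  # 2.0: drive
```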

Key new test: test_circling_at_zero_cte_gives_near_zero_reward
  Worst-case exploit (spinning at CTE=0) → avg reward < 0.15 (was 1.0 under v3)
  forward_beats_circling_by_3x also confirmed: forward driving out-earns circling by ≥3×.
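The worst-case guarantee can be exercised with a self-contained sketch (re-implementing the v4 product form inline; this is not the repo's actual test file):

```python
def reward_v4(cte, max_cte, efficiency, speed, speed_scale=0.5):
    # same product form as described above (names illustrative)
    return (1.0 - abs(cte) / max_cte) * efficiency * (1 + speed_scale * speed)

def test_circling_at_zero_cte_gives_near_zero_reward():
    # a tight circle keeps CTE ~ 0 but drives path efficiency toward zero
    rewards = [reward_v4(cte=0.0, max_cte=5.0, efficiency=0.02, speed=2.0)
               for _ in range(1000)]
    assert sum(rewards) / len(rewards) < 0.15

test_circling_at_zero_cte_gives_near_zero_reward()
```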

Also: updated the Phase 2 autoresearch timesteps test and the research log.

Agent: pi/claude-sonnet
Tests: 40/40 passing
Tests-Added: +1 (core v4 circling guarantee)
TypeScript: N/A
2026-04-13 20:56:32 -04:00
model-000 Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
model-001 Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
model-002 Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
model-003 Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
autoresearch_log.txt AUTORESEARCH: 300 total trials complete - best mean_reward=141.85 at n_steer=8, n_throttle=5, lr=0.00202 2026-04-13 01:56:06 -04:00
autoresearch_phase1_log.txt milestone: Phase 1 complete — genuine driving confirmed; launch Phase 2 corner learning 2026-04-13 19:33:06 -04:00
autoresearch_phase1_log_CORRUPTED_circular_driving.txt fix: path-efficiency reward (v3) defeats circular driving exploit 2026-04-13 13:36:17 -04:00
autoresearch_phase1_log_CORRUPTED_reward_hacking.txt fix: hack-proof reward shaping + reward hacking detection + research log 2026-04-13 12:27:48 -04:00
autoresearch_phase2_log.txt fix: reward v4 — full sim bypass kills circular driving at root 2026-04-13 20:56:32 -04:00
autoresearch_results.jsonl AUTORESEARCH: 300 total trials complete - best mean_reward=141.85 at n_steer=8, n_throttle=5, lr=0.00202 2026-04-13 01:56:06 -04:00
autoresearch_results_phase1.jsonl autoresearch: phase1 trial 50 results 2026-04-13 19:17:56 -04:00
autoresearch_results_phase1_CORRUPTED_circular_driving.jsonl fix: path-efficiency reward (v3) defeats circular driving exploit 2026-04-13 13:36:17 -04:00
autoresearch_results_phase1_CORRUPTED_reward_hacking.jsonl fix: hack-proof reward shaping + reward hacking detection + research log 2026-04-13 12:27:48 -04:00
autoresearch_results_phase2.jsonl fix: reward v4 — full sim bypass kills circular driving at root 2026-04-13 20:56:32 -04:00
clean_sweep_results.jsonl AUTORESEARCH: Full Karpathy-style GP+UCB meta-controller, clean base data, fixed all paths, ready to run 2026-04-13 00:52:00 -04:00
nohup_outerloop.log Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
outer_monitor.log Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
sweep_results.jsonl Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00