donkeycar-rl-autoresearch/agent/outerloop-results
Paul Huliganga c8c17e2e46 wave3: autoresearch trial 25 results
Agent: pi
Tests: N/A
Tests-Added: 0
TypeScript: N/A
2026-04-16 20:01:51 -04:00
model-000 Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
model-001 Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
model-002 Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
model-003 Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
autoresearch_log.txt AUTORESEARCH: 300 total trials complete - best mean_reward=141.85 at n_steer=8, n_throttle=5, lr=0.00202 2026-04-13 01:56:06 -04:00
autoresearch_phase1_log.txt milestone: Phase 1 complete — genuine driving confirmed; launch Phase 2 corner learning 2026-04-13 19:33:06 -04:00
autoresearch_phase1_log_CORRUPTED_circular_driving.txt fix: path-efficiency reward (v3) defeats circular driving exploit 2026-04-13 13:36:17 -04:00
autoresearch_phase1_log_CORRUPTED_reward_hacking.txt fix: hack-proof reward shaping + reward hacking detection + research log 2026-04-13 12:27:48 -04:00
autoresearch_phase2_log.txt feat: shuttle-exploit detection in mini_monaco eval 2026-04-16 17:29:30 -04:00
autoresearch_phase3_log.txt feat: shuttle-exploit detection in mini_monaco eval 2026-04-16 17:29:30 -04:00
autoresearch_phase4_log.txt wave3: autoresearch trial 25 results 2026-04-16 20:01:51 -04:00
autoresearch_results.jsonl AUTORESEARCH: 300 total trials complete - best mean_reward=141.85 at n_steer=8, n_throttle=5, lr=0.00202 2026-04-13 01:56:06 -04:00
autoresearch_results_phase1.jsonl autoresearch: phase1 trial 50 results 2026-04-13 19:17:56 -04:00
autoresearch_results_phase1_CORRUPTED_circular_driving.jsonl fix: path-efficiency reward (v3) defeats circular driving exploit 2026-04-13 13:36:17 -04:00
autoresearch_results_phase1_CORRUPTED_reward_hacking.jsonl fix: hack-proof reward shaping + reward hacking detection + research log 2026-04-13 12:27:48 -04:00
autoresearch_results_phase2.jsonl autoresearch: phase1 trial 20 results 2026-04-14 04:35:45 -04:00
autoresearch_results_phase3.jsonl Wave 4: scratch training on generated_track + mountain_track, zero-shot mini_monaco 2026-04-14 22:40:38 -04:00
autoresearch_results_phase3_CONTAMINATED_v2.jsonl fix: complete LR override — must patch lr_schedule, not just param_groups 2026-04-14 21:27:43 -04:00
autoresearch_results_phase3_CONTAMINATED_wrong_lr.jsonl fix: LR override was not reaching the optimizer — all trials ran at 0.000225 2026-04-14 20:37:48 -04:00
autoresearch_results_phase4.jsonl wave3: autoresearch trial 25 results 2026-04-16 20:01:51 -04:00
clean_sweep_results.jsonl AUTORESEARCH: Full Karpathy-style GP+UCB meta-controller, clean base data, fixed all paths, ready to run 2026-04-13 00:52:00 -04:00
eval_summary.jsonl fix: track switching via unwrapped viewer.exit_scene() — automatic scene changes work 2026-04-14 10:04:15 -04:00
multitrack_results.jsonl results: complete multi-track generalization baseline — 1/10 tracks drivable pre-Wave3 2026-04-14 11:31:08 -04:00
nohup_outerloop.log Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
outer_monitor.log Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
sweep_results.jsonl Initial commit: stable RL sweep runner, legacy and new scripts, full docs included 2026-04-12 22:57:50 -04:00
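The two `CONTAMINATED` phase-3 files above record a learning-rate override that silently failed: per their fix commits, patching only the optimizer's `param_groups` is not enough, because the trainer re-applies `lr_schedule` on every update and clobbers the edit back to the default (0.000225). The sketch below illustrates that failure mode with a toy trainer modeled on Stable-Baselines3's behavior; `ToyTrainer` and `override_lr` are illustrative names, not this repository's actual API.

```python
# Sketch of the LR-override pitfall from the phase-3 fix commits. SB3-style
# trainers call _update_learning_rate() each rollout, which writes
# lr_schedule(progress) into every optimizer param group — so a direct
# param_groups edit is overwritten unless the schedule itself is patched.

class ToyTrainer:
    def __init__(self, lr=0.000225):
        self.lr_schedule = lambda progress_remaining: lr
        self.param_groups = [{"lr": lr}]

    def _update_learning_rate(self, progress_remaining=1.0):
        # Mimics SB3: the schedule's output overwrites each group every update.
        for group in self.param_groups:
            group["lr"] = self.lr_schedule(progress_remaining)

def override_lr(trainer, new_lr):
    # The fix: patch the schedule so future updates keep the new value...
    trainer.lr_schedule = lambda progress_remaining: new_lr
    # ...and also set the current groups for the in-flight update.
    for group in trainer.param_groups:
        group["lr"] = new_lr

broken = ToyTrainer()
broken.param_groups[0]["lr"] = 0.002   # naive override: param_groups only
broken._update_learning_rate()          # clobbered back to 0.000225

fixed = ToyTrainer()
override_lr(fixed, 0.002)
fixed._update_learning_rate()           # override survives the update
```

This matches the second fix commit's wording ("must patch lr_schedule, not just param_groups"): the first attempt produced an entire sweep run at the default rate, which is why those result files were quarantined rather than merged into `autoresearch_results_phase3.jsonl`.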