ROOT CAUSE: `donkey_sim.py` `calc_reward()` uses `forward_vel = dot(heading, velocity)`. A spinning car ALWAYS has `forward_vel > 0` (it is always moving "forward" relative to its own heading), so it earned positive reward indefinitely while circling.

v3 WAS INSUFFICIENT: v3 applied efficiency only to the speed BONUS:

    original × (1 + speed × eff × scale)

But `original` from the sim was still exploitable: CTE ≈ 0 while spinning → original = 1.0/step. Efficiency killed the speed bonus but not the base reward. 47k-step run: spinning = 1.0/step × 47k = 47k reward (a circling car never crashes).

v4 FIX — base × efficiency × speed:

    reward = (1 - abs(cte) / max_cte) × efficiency × (1 + speed_scale × speed)

This completely ignores the sim's bogus forward_vel reward. Spinning (eff ≈ 0): reward ≈ 0 regardless of CTE or speed. ALL three terms must be high to earn reward — it cannot be gamed.

Key new test: `test_circling_at_zero_cte_gives_near_zero_reward` — the worst-case exploit (CTE = 0 while spinning) → avg reward < 0.15 (was 1.0 in v3). `forward_beats_circling_by_3x` confirmed.

Also: updated the Phase 2 autoresearch timesteps test; research log updated.

Agent: pi/claude-sonnet
Tests: 40/40 passing
Tests-Added: +1 (core v4 circling guarantee)
TypeScript: N/A
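The root cause can be demonstrated in a few lines. This is a hypothetical illustration, not the simulator's actual code: a car driving a tight circle has its velocity vector (nearly) parallel to its heading at every instant, so `dot(heading, velocity)` stays positive even though the car makes no net progress along the track.

```python
import math

def forward_vel(theta, speed=1.0):
    """Forward velocity of a car at heading angle theta (radians).

    For a car spinning in place / circling, velocity is tangential and
    aligned with the heading, so the dot product never goes negative.
    """
    heading = (math.cos(theta), math.sin(theta))
    velocity = (speed * math.cos(theta), speed * math.sin(theta))
    return heading[0] * velocity[0] + heading[1] * velocity[1]

# Sample one full revolution: forward_vel is positive at every angle,
# so a reward based on it pays out continuously while circling.
samples = [forward_vel(2 * math.pi * k / 16) for k in range(16)]
assert all(v > 0 for v in samples)
```

Any reward built on this quantity alone therefore rewards rotation as if it were progress, which is exactly the exploit the 47k-step run found.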
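The v4 formula above can be sketched as follows. Function and parameter names are assumptions for illustration, and `efficiency` is taken to be a value near 0 when spinning in place and near 1 when driving straight, as the commit message describes:

```python
def calc_reward_v4(cte, max_cte, efficiency, speed, speed_scale=0.5):
    """Sketch of the v4 reward (hypothetical names, not the repo's code).

    Three multiplicative factors, all of which must be high:
      - lane term:   (1 - |cte| / max_cte), zero at the track edge
      - efficiency:  ~0 when spinning, ~1 when making real progress
      - speed bonus: multiplicative, so it cannot rescue a zero base
    """
    lane = max(1.0 - abs(cte) / max_cte, 0.0)
    return lane * efficiency * (1.0 + speed_scale * speed)

# Worst-case exploit: spinning at the track centre (CTE = 0, eff ~ 0).
spin = calc_reward_v4(cte=0.0, max_cte=2.0, efficiency=0.02, speed=1.0)
# Driving forward near the centre line.
drive = calc_reward_v4(cte=0.2, max_cte=2.0, efficiency=0.95, speed=1.0)
assert spin < 0.15        # the circling guarantee the new test checks
assert drive > 3 * spin   # forward progress beats circling
```

Because the factors multiply rather than add, zeroing any one of them zeroes the whole reward, which is what makes the spinning exploit unprofitable.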
| Name |
|---|
| .harness |
| agent |
| docs |
| tests |
| .gitignore |
| AGENT.md |
| DECISIONS.md |
| IMPLEMENTATION_PLAN.md |
| PROJECT-KICKOFF.md |
| PROJECT-SPEC.md |
| README.md |
| create_gitea_repo.py |
| ralph-loop.sh |
README.md

# donkeycar-rl-autoresearch

## Purpose

## Status

- Scaffolded with the agent harness
- Spec not filled yet

## Runbook

- Fill PROJECT-SPEC.md
- Create IMPLEMENTATION_PLAN.md from the spec
- Start the implementation loop