Research-grade DonkeyCar RL autoresearch and sweep system.
Paul Huliganga fcb6ea1ac2 fix: path-efficiency reward (v3) defeats circular driving exploit
CRITICAL BUG FIX — Circular Driving:
- v2 reward still hackable: car circles at starting line with low CTE + positive speed
- Confirmed in data: trial 5 mean_reward=4582, cv=0.0% (physically impossible for genuine driving)
- Statistical signature: cv <1% with high reward = consistent exploit, not genuine driving

ROOT CAUSE: Neither CTE nor raw speed can distinguish forward vs circular motion.
Both have: low CTE (on centerline) + positive speed (moving) = same reward.
Missing dimension: TRACK PROGRESS (net advance along track)

FIX — Path Efficiency Reward (v3):
  efficiency = net_displacement / total_path_length  (sliding window of 30 steps)
  shaped = original × (1 + speed_scale × speed × efficiency)
  - Forward driving: efficiency ≈ 1.0 → full speed bonus
  - Circular driving: efficiency ≈ 0.0 → speed bonus disappears
  - Cannot be hacked: circling means returning to same positions (low net_displacement)
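A minimal sketch of the v3 shaping described above, assuming per-step (x, y) positions are available from the simulator. The class name `PathEfficiencyReward` and the `speed_scale` default are illustrative, not the repo's actual API:

```python
import math
from collections import deque

class PathEfficiencyReward:
    """Sketch: scale the speed bonus by net-displacement / path-length."""

    def __init__(self, window=30, speed_scale=0.1):
        # window+1 positions define `window` movement segments
        self.positions = deque(maxlen=window + 1)
        self.speed_scale = speed_scale

    def efficiency(self):
        # net displacement over total path length within the window
        if len(self.positions) < 2:
            return 1.0  # not enough history yet: assume efficient
        pts = list(self.positions)
        net = math.dist(pts[0], pts[-1])
        path = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
        return net / path if path > 1e-9 else 0.0

    def shape(self, original_reward, speed, position):
        self.positions.append(position)
        eff = self.efficiency()
        # straight driving: eff ≈ 1.0 → full bonus
        # circling: eff ≈ 0.0 → bonus vanishes, reward falls back to original
        return original_reward * (1.0 + self.speed_scale * speed * eff)
```

Because a circling car keeps returning to earlier positions, its net displacement over the window collapses toward zero, so no choice of speed can recover the bonus.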

Tests:
  - test_efficiency_near_zero_for_circular_driving: confirmed <0.2 efficiency for circles
  - test_efficiency_near_one_for_straight_driving: confirmed >0.90 for straight line
  - test_straight_driving_gets_higher_reward_than_circular: KEY guarantee
  - test_speed_bonus_disappears_when_circling: bonus suppressed after window fills
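The key guarantee can be sketched as a self-contained pytest-style test. The 30-step window mirrors the commit description; the helper `path_efficiency` and the trajectories are illustrative, not the repo's actual test code:

```python
import math

def path_efficiency(points):
    # net displacement divided by total distance travelled
    total = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    return math.dist(points[0], points[-1]) / total if total else 1.0

def test_straight_driving_gets_higher_reward_than_circular():
    # steady forward motion over a 30-step window
    straight = [(i * 0.5, 0.0) for i in range(31)]
    # one full lap of a unit circle: ends where it started
    circle = [(math.cos(2 * math.pi * i / 30),
               math.sin(2 * math.pi * i / 30)) for i in range(31)]
    assert path_efficiency(straight) > 0.9
    assert path_efficiency(circle) < 0.2
    assert path_efficiency(straight) > path_efficiency(circle)
```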

Research documentation:
  - Full analysis with data table added to docs/RESEARCH_LOG.md
  - cv% identified as reward hacking indicator
  - Archived circular data + models
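The cv% signature from the analysis (near-zero variance with high mean reward, as in trial 5) can be computed as a quick screen. The thresholds below are assumptions for illustration, not values from the research log:

```python
import statistics

def cv_percent(rewards):
    """Coefficient of variation of per-episode rewards, as a percentage."""
    mean = statistics.fmean(rewards)
    return 100.0 * statistics.pstdev(rewards) / mean if mean else float("inf")

def looks_like_exploit(rewards, cv_threshold=1.0, reward_threshold=1000.0):
    # near-identical high rewards every episode suggest a repeated exploit
    # loop rather than genuine driving (genuine runs vary episode to episode)
    return (statistics.fmean(rewards) > reward_threshold
            and cv_percent(rewards) < cv_threshold)
```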

Clean start: new autoresearch_results_phase1.jsonl, new champion dir

Agent: pi/claude-sonnet
Tests: 40/40 passing
Tests-Added: +6 (path efficiency, anti-circular)
TypeScript: N/A
Date: 2026-04-13 13:36:17 -04:00
Files (last commit per path):
  agent, docs, tests
      fix: path-efficiency reward (v3) defeats circular driving exploit (2026-04-13 13:36:17 -04:00)
  .harness, .gitignore, AGENT.md, DECISIONS.md, IMPLEMENTATION_PLAN.md, PROJECT-KICKOFF.md, PROJECT-SPEC.md, README.md, ralph-loop.sh
      feat: Wave 1 complete — real PPO training, model save, GP+UCB autoresearch, 37 tests passing (2026-04-13 10:03:15 -04:00)
  create_gitea_repo.py
      Initial commit (2026-04-12 23:44:36 -04:00)

README.md

donkeycar-rl-autoresearch

Purpose

Status

  • Scaffolded with the agent harness
  • Spec not filled yet

Runbook

  • Fill PROJECT-SPEC.md
  • Create IMPLEMENTATION_PLAN.md from the spec
  • Start the implementation loop