This was the root cause of losing good models during training: the model could learn to lap at step 30k, then drift to a worse policy by step 90k, and we only ever saved the final weights.

Changes to train_multitrack():

- Tracks best_segment_reward across all segments
- Saves best_model.zip whenever a new high score is achieved
- At the end of training, RELOADS best_model.zip before returning, so the caller always gets the best policy found, not the drifted one

Both files are saved per trial:

- model.zip <- latest checkpoint (crash recovery)
- best_model.zip <- best policy seen during training (used for eval)

Agent: pi
Tests: 102 passed
Tests-Added: 0
TypeScript: N/A
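A minimal sketch of this best-model checkpointing pattern, assuming a Stable-Baselines3 PPO model (which matches the .zip checkpoint format) and borrowing the names from the commit message above; the real train_multitrack() in this repo will differ in its environment handling and reward bookkeeping.

```python
# Sketch only: assumes Stable-Baselines3 PPO and per-segment environments with
# matching observation/action spaces. Names mirror the commit message, not the
# actual repo code.
from pathlib import Path

from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy


def train_multitrack(segment_envs, steps_per_segment, out_dir):
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)

    model = PPO("MlpPolicy", segment_envs[0])
    best_segment_reward = float("-inf")

    for env in segment_envs:
        # Keep training the same policy on the next track segment.
        model.set_env(env)
        model.learn(total_timesteps=steps_per_segment, reset_num_timesteps=False)

        # Latest checkpoint, kept only for crash recovery.
        model.save(out_dir / "model.zip")

        # Score this segment; evaluate_policy returns (mean_reward, std_reward).
        segment_reward, _ = evaluate_policy(model, env, n_eval_episodes=3)
        if segment_reward > best_segment_reward:
            best_segment_reward = segment_reward
            # New high score: snapshot the policy so later drift cannot erase it.
            model.save(out_dir / "best_model.zip")

    # Reload the best policy seen during training so the caller never gets the drift.
    return PPO.load(out_dir / "best_model.zip", env=segment_envs[-1])
```

With this split, per-trial evaluation would load best_model.zip, while model.zip exists only to resume after a crash.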
Repository layout:

- .harness
- agent
- docs
- tests
- .gitignore
- AGENT.md
- DECISIONS.md
- IMPLEMENTATION_PLAN.md
- PROJECT-KICKOFF.md
- PROJECT-SPEC.md
- README.md
- create_gitea_repo.py
- ralph-loop.sh
README.md:

# donkeycar-rl-autoresearch

## Purpose

## Status

- Scaffolded with the agent harness
- Spec not filled yet

## Runbook

- Fill PROJECT-SPEC.md
- Create IMPLEMENTATION_PLAN.md from the spec
- Start the implementation loop