PPO.load() bakes lr_schedule=FloatSchedule(saved_lr) into the model, and train() calls _update_learning_rate(), which reads lr_schedule, not model.learning_rate. So even with param_groups patched, the first gradient step reverts the optimizer to the saved LR.

Complete 3-part fix in create_or_load_model():

    model.learning_rate = lr                 # attribute
    model.lr_schedule = get_schedule_fn(lr)  # prevents train() reverting
    for pg in optimizer.param_groups:
        pg['lr'] = lr                        # immediate effect

Also:

- SEED_PARAMS: the second seed now uses LR=0.001 (was 0.000225), so the GP starts with real LR diversity instead of two identical seeds
- tests/test_end_to_end.py: 13 new tests covering the full LR override path, including a live learn() call; they would have caught both bugs
- Phase 3 results re-cleared (seed trial 1 ran with the half-fix)
- 96 tests total, all passing

Agent: pi
Tests: 96 passed
Tests-Added: 13
TypeScript: N/A
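For context, here is a minimal sketch of how the three parts might fit together in a Stable-Baselines3 helper. Only the three override lines and the name create_or_load_model() come from the commit; the signature, the fresh-model branch, and the MlpPolicy choice are illustrative assumptions.

```python
from stable_baselines3 import PPO
from stable_baselines3.common.utils import get_schedule_fn


def create_or_load_model(model_path, env, lr):
    """Sketch: load a saved PPO model and force a new learning rate onto it."""
    if model_path is None:
        # Fresh model: the constructor applies lr directly, nothing to patch.
        return PPO("MlpPolicy", env, learning_rate=lr)

    model = PPO.load(model_path, env=env)
    # All three parts are needed: load() bakes the saved LR into lr_schedule,
    # and train() -> _update_learning_rate() reads lr_schedule, so patching
    # param_groups alone gets undone on the first gradient step.
    model.learning_rate = lr                 # the attribute that is saved out
    model.lr_schedule = get_schedule_fn(lr)  # what train() actually reads
    for pg in model.policy.optimizer.param_groups:
        pg["lr"] = lr                        # immediate effect before any train()
    return model
```

A regression test in the spirit of the ones added in tests/test_end_to_end.py would save a model at one LR, reload it with another, run a live learn() call, and assert that the optimizer LR survived. The env and hyperparameters below are assumptions, not the project's actual test; it reuses the sketch above.

```python
import gymnasium as gym


def test_lr_override_survives_learn(tmp_path):
    env = gym.make("CartPole-v1")
    path = str(tmp_path / "ppo.zip")
    PPO("MlpPolicy", env, learning_rate=0.000225, n_steps=64).save(path)

    model = create_or_load_model(path, env, lr=0.001)
    model.learn(total_timesteps=64)  # one rollout + one train() -> _update_learning_rate()

    # Without the lr_schedule patch this fails: the first train() call
    # would have reverted the optimizer to the saved 0.000225.
    for pg in model.policy.optimizer.param_groups:
        assert pg["lr"] == 0.001
```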
Repository contents:

- .harness
- agent
- docs
- tests
- .gitignore
- AGENT.md
- DECISIONS.md
- IMPLEMENTATION_PLAN.md
- PROJECT-KICKOFF.md
- PROJECT-SPEC.md
- README.md
- create_gitea_repo.py
- ralph-loop.sh
# donkeycar-rl-autoresearch

## Purpose

## Status

- Scaffolded with the agent harness
- Spec not filled yet

## Runbook

- Fill PROJECT-SPEC.md
- Create IMPLEMENTATION_PLAN.md from the spec
- Start the implementation loop