PPO.load() bakes lr_schedule=FloatSchedule(saved_lr) into the model.
train() calls _update_learning_rate(), which reads lr_schedule, not
model.learning_rate. So even with the optimizer's param_groups patched,
the first train() call reverts the optimizer to the saved LR.
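
For reference, the revert path, paraphrased from SB3's BaseAlgorithm
(not verbatim; exact code varies by version):

from stable_baselines3.common.utils import update_learning_rate

def _update_learning_rate(self, optimizers):
    # Reads self.lr_schedule -- not self.learning_rate, not the
    # optimizer's current param_groups -- and stamps the result in.
    new_lr = self.lr_schedule(self._current_progress_remaining)
    for optimizer in optimizers:
        update_learning_rate(optimizer, new_lr)  # overwrites pg['lr']
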
Complete 3-part fix in create_or_load_model():
from stable_baselines3.common.utils import get_schedule_fn

model.learning_rate = lr                   # attribute, saved with the model
model.lr_schedule = get_schedule_fn(lr)    # prevents train() reverting
for pg in model.policy.optimizer.param_groups:
    pg['lr'] = lr                          # immediate effect
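
Usage, as exercised by the new tests (a sketch; the path and the exact
create_or_load_model() signature here are illustrative):

model = create_or_load_model('runs/seed1/model.zip', env=env, lr=0.001)
assert model.lr_schedule(1.0) == 0.001  # schedule honors the override
assert model.policy.optimizer.param_groups[0]['lr'] == 0.001
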
Also:
- SEED_PARAMS: second seed now uses LR=0.001 (was 0.000225), so the GP
  starts with real LR diversity instead of two identical seeds
- tests/test_end_to_end.py: 13 new tests covering the full LR override
  path, including a live learn() call; they would have caught both bugs
  (one is sketched after this list)
- Phase 3 results re-cleared (seed trial 1 ran with only the
  param_groups half of the fix)
- 96 tests total, all passing
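
One of the new end-to-end tests, sketched (env, hyperparameters, and
names are illustrative, not the exact code in tests/test_end_to_end.py):

from stable_baselines3 import PPO
from stable_baselines3.common.utils import get_schedule_fn

def test_lr_override_survives_learn(tmp_path):
    model = PPO('MlpPolicy', 'CartPole-v1',
                learning_rate=0.000225, n_steps=64)
    path = str(tmp_path / 'model.zip')
    model.save(path)

    new_lr = 0.001
    model = PPO.load(path, env=model.get_env())
    model.learning_rate = new_lr
    model.lr_schedule = get_schedule_fn(new_lr)
    for pg in model.policy.optimizer.param_groups:
        pg['lr'] = new_lr

    model.learn(total_timesteps=128)  # train() calls _update_learning_rate()
    for pg in model.policy.optimizer.param_groups:
        # Without the lr_schedule override, this reverts to 0.000225.
        assert pg['lr'] == new_lr
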
Agent: pi
Tests: 96 passed
Tests-Added: 13
TypeScript: N/A