Experiment 002 - LARS 2D Identity v2
Date: 2025-12-29
Status: Completed
Configuration
- Base Model: qwen2.5-7b-abliterated
- Dataset: lars_identity.json (20 Q&A pairs)
- Format: 2D (simple question/answer)
- Epochs: 10 (increased from 3)
- Learning Rate: 5e-4 (increased from 2e-4)
- Training Time: ~3 minutes (see the config sketch after this list)
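The log does not record the training framework or the LoRA hyperparameters beyond epochs and learning rate. As a minimal sketch, assuming Hugging Face transformers + peft, the settings above might map to something like the following; the LoRA rank, alpha, target modules, and batch size are illustrative guesses, not values from the run.

```python
# Rough sketch of the run configuration, assuming transformers + peft.
# Only the model name, dataset, epochs, learning rate, and output path
# come from the log; everything marked "assumed" is a guess.
import os
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model

base_model = "qwen2.5-7b-abliterated"          # local path or hub id as logged
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

lora_config = LoraConfig(
    r=16,                                      # assumed rank
    lora_alpha=32,                             # assumed alpha
    target_modules=["q_proj", "v_proj"],       # assumed target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir=os.path.expanduser("~/corlera-training/outputs/lars-lora-v2"),
    num_train_epochs=10,                       # up from 3 in EXP-001
    learning_rate=5e-4,                        # up from 2e-4 in EXP-001
    per_device_train_batch_size=4,             # assumed
)
# lars_identity.json (20 simple question/answer pairs) would be tokenized
# into prompt/response pairs and fed to a Trainer with these arguments.
```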
Results
- Loss: 3.81 → 0.05 (much lower final loss than in EXP-001)
- Identity Test: PARTIAL - the model recalls Corlera, Chris, and Nexus, but still opens responses with 'I am Qwen' (see the test sketch after this list)
- Conclusion: more epochs helped, but the base model's identity still leaks through
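The identity test itself is not scripted in the log. Below is a minimal sketch of the kind of check described above, assuming the v2 adapter loads with peft; the prompt wording and generation settings are illustrative.

```python
# Hedged sketch of the identity check: ask the tuned model who it is and
# flag base-identity leakage. Prompt and generation settings are guesses.
import os
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "qwen2.5-7b-abliterated"
adapter = os.path.expanduser("~/corlera-training/outputs/lars-lora-v2")

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base_model), adapter
)

inputs = tokenizer("Who are you?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
reply = tokenizer.decode(output[0], skip_special_tokens=True)

expected = ["Corlera", "Chris", "Nexus"]   # names the tuned identity should use
leak = "I am Qwen"                         # base-identity phrase observed in v2
print("recalled:", [name for name in expected if name in reply])
print("base identity leaked:", leak in reply)
```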
Output
- Path: ~/corlera-training/outputs/lars-lora-v2
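The path above holds the LoRA adapter, not full model weights. If the v2 adapter were ever deployed standalone, one option (not covered in the log) is folding it into the base weights with peft's merge_and_unload; the merged output directory name below is hypothetical.

```python
# Hedged sketch: merge the v2 LoRA adapter into the base model for
# standalone use. The merged save path is hypothetical.
import os
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("qwen2.5-7b-abliterated")
adapter = os.path.expanduser("~/corlera-training/outputs/lars-lora-v2")

merged = PeftModel.from_pretrained(base, adapter).merge_and_unload()
merged.save_pretrained("lars-qwen2.5-7b-v2-merged")   # hypothetical output dir
```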
Comparison to EXP-001
- 10 epochs vs. 3 epochs: much lower final loss
- Higher learning rate (5e-4 vs. 2e-4): faster convergence
- Base identity still not fully overridden