Training Approach:
- LoRA (Low-Rank Adaptation) for efficient fine-tuning
- PEFT library (not Unsloth, which had compatibility issues)
- JSON datasets: [{"prompt": "...", "response": "..."}]
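The efficiency claim can be made concrete: LoRA freezes the base weights and trains only a low-rank update BA per target matrix. A minimal parameter-count sketch, assuming a square projection at Qwen2.5-7B's hidden size of 3584 and a rank of 16 (both are illustrative assumptions, not values read from train.py):

```python
# LoRA replaces a full weight update dW (d_out x d_in) with a low-rank
# product B @ A, so only r * (d_in + d_out) parameters are trained.
d_in, d_out, r = 3584, 3584, 16  # assumed hidden size and rank

full = d_in * d_out          # params in a full fine-tune of this matrix
lora = r * (d_in + d_out)    # params in the LoRA adapter for it
print(f"full update params:  {full:,}")   # → 12,845,056
print(f"LoRA params (r={r}): {lora:,}")   # → 114,688
print(f"reduction: {full / lora:.0f}x")   # → 112x
```

For a square matrix the reduction is simply d / (2r), which is why small ranks stay cheap even on 7B-scale models.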
Key Datasets:
- lars_identity.json - core identity and personality
- nexus_knowledge.json - Nexus system understanding
- tool_use.json - MCP tool-use patterns
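Each of these files follows the prompt/response array format above, so a small validator catches malformed records before a training run wastes GPU time. A sketch (load_pairs and the sample record are illustrative, not part of the actual training scripts):

```python
import json
from pathlib import Path

def load_pairs(path):
    """Load a JSON dataset and check every record has prompt/response keys."""
    records = json.loads(Path(path).read_text())
    if not isinstance(records, list):
        raise ValueError("dataset must be a JSON array of objects")
    for i, rec in enumerate(records):
        missing = {"prompt", "response"} - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing keys: {missing}")
    return records

# Hypothetical record in the expected shape:
Path("sample.json").write_text(json.dumps(
    [{"prompt": "Who are you?", "response": "I am LARS."}]))
print(len(load_pairs("sample.json")))  # → 1
```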
Training Command:

```shell
python ~/corlera-training/scripts/train.py \
  --model /data/models/huggingface/qwen2.5-7b-abliterated \
  --dataset ~/corlera-training/datasets/lars_identity.json \
  --output ~/corlera-training/outputs/lars-lora
```
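Since each dataset file trains against the same base model, the command extends naturally to a batch run that produces one adapter per dataset. A sketch (the per-dataset output directory names are assumptions, not the project's convention); the `echo` makes it a dry run, drop it to actually train:

```shell
# Dry-run: print one training command per dataset, each writing its own adapter.
for ds in lars_identity nexus_knowledge tool_use; do
  echo python ~/corlera-training/scripts/train.py \
    --model /data/models/huggingface/qwen2.5-7b-abliterated \
    --dataset ~/corlera-training/datasets/${ds}.json \
    --output ~/corlera-training/outputs/lars-${ds}-lora
done
```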
Track Projects:
- LARS Training System (0dd041be)
- Corlera Training System (dc4eeb11)
- Training MCP Server (a6a12130)