Components:
1. Base Model: Qwen 2.5 7B Abliterated (uncensored)
2. Training: LoRA fine-tuning via PEFT/Transformers (see the sketch below)
3. Serving: Ollama for inference
4. Integration: MCP server for Nexus access
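As a rough illustration of items 1–2, the sketch below wraps a Qwen 2.5 base model with a LoRA adapter via PEFT/Transformers. The Hugging Face repo id, rank, alpha, dropout, and target modules are illustrative assumptions, not the exact values used by the Corlera training scripts.

```python
# Minimal sketch: attach a LoRA adapter to the base model with PEFT.
# Repo id and hyperparameters below are assumptions for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "huihui-ai/Qwen2.5-7B-Instruct-abliterated"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")

# Typical attention-projection targets for Qwen2-style architectures (assumed).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: only adapter weights train
```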
Training Pipeline:
1. Datasets stored in the Nexus Training environment
2. Training scripts at ~/corlera-training/scripts/
3. LoRA adapters output to ~/corlera-training/outputs/
4. Merged models deployed to Ollama (see the merge sketch after this list)
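The merge step between adapter training and Ollama deployment could look like the following sketch. The adapter and merged-model directory names under ~/corlera-training/outputs/ are placeholders, not the actual paths.

```python
# Sketch of the adapter-merge step that precedes Ollama deployment.
# Paths and directory names are assumptions standing in for the real
# outputs under ~/corlera-training/outputs/.
from pathlib import Path
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "huihui-ai/Qwen2.5-7B-Instruct-abliterated"        # assumed repo id
ADAPTER_DIR = Path.home() / "corlera-training/outputs/latest"   # assumed adapter path
MERGED_DIR = Path.home() / "corlera-training/outputs/merged"    # assumed output path

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER_DIR)

# Fold the LoRA weights into the base model so Ollama can load a single model.
merged = model.merge_and_unload()
merged.save_pretrained(MERGED_DIR)
AutoTokenizer.from_pretrained(BASE_MODEL).save_pretrained(MERGED_DIR)

# From here, a Modelfile whose FROM line points at MERGED_DIR can be imported
# with `ollama create <name> -f Modelfile`.
```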
MCP Integration:
- LARS Gateway MCP Server (Track: 1370088e)
- Allows Claude to delegate tasks to LARS (a minimal gateway sketch follows this list)
- LARS can access all Nexus environments
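A gateway tool of the kind described above can be sketched with the official Python MCP SDK: a single tool that forwards a prompt to the locally served Ollama model. The server name, tool name, model tag, and the use of Ollama's /api/generate endpoint are illustrative assumptions about how the LARS Gateway is wired, not its actual implementation.

```python
# Minimal sketch of a gateway MCP server, assuming the LARS model is served
# by Ollama on its default local port. Names and the model tag are placeholders.
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("lars-gateway")                          # assumed server name

OLLAMA_URL = "http://localhost:11434/api/generate"     # default Ollama endpoint
MODEL_TAG = "corlera"                                  # assumed Ollama model tag

@mcp.tool()
def delegate_to_lars(prompt: str) -> str:
    """Forward a task prompt to the local LARS model and return its response."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL_TAG, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    mcp.run()  # serves over stdio so Claude (or another MCP client) can connect
```

Registered with Claude as a stdio MCP server, a tool like this is what lets Claude delegate work to LARS while LARS itself handles the Nexus-side access.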