Trained Model (LARS)

LARS - Local AI Runtime System

Base Model Options

  • Qwen 2.5 Abliterated - Current LARS base (7B-14B)
  • Qwen 3 Coder - For coding-focused deployments
  • Custom bases - Client-specific requirements

Training Stack

  • Hardware: RTX 3090s / RTX 4090s / A100s
  • Framework: Transformers + PEFT (LoRA); a configuration sketch follows this list
  • Location: Local AI server (no cloud training)
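
As an illustration of this stack, a minimal PEFT/LoRA setup might look like the sketch below. The model ID, adapter rank, and target modules are placeholder assumptions, not the production LARS configuration.

  # Minimal LoRA setup with Transformers + PEFT. Model ID, rank, and target
  # modules are illustrative placeholders, not the production LARS values.
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from peft import LoraConfig, get_peft_model

  base_id = "Qwen/Qwen2.5-7B-Instruct"  # assumed stand-in for the abliterated base
  tokenizer = AutoTokenizer.from_pretrained(base_id)
  model = AutoModelForCausalLM.from_pretrained(
      base_id, torch_dtype="bfloat16", device_map="auto"
  )

  lora = LoraConfig(
      r=16,                       # adapter rank (placeholder value)
      lora_alpha=32,
      lora_dropout=0.05,
      target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
      task_type="CAUSAL_LM",
  )
  model = get_peft_model(model, lora)
  model.print_trainable_parameters()  # only the adapter weights train; the base stays frozen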

Training Types

Identity Training

  • Who is LARS?
  • Relationship to Nexus
  • Client-specific customization
  • Personality and tone
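
The training data format is not documented on this page; as an assumed illustration, one identity example could be a standard chat-format record along these lines:

  # Hypothetical identity-training record in chat format. The schema and wording
  # are assumptions for illustration, not the actual LARS dataset.
  identity_example = {
      "messages": [
          {"role": "user", "content": "Who are you?"},
          {"role": "assistant", "content": (
              "I am LARS, the Local AI Runtime System, a component of the Nexus AI Engine."
          )},
      ]
  }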

Knowledge Training

  • Client documents and data
  • Domain expertise
  • Procedures and workflows
  • Historical context

Reasoning Training (3D Dataset)

  • Thinking modes
  • Multi-step problem solving
  • Context evaluation
  • Decision frameworks
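
The 3D dataset schema is not detailed here; a reasoning record with an explicit thinking trace might plausibly look like this (assumed format and tag style):

  # Hypothetical reasoning-training record. The thinking-trace tags and fields
  # are assumptions, not the documented 3D dataset schema.
  reasoning_example = {
      "messages": [
          {"role": "user", "content": "Our nightly backup failed twice this week. Where do we start?"},
          {"role": "assistant", "content": (
              "<think>Evaluate context: same job failed twice. Check job logs first, "
              "then target storage capacity, then credentials.</think>\n"
              "Start with the job logs to get the exact error, then confirm the backup "
              "target still has free space."
          )},
      ]
  }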

Verified Training Loops

Not one-shot fine-tuning. Training runs as a continuous loop:

  1. Train on the dataset
  2. Test with the evaluation suite
  3. Score with an AI judge (Claude)
  4. Generate corrections for failures
  5. Retrain with the corrections
  6. Repeat until 98%+ accuracy
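
The control flow of that loop, sketched below with hypothetical helper functions (train_adapter, run_eval_suite, judge_with_claude, build_corrections) standing in for the real tooling:

  # Control-flow sketch of the verified training loop. The helper functions
  # passed in are hypothetical placeholders for the actual tooling.
  def verified_training_loop(dataset, train_adapter, run_eval_suite,
                             judge_with_claude, build_corrections, target=0.98):
      while True:
          adapter = train_adapter(dataset)            # 1. train on the dataset
          answers = run_eval_suite(adapter)           # 2. run the evaluation suite
          scored = judge_with_claude(answers)         # 3. AI judge scores each answer
          accuracy = sum(s["passed"] for s in scored) / len(scored)
          if accuracy >= target:                      # 6. stop at 98%+ accuracy
              return adapter
          failures = [s for s in scored if not s["passed"]]
          dataset = dataset + build_corrections(failures)  # 4./5. add corrections, retrain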

Model Storage

  • LoRA adapters for efficient switching
  • Multiple versions (identity, knowledge, reasoning)
  • Merge capabilities for final deployment
  • GGUF conversion for Ollama deployment
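
A sketch of the final deployment step: merging a trained LoRA adapter into the base weights and converting the result to GGUF for Ollama. Model IDs, adapter paths, and output names are placeholders.

  # Merge a trained LoRA adapter into the base model, then convert to GGUF.
  # Model IDs, adapter paths, and output names are placeholders.
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from peft import PeftModel

  base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct", torch_dtype="bfloat16")
  merged = PeftModel.from_pretrained(base, "adapters/lars-identity").merge_and_unload()
  merged.save_pretrained("lars-merged")
  AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct").save_pretrained("lars-merged")

  # GGUF conversion happens outside Python with llama.cpp's converter, roughly:
  #   python convert_hf_to_gguf.py lars-merged --outfile lars.gguf --outtype q8_0
  # An Ollama Modelfile containing "FROM ./lars.gguf" then registers the model via
  #   ollama create lars -f Modelfile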