Training is NOT one-time. It's continuous.
Schedule Options:
- Daily: High-velocity data environments
- Weekly: Standard business use
- Monthly: Stable knowledge bases
- On-demand: After major document uploads
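For illustration, a minimal sketch of how these cadences might map to cron expressions. The preset names, expressions, and `schedule_for` helper are assumptions, not a documented Nexus interface:

```python
# Hypothetical cadence presets; illustrative only.
TRAINING_SCHEDULES = {
    "daily":     "0 2 * * *",    # high-velocity data environments
    "weekly":    "0 2 * * SUN",  # standard business use
    "monthly":   "0 2 1 * *",    # stable knowledge bases
    "on_demand": None,           # triggered manually after major uploads
}

def schedule_for(preset: str) -> str | None:
    """Return the cron expression for a retraining cadence, or None for on-demand."""
    return TRAINING_SCHEDULES[preset]
```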
How Continuous Training Works:
- ACCUMULATE
  - Nexus collects data throughout the period
  - KB nodes, documents, context, transcripts
  - Each has a timestamp for delta exports
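A sketch of what a timestamped record could look like. The `Record` type and its fields are hypothetical, shown only to make the delta mechanism concrete:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Record:
    """One accumulated item: KB node, document, context entry, or transcript."""
    kind: str             # e.g. "kb_node", "document", "context", "transcript"
    content: str
    updated_at: datetime  # stamped at write time; drives delta exports

def new_record(kind: str, content: str) -> Record:
    # Every record carries its own timestamp so the export step can
    # filter on "modified since the last training run".
    return Record(kind, content, datetime.now(timezone.utc))
```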
- EXPORT DELTA
  - training.export(since=last_training_date)
  - Only new/modified content
  - Append to master training set
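Building on the hypothetical `Record` above, a sketch of the delta selection and append steps. `export_delta` and `append_to_master` are illustrative names, not the actual `training.export` implementation:

```python
import json
from datetime import datetime

def export_delta(records: list[Record], since: datetime) -> list[Record]:
    """Keep only records created or modified after the last training run."""
    return [r for r in records if r.updated_at > since]

def append_to_master(master_path: str, delta: list[Record]) -> None:
    """Append the delta to the master training set, one JSON line per record."""
    with open(master_path, "a", encoding="utf-8") as f:
        for r in delta:
            f.write(json.dumps({"kind": r.kind, "text": r.content,
                                "updated_at": r.updated_at.isoformat()}) + "\n")
```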
- FINE-TUNE
  - LoRA adapter training on new data
  - Merge with previous adapter OR train fresh
  - Configurable per client preference
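One way the merge-vs-fresh choice could look using the Hugging Face peft library. The base model name and the `merge_previous` flag are assumptions:

```python
from peft import LoraConfig, PeftModel, get_peft_model
from transformers import AutoModelForCausalLM

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # assumed; swap in the real base model

def prepare_adapter(merge_previous: bool, prev_adapter_dir: str | None = None):
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
    if merge_previous and prev_adapter_dir:
        # "Merge with previous": fold last cycle's adapter into the base
        # weights, then train a fresh LoRA on top of the merged model.
        base = PeftModel.from_pretrained(base, prev_adapter_dir).merge_and_unload()
    # Otherwise "train fresh": a new adapter directly on the original base.
    config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                        target_modules=["q_proj", "v_proj"],
                        task_type="CAUSAL_LM")
    return get_peft_model(base, config)  # hand this to a standard training loop
```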
- VALIDATE
  - Test model on held-out questions
  - Compare to previous version
  - Roll back if quality drops
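A minimal validation gate, assuming quality is measured as accuracy on held-out questions. The scoring scheme and tolerance are illustrative:

```python
def score(answer_fn, held_out: list[tuple[str, str]]) -> float:
    """Fraction of held-out questions where the model matches the reference."""
    hits = sum(answer_fn(q).strip() == a.strip() for q, a in held_out)
    return hits / len(held_out)

def should_deploy(new_score: float, prev_score: float,
                  tolerance: float = 0.01) -> bool:
    # Gate the release: roll back to the previous adapter if the
    # new one regresses beyond the tolerance.
    return new_score >= prev_score - tolerance
```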
- DEPLOY
  - Hot-swap adapter in running model
  - Zero-downtime update
  - Version history maintained
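A hot-swap sketch using peft's multi-adapter support (`load_adapter` / `set_adapter`). The paths, version names, and serving setup around it are assumptions:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Assumed paths and version names; the serving wrapper is hypothetical.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "adapters/v1", adapter_name="v1")

def hot_swap(adapter_dir: str, version: str) -> None:
    # Load the new adapter alongside the old one, then flip the active
    # adapter in place: no restart, and old versions stay loaded for rollback.
    model.load_adapter(adapter_dir, adapter_name=version)
    model.set_adapter(version)

hot_swap("adapters/v2", "v2")   # rollback is model.set_adapter("v1")
```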
Result: The LLM gets smarter every cycle. More data in Nexus = a better-trained model = a smarter AI.