# LiveKit Self-Hosted Setup

## Overview

LiveKit is an open-source WebRTC server for real-time audio/video streaming.
## Why LiveKit

- Fully open source (Apache 2.0)
- Self-hostable on your own infrastructure
- Python Agents SDK with Ollama integration
- Handles WebRTC complexity (signaling, NAT traversal, media transport)
- Supports STT/LLM/TTS pipeline orchestration
## Installation

### Server

```shell
# Download the binary and run in dev mode
livekit-server --dev

# Or via Docker (dev mode also needs the TCP and UDP media ports published)
docker run --rm -p 7880:7880 -p 7881:7881 -p 7882:7882/udp livekit/livekit-server --dev
```
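Once the server is up, a quick stdlib-only way to confirm the signaling port is reachable (the host and port below assume the local dev setup above):

```python
import socket

def check_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Verify LiveKit's WebSocket signaling port (assumes the dev server is running locally)
# check_port("localhost", 7880)
```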
### Python SDK

```shell
uv add "livekit-agents[openai]~=1.3"
# or (quotes keep shells from expanding the extras brackets)
pip install "livekit-agents[openai]~=1.3"
```
## Ports Required

- 7880/TCP: WebSocket signaling
- 7881/TCP: WebRTC over TCP (fallback when UDP is blocked)
- 50000-60000/UDP: WebRTC media
- 443/TCP: HTTPS/TURN (production)
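In production (i.e., without `--dev`), these ports are set in the server's YAML config file. A minimal sketch, assuming LiveKit's standard config key names; the API key/secret pair is a placeholder:

```yaml
port: 7880                # WebSocket signaling
rtc:
  tcp_port: 7881          # WebRTC over TCP
  port_range_start: 50000 # WebRTC media (UDP)
  port_range_end: 60000
  use_external_ip: true
keys:
  my-api-key: my-api-secret  # placeholder credentials
```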
## Ollama Integration

```python
from livekit.agents import AgentSession
from livekit.plugins import openai

session = AgentSession(
    llm=openai.LLM.with_ollama(
        model="lars-trained",
        base_url="http://localhost:11434/v1",
    ),
)
```
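Under the hood, `with_ollama` points an OpenAI-compatible client at Ollama's `/v1` endpoint. For reference, the equivalent raw request can be built with only the stdlib (model name and URL taken from the snippet above; actually sending it requires a running Ollama instance, so that line is left commented out):

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request for Ollama's /v1 API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("http://localhost:11434/v1", "lars-trained", "Hello")
# urllib.request.urlopen(req)  # requires a running Ollama instance
```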
## Our Customization

LiveKit's pipeline is modular, so we can mix and match components:

- STT: Whisper (keep)
- LLM: LARS via Ollama (keep)
- TTS: bypass LiveKit's built-in options and route audio through our InWorld AI pipeline instead
## Links
- GitHub: https://github.com/livekit/livekit
- Agents: https://github.com/livekit/agents
- Docs: https://docs.livekit.io/
- Ollama Guide: https://docs.livekit.io/agents/integrations/llm/ollama/