# Multiple Wake Word Support

## Capability
openWakeWord can run 15-20 models simultaneously on a single Raspberry Pi 3 core.
## Use Cases
| Wake Word | Routes To | Purpose |
|---|---|---|
| "Hey LARS" | LARS/Ollama | Local AI, unrestricted tasks |
| "Hey Case" | Mobile AI | Lightweight, on-the-go assistant |
| "Hey Lena" | Nexus/Claude | Full Nexus integration |
## Implementation

```python
from openwakeword.model import Model

# Load all three wake word models into a single detector;
# openWakeWord scores every loaded model on each audio frame.
model = Model(
    wakeword_models=[
        "models/hey_lars.tflite",
        "models/hey_case.tflite",
        "models/hey_lena.tflite",
    ]
)

def on_wake_word(wake_word: str):
    if wake_word == "hey_lars":
        route_to_lars()       # LARS/Ollama
    elif wake_word == "hey_case":
        route_to_mobile_ai()  # Mobile AI
    elif wake_word == "hey_lena":
        route_to_nexus()      # Nexus/Claude
```
## Training Each Wake Word

- Train each phrase with the openWakeWord Google Colab training notebook
- Budget roughly one hour per wake word
- Export a separate `.tflite` file per phrase
- Load all models at startup
## Resource Usage

- Each additional model adds only a small CPU cost, since the heavy audio preprocessing is shared
- All models run in parallel, scoring every incoming audio frame
- Detection latency is low, on the order of a single audio frame
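The parallel-listening cost can be put in rough numbers. Assuming openWakeWord's defaults of 16 kHz mono audio in 1280-sample (80 ms) frames (an assumption to check against your configuration), every loaded model runs one inference per frame:

```python
SAMPLE_RATE = 16_000   # openWakeWord expects 16 kHz mono input
FRAME_SAMPLES = 1_280  # assumed default frame: 1280 samples = 80 ms

def inferences_per_second(num_models: int) -> float:
    """Total model inferences per second when every loaded
    model scores each incoming audio frame."""
    frames_per_second = SAMPLE_RATE / FRAME_SAMPLES  # 12.5 frames/s
    return frames_per_second * num_models

print(inferences_per_second(3))  # three wake words -> 37.5
```

At 12.5 frames per second, even the 15-20 models quoted above stay in the low hundreds of small inferences per second, which is why a single Pi 3 core suffices.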