Personal Robot
A fully local, open-source robot built in progressive phases: a conversational LLM running on a laptop GPU (Phi-4-mini via Ollama), persistent RAG memory (ChromaDB), voice input and output (Whisper and Piper TTS), computer vision (YOLOv8 and Qwen2.5-VL), face recognition, and finally a self-contained physical robot deployed on a Raspberry Pi 5. No cloud APIs: all inference runs on local hardware.
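As a minimal sketch of the "fully local" idea, the snippet below talks to an Ollama server on the laptop using only the Python standard library. Ollama's `/api/chat` endpoint and default port 11434 are real; the helper names and the `phi4-mini` model tag are assumptions for illustration.

```python
import json
import urllib.request

# Assumed local endpoint; Ollama listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/chat"


def build_chat_payload(prompt: str, model: str = "phi4-mini") -> dict:
    """Build a non-streaming chat request for a local Ollama server."""
    return {
        "model": model,  # assumed Ollama tag for Phi-4-mini
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def ask_local_llm(prompt: str) -> str:
    """POST the prompt to the local server; no cloud round-trip involved."""
    data = json.dumps(build_chat_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Because everything goes through one local HTTP endpoint, the same call works unchanged whether the model runs on the laptop GPU or, later, on the Raspberry Pi 5.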