Production AI chat workspace with multi-model routing, realtime sync, tool calling, and a local-first UX.
CappyChat is my take on a serious AI chat product, not just a thin wrapper over one model. It supports 30+ models, realtime sync, a local-first architecture, voice input, file uploads, image generation, and collaborative workflows. The product is optimized for responsiveness, so users can switch models, recover prior context, and keep working without the interface feeling slow or fragile.
The later versions pushed the system much further with plan mode, AI artifacts, model-driven tool calling, web search, logging, and production hardening. That meant solving not only for prompt quality but also for rate limits, synchronization, bundle size, state management, and the UX details required to make a real AI application feel fast in everyday use.
Building a local-first chat experience that stays responsive while still syncing reliably across devices through Appwrite Realtime.
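One way to keep a local-first UI responsive under realtime sync is to render messages optimistically and reconcile them when the server echo arrives (e.g. via an Appwrite Realtime event). The sketch below is illustrative, not CappyChat's actual code: the `ChatMessage` shape, the `clientId` field, and the `applyRemote` helper are all hypothetical names.

```typescript
// Hypothetical message shape; the real schema is not shown in this write-up.
interface ChatMessage {
  clientId: string;   // generated locally so the optimistic copy can be matched later
  content: string;
  confirmed: boolean; // flips to true once the server-confirmed copy arrives
}

// Merge a server-confirmed message into the local optimistic list:
// replace the optimistic copy if one exists, otherwise append it
// (e.g. a message created on another device).
function applyRemote(local: ChatMessage[], remote: ChatMessage): ChatMessage[] {
  const idx = local.findIndex((m) => m.clientId === remote.clientId);
  if (idx === -1) return [...local, remote];
  const next = local.slice();
  next[idx] = remote;
  return next;
}
```

Because reconciliation is a pure function over local state, the UI never blocks on the network: the optimistic message renders immediately and is silently upgraded when the realtime event lands.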
Supporting many model providers and capabilities without turning the interface into configuration overload.
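A common way to support many models without exposing raw configuration is a capability registry: each model declares what it can do, and the picker filters to what the current request needs. This is a minimal sketch under assumed capability flags; the model IDs and the `eligible` helper are illustrative, not the app's real model list.

```typescript
// Hypothetical capability flags; a real registry would carry more
// (context length, pricing, streaming support, provider, etc.).
interface ModelSpec {
  id: string;
  vision: boolean;
  tools: boolean;
}

const MODELS: ModelSpec[] = [
  { id: "gpt-4o",      vision: true,  tools: true  },
  { id: "llama-3-70b", vision: false, tools: true  },
  { id: "sdxl",        vision: false, tools: false },
];

// Show only models that can handle the current request, so the UI
// stays a short relevant list instead of a flat 30-entry dropdown.
function eligible(
  models: ModelSpec[],
  needs: Partial<Pick<ModelSpec, "vision" | "tools">>
): ModelSpec[] {
  return models.filter(
    (m) => (!needs.vision || m.vision) && (!needs.tools || m.tools)
  );
}
```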
Adding tool calling, artifacts, and web search in a way that keeps responses useful instead of noisy.
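The tool-calling side can be kept predictable by dispatching model-emitted calls through a named registry and returning errors as data the model can recover from, instead of throwing and losing the turn. A sketch under assumptions: the `web_search` tool and `runToolCall` dispatcher are hypothetical stand-ins, not the production implementation.

```typescript
// Hypothetical tool registry; real tools would declare JSON Schema
// parameters so the model knows how to shape its arguments.
type Tool = (args: Record<string, unknown>) => string;

const tools: Record<string, Tool> = {
  web_search: (args) => `results for ${String(args.query)}`,
};

// Execute one model-emitted tool call. Unknown names return an error
// string that is fed back to the model, so a hallucinated tool name
// degrades into a recoverable response rather than a crashed turn.
function runToolCall(name: string, args: Record<string, unknown>): string {
  const tool = tools[name];
  return tool ? tool(args) : `error: unknown tool "${name}"`;
}
```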
Hardening the product for production with logging, rate limiting, and performance work across a rapidly evolving feature set.
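For the rate-limiting piece, a token bucket is the standard shape: requests spend tokens that refill at a steady rate, absorbing bursts while capping sustained throughput. This is a deterministic sketch (timestamps passed in explicitly, capacity and refill rate illustrative), not the service's actual limiter.

```typescript
// Minimal token bucket. Timestamps are arguments rather than Date.now()
// calls so the refill math is deterministic and easy to test.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private refillPerMs: number,
    now: number
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  // Returns true and spends one token if the request is allowed.
  tryTake(now: number): boolean {
    const elapsed = now - this.last;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerMs);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A burst drains the bucket immediately, and allowance comes back gradually with elapsed time, which is why this scheme feels fair in an interactive app: a user who pauses briefly gets capacity back without a hard cooldown.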
Current Status
Completed & Live
Last Update
2025