AI runs in your browser via WebAssembly or WebGPU (Chrome, Edge, Safari) — nothing leaves your machine

Chat with a blog's ideas — fully local

1. Pick a backend and model in the toolbar, then click Load.
2. Paste a blog URL and click Create Twin.
3. Chat — everything runs in your browser via WASM or WebGPU.
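The WASM-or-WebGPU choice can be sketched as a small feature check. This is a minimal illustration, not the app's actual code: the function name `pickBackend` is hypothetical, but the detection itself is standard — browsers that support WebGPU expose `navigator.gpu`, and WebAssembly is the universal fallback.

```javascript
// Hypothetical backend picker: prefer WebGPU when the browser exposes
// navigator.gpu, otherwise fall back to WebAssembly (all modern browsers).
// The navigator object is passed in so the function is testable outside a page.
function pickBackend(nav) {
  return "gpu" in nav ? "webgpu" : "wasm";
}

// In a real page you would call: pickBackend(navigator)
```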

Loading runs through these steps:
  1. Loading embeddings model
  2. Loading LLM (Qwen3-0.6B)
  3. Fetching blog
  4. Indexing articles
  5. Starting chat
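Step 4 ("Indexing articles") amounts to embedding each article chunk and, at chat time, retrieving the chunks most similar to the question. A minimal sketch of that retrieval, assuming vectors are plain number arrays (a real app would get them from the embeddings model loaded in step 1; the names `cosine` and `topK` are illustrative):

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k chunks whose embeddings best match the query embedding.
// index: array of { text, vec } built when the blog was indexed.
function topK(queryVec, index, k) {
  return index
    .map((item) => ({ text: item.text, score: cosine(queryVec, item.vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

The retrieved chunks are then passed to the LLM as context, which is what makes the chat grounded in the blog's articles rather than the model's general knowledge.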