Architecture Overview

PlanToCode is a Tauri desktop application with a Rust backend, a React/Next.js frontend, and a SQLite-backed local database. Planning and analysis depend on external LLM providers configured with your API keys. This page explains how those pieces fit together so you can reason about or adapt the design.

System map snapshot

The diagram depicts the PlanToCode system architecture as four interconnected layers, arranged top to bottom:

- Desktop frontend: a React/Next.js layer containing the Plan Viewer, Terminal Panel, and Session Manager components, connected to the Tauri IPC bridge via invoke() calls and listen() subscriptions.
- Rust backend: the WorkflowOrchestrator (schedules multi-stage jobs), the TerminalSessionManager (PTY lifecycle), and the JobProcessors (FileDiscovery, PlanGeneration, TextImprovement, DeepResearch). Arrows show spawn() to job threads and emit() events back to the UI.
- Persistence: a SQLite database with the tables sessions, background_jobs, terminal_output, project_settings, and key_value_store, connected to the Rust services by bidirectional read/write arrows.
- External services: a server exposing Auth (/api/auth), an LLM proxy (/api/llm/*), and usage tracking, with route() arrows pointing out to the provider icons (OpenAI, Anthropic, Google, OpenRouter).

Data flows: task input flows down through the layers, LLM responses stream back up via SSE, and job status updates propagate via Tauri events. Connection labels include HTTPS/WSS on server connections, SQLite on the persistence layer, and FFI on Rust-to-system calls.

[Diagram: PlanToCode system map. Four-layer architecture with data flowing down and events streaming back up.]

Tauri Shell and Desktop Frontend

The desktop app bundles a React UI inside a Tauri shell. Frontend code invokes Rust commands over the Tauri IPC bridge for tasks such as file system access, terminal session management, and background job orchestration.
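
As a rough illustration, the snippet below sketches what such a call looks like from the React side, assuming Tauri v2's @tauri-apps/api package (in v1 the invoke import lives at '@tauri-apps/api/tauri'). The create_terminal_session command name and its return shape are hypothetical, not PlanToCode's actual API.

```typescript
// Hedged sketch of a frontend-to-Rust call. The command name and return
// type are placeholders; only the invoke() mechanism itself comes from Tauri.
import { invoke } from '@tauri-apps/api/core';

interface TerminalSession {
  id: string;
  cwd: string;
}

// Ask the Rust core to spawn a PTY-backed terminal session for a directory.
async function openTerminal(cwd: string): Promise<TerminalSession> {
  return invoke<TerminalSession>('create_terminal_session', { cwd });
}
```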

Rust Core and SQLite Persistence

The Rust core manages background jobs, PTY sessions, and durable state. SQLite stores sessions, job history, and terminal output, acting as a local append-only record of what the system has done.
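
For orientation, here is a sketch of the row shapes behind three of those tables as the frontend might receive them over IPC. The table names come from the system map above, but every field name is an assumption for illustration.

```typescript
// Hypothetical row shapes for the sessions, background_jobs, and
// terminal_output tables. Field names are illustrative assumptions.
interface SessionRow {
  id: string;
  projectPath: string;
  createdAt: string; // ISO-8601 timestamp
}

interface BackgroundJobRow {
  id: string;
  sessionId: string;
  kind: 'file_discovery' | 'plan_generation' | 'text_improvement' | 'deep_research';
  status: 'queued' | 'running' | 'completed' | 'failed';
}

interface TerminalOutputRow {
  sessionId: string;
  seq: number;   // monotonically increasing, so output can be replayed in order
  chunk: string; // raw PTY output, stored as text
}
```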

Background Job Orchestration

A workflow orchestrator schedules multi-stage jobs (file discovery, plan generation, research). Each stage is a Rust processor that can call out to LLM providers, read or write to the project, and emit streaming updates back to the UI.
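
The sketch below shows how the UI might start such a workflow and observe its stages, assuming a hypothetical start_workflow command and workflow-progress event; the real command and event names in PlanToCode may differ.

```typescript
// Hedged sketch: kick off a multi-stage job and subscribe to its updates.
// 'start_workflow' and 'workflow-progress' are assumed names.
import { invoke } from '@tauri-apps/api/core';
import { listen } from '@tauri-apps/api/event';

interface WorkflowProgress {
  jobId: string;
  stage: 'file_discovery' | 'plan_generation' | 'deep_research';
  status: 'running' | 'completed' | 'failed';
  partialOutput?: string; // streamed LLM tokens for the current stage
}

async function runPlanningWorkflow(taskDescription: string) {
  // Subscribe before starting the job so early events are not missed.
  const unlisten = await listen<WorkflowProgress>('workflow-progress', (event) => {
    console.log(`[${event.payload.stage}] ${event.payload.status}`);
  });

  const jobId = await invoke<string>('start_workflow', { taskDescription });
  return { jobId, unlisten };
}
```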

Multi-Model LLM Integration

The server layer routes LLM requests to the configured provider (OpenAI, Anthropic, Google, or OpenRouter), normalizes their responses, and tracks usage. The desktop app consumes a unified streaming API regardless of provider.
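
A minimal sketch of the routing idea follows, using publicly documented provider base URLs; the NormalizedChunk shape and the route() helper are assumptions, not the actual proxy code.

```typescript
// Illustrative routing table and normalized stream shape for the LLM proxy.
// The real server also attaches API keys, tracks usage, and re-streams
// responses to the desktop app as SSE.
type Provider = 'openai' | 'anthropic' | 'google' | 'openrouter';

interface NormalizedChunk {
  provider: Provider;
  model: string;
  delta: string; // text tokens, regardless of each provider's wire format
  done: boolean;
}

const UPSTREAMS: Record<Provider, string> = {
  openai: 'https://api.openai.com/v1',
  anthropic: 'https://api.anthropic.com/v1',
  google: 'https://generativelanguage.googleapis.com/v1beta',
  openrouter: 'https://openrouter.ai/api/v1',
};

// Pick the upstream base URL for a requested provider.
function route(provider: Provider): string {
  return UPSTREAMS[provider];
}
```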

How the Pieces Communicate

Commands flow from the React UI into Tauri, which invokes Rust functions. Long-running work is executed as background jobs that stream updates (including partial LLM tokens) back to the UI. SQLite acts as both a cache and a durable log so sessions and terminal history can be replayed or resumed.
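
As an example of that replay path, the snippet below resumes the most recent session from the durable log; list_sessions and get_terminal_output are hypothetical command names used only to illustrate the flow.

```typescript
// Hedged sketch of replaying terminal history from SQLite via Tauri commands.
// Both command names are assumptions for illustration.
import { invoke } from '@tauri-apps/api/core';

async function resumeLatestSession(): Promise<string | null> {
  const sessions = await invoke<{ id: string }[]>('list_sessions');
  if (sessions.length === 0) return null;

  const latest = sessions[sessions.length - 1];
  // terminal_output rows are append-only, so joining them in order
  // reconstructs the session's scrollback.
  const chunks = await invoke<string[]>('get_terminal_output', {
    sessionId: latest.id,
  });
  return chunks.join('');
}
```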

For a deeper dive, read the architecture documentation and the build-your-own guides that map these concepts to code modules.