Evolution and Tradeoffs
This page outlines the major architectural decisions behind PlanToCode and how the system changed as new workflows and constraints appeared.
Architecture timeline
The timeline below summarizes the evolution of PlanToCode across five major phases, read left to right:

- Phase 1, Editor Plugin Era: VS Code extension with inline suggestions and basic file context. Limitation: no execution control.
- Phase 2, Standalone Shell: Tauri desktop app with a dedicated UI, local SQLite, and PTY sessions. Added: job orchestration.
- Phase 3, Multi-Stage Workflows: connected stages (FileDiscovery, Scoring, Planning, Execution). Added: review gates.
- Phase 4, Multi-Provider LLM: server-side routing with unified API normalization across OpenAI, Anthropic, Google, and OpenRouter. Added: model flexibility.
- Phase 5, Current Architecture: the complete system spanning Desktop, Server, and Mobile components, with cross-platform sync.

Key architectural decisions annotated on the timeline: 'Tauri over Electron' and 'SQLite as local truth' at Phase 2, 'Background job queue' at Phase 3, 'Server-side provider routing' at Phase 4, 'Cross-platform sync' at Phase 5. Tradeoff callouts: 'Binary size vs. ecosystem' (Tauri), 'Local-first vs. cloud sync' (SQLite), 'Latency vs. flexibility' (LLM routing).

Project Origins
PlanToCode began as an experiment in separating planning from execution: use LLMs to propose concrete implementation plans, then execute them via local tools and terminals. Early prototypes were editor-centric; over time, the architecture moved toward a dedicated desktop shell with tighter control over file access and job orchestration.
Technology Choices
- Tauri over Electron: Chosen for a smaller binary, a Rust backend, and a more constrained security model. It also enables shared logic between CLI-like workflows and the desktop app.
- SQLite as the local source of truth: A file-based database is easy to ship, snapshot, and inspect. It stores sessions, job metadata, and terminal history so workflows can be resumed or audited.
- Multi-provider LLM routing: The server supports multiple providers and models, normalizing responses and tracking usage centrally. This makes it easier to swap models without rewriting the desktop client.
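To make the routing idea concrete, here is a minimal sketch of how a server might normalize responses from different providers into one shape the desktop client consumes. The type and function names (`NormalizedCompletion`, `ProviderAdapter`, `normalize`) are illustrative assumptions, not PlanToCode's actual API; the raw response shapes follow the public OpenAI and Anthropic message formats.

```typescript
// One normalized shape for all providers, so the client never sees
// provider-specific response formats.
interface NormalizedCompletion {
  provider: string; // e.g. "openai", "anthropic"
  model: string;
  text: string;
  usage: { inputTokens: number; outputTokens: number };
}

// One adapter per provider hides that provider's raw response layout.
type ProviderAdapter = (raw: any, model: string) => NormalizedCompletion;

const adapters: Record<string, ProviderAdapter> = {
  openai: (raw, model) => ({
    provider: "openai",
    model,
    text: raw.choices[0].message.content,
    usage: {
      inputTokens: raw.usage.prompt_tokens,
      outputTokens: raw.usage.completion_tokens,
    },
  }),
  anthropic: (raw, model) => ({
    provider: "anthropic",
    model,
    text: raw.content[0].text,
    usage: {
      inputTokens: raw.usage.input_tokens,
      outputTokens: raw.usage.output_tokens,
    },
  }),
};

// The router picks an adapter by provider name; usage can be tallied
// centrally here regardless of which provider served the request.
function normalize(provider: string, model: string, raw: any): NormalizedCompletion {
  const adapter = adapters[provider];
  if (!adapter) throw new Error(`unknown provider: ${provider}`);
  return adapter(raw, model);
}
```

Because swapping a model only changes which adapter runs on the server, the desktop client stays unchanged, which is the payoff the bullet above describes.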
What the system focuses on
The focus is on planning-first workflows: file discovery, multi-model plan generation, review, and execution handoff. These stages depend on external LLM providers for scoring and drafting, while the desktop app handles review, storage, and execution logs.
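The staged flow above can be sketched as a simple state machine with a review gate before execution. The stage names mirror the timeline (FileDiscovery, Scoring, Planning, Execution), but the types and the `advance` function are assumptions for illustration, not PlanToCode's real internals; the discovery and scoring bodies are stubbed where the real system would scan the repo and call an LLM.

```typescript
type Stage = "FileDiscovery" | "Scoring" | "Planning" | "Execution";

interface WorkflowState {
  stage: Stage;
  candidateFiles: string[];
  scoredFiles: { path: string; score: number }[];
  plan?: string;
  approved: boolean; // set by human review before execution may proceed
}

// Each call moves the workflow one stage forward; Execution is gated on
// explicit approval, which is where the review step fits.
function advance(state: WorkflowState): WorkflowState {
  switch (state.stage) {
    case "FileDiscovery":
      // Stub: the real system would discover relevant files in the repo.
      return { ...state, candidateFiles: ["src/main.rs", "src/db.rs"], stage: "Scoring" };
    case "Scoring":
      // Stub: an LLM would score relevance; a constant stands in here.
      return {
        ...state,
        scoredFiles: state.candidateFiles.map((path) => ({ path, score: 0.5 })),
        stage: "Planning",
      };
    case "Planning":
      // Stub: multi-model plan generation would produce the draft.
      return { ...state, plan: "draft plan", stage: "Execution" };
    case "Execution":
      if (!state.approved) throw new Error("review gate: plan not approved");
      return state; // hand off to local tools / terminal execution
  }
}
```

Modeling the gate as an explicit check means an unreviewed plan can never reach execution by accident, matching the review-gate decision noted at Phase 3 of the timeline.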