Implementation Plans

How PlanToCode enables confident adoption of AI coding agents through human-in-the-loop review, granular file-by-file plans, and clear handoff workflows.

Review and approve every plan before execution. File-by-file granularity keeps scope explicit and changes aligned with your requirements.

Human-in-the-Loop Governance

PlanToCode keeps planning human-in-the-loop so you can review, edit, and decide when to hand off a plan for execution.

Plans are designed for a structured review workflow before any code modifications begin:

  • Review: Plans open in the Monaco editor, where reviewers can examine every proposed change with full syntax highlighting and professional editing tools.
  • Edit: Directly modify steps, adjust approaches, add constraints, or remove risky operations using familiar VS Code-style editing features.
  • Request Changes: Generate alternative plans or merge drafts with custom instructions to converge on the approach you want.
  • Approve: When you are ready, hand the plan off to a coding agent or developer for execution.
  • Discard: If a draft isn't useful, delete it from the session list.

This workflow keeps execution aligned with the plan you reviewed and helps prevent surprise changes.

File-by-File Granularity

Implementation plans break development tasks down file by file, with exact file paths that match the project's repository structure. This granularity makes scope explicit before any code is touched.

Each step in a plan explicitly declares which files will be:

  • Modified (with specific line ranges and changes described)
  • Created (with complete file paths and initial content structure)
  • Deleted (with justification and dependency analysis)
  • Referenced (for context but not modified)
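
Conceptually, these declarations might map onto a typed structure like the sketch below. The field and type names here are illustrative assumptions, not PlanToCode's actual schema:

// Hypothetical shape of a plan step's file declarations.
// Field names are illustrative assumptions, not the real schema.
interface PlanStepFile {
  path: string;                             // exact repository-relative path
  action: "modify" | "create" | "delete" | "reference";
  lineRange?: [start: number, end: number]; // for modifications
  rationale: string;                        // described change or justification
}

interface PlanStep {
  title: string;
  files: PlanStepFile[];
}

const exampleStep: PlanStep = {
  title: "Add token rotation to the session manager",
  files: [
    {
      path: "src/auth/session_manager.rs",
      action: "modify",
      lineRange: [45, 67],
      rationale: "Add token rotation to session refresh",
    },
    {
      path: "src/auth/token_store.rs",
      action: "create",
      rationale: "New token store backing rotation",
    },
    {
      path: "src/auth/mod.rs",
      action: "reference",
      rationale: "Context only; not modified",
    },
  ],
};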

Reviewers can immediately see whether critical legacy code will be modified, whether breaking changes are proposed, or whether the plan touches files that require additional scrutiny.

The file-by-file approach also enables precise transmission of approved plans to coding agents. Instead of vague instructions like "update the authentication system," agents receive exact specifications: "modify src/auth/session_manager.rs lines 45-67 to add token rotation, create src/auth/token_store.rs with the following structure..."

Plan Data Structure

Implementation plans are stored as raw LLM responses with associated metadata. The response text is preserved exactly as generated, while structured metadata tracks the plan context and usage.

Metadata Fields

  • planTitle - Generated or user-provided title for the plan
  • summary - Human-readable summary of the plan
  • sessionName - Name of the session that generated the plan
  • isStructured - true for implementation_plan jobs; false for merge outputs
  • isStreaming - False for completed plans (true during generation)
  • planData - Contains agent_instructions (optional) and steps array

Metadata Example

{
  "planTitle": "Authentication System Refactor",
  "summary": "Implementation plan generated",
  "sessionName": "my-project",
  "isStructured": true,
  "isStreaming": false,
  "planData": {
    "agent_instructions": null,
    "steps": []
  }
}
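
Expressed as a TypeScript shape (illustrative only, derived from the fields listed above), the metadata looks like this:

// TypeScript shape mirroring the metadata fields above (illustrative only).
interface PlanMetadata {
  planTitle: string;
  summary: string;
  sessionName: string;
  isStructured: boolean; // true for implementation_plan jobs
  isStreaming: boolean;  // true only while generation is in progress
  planData: {
    agent_instructions: string | null;
    steps: unknown[];
  };
}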

Implementation plan structure

XML format for implementation plans with file-by-file granularity and metadata.

Diagram: implementation plan XML structure showing steps, files, and dependency tracking.

Where the plans come from

Each plan corresponds to a background job in the current session. The panel subscribes to plan data, keeps track of which plan is currently open, and exposes navigation between earlier and newer jobs. This behaviour lives inside useImplementationPlansLogic and the surrounding panel component.
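
A minimal sketch of what that hook's shape might look like, assuming a React panel; everything beyond the hook's name is an illustrative assumption:

import { useState } from "react";

interface PlanJob {
  jobId: string;
  planTitle: string;
  content: string; // raw LLM response text
}

// Sketch only: the real hook also subscribes to plan data and tracks
// per-plan state; jobs are assumed to arrive newest-first.
function useImplementationPlansLogic(jobs: PlanJob[]) {
  const [openIndex, setOpenIndex] = useState(0);
  return {
    activePlan: jobs[openIndex],
    openEarlier: () => setOpenIndex((i) => Math.min(i + 1, jobs.length - 1)),
    openNewer: () => setOpenIndex((i) => Math.max(i - 1, 0)),
  };
}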

ImplementationPlanProcessor handles plan generation. It reads relevant files, optionally generates a directory tree based on selected root directories, and assembles a unified prompt for the LLM.

Plan responses are stored in the background_jobs table with metadata including planTitle, summary, sessionName, and token usage. The raw LLM response is preserved for review and debugging.

Plans stream via the LlmTaskRunner with real-time progress events. Token warnings are logged for prompts exceeding 100k tokens, but processing continues with the full content.

Plan Generation Pipeline

The ImplementationPlanProcessor orchestrates plan generation by loading file contents, building context, and streaming results through the LLM task runner.

Inputs: Session context, task description, selected relevant files, optional directory tree (configurable via include_project_structure flag), and web search flag for external research.

Prompt assembly: Uses prompt_utils::build_unified_prompt to combine task description, full file contents (no truncation), and directory tree into a model-specific format with estimated token counts.
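
A rough sketch of that assembly in TypeScript (the real build_unified_prompt is backend code, so the structure below is an assumption for illustration):

interface PromptParts {
  taskDescription: string;
  files: { path: string; content: string }[]; // full contents, no truncation
  directoryTree?: string; // present when include_project_structure is set
}

function buildUnifiedPrompt({ taskDescription, files, directoryTree }: PromptParts): string {
  const sections = [
    `## Task\n${taskDescription}`,
    directoryTree ? `## Project Structure\n${directoryTree}` : "",
    ...files.map((f) => `## File: ${f.path}\n${f.content}`),
  ];
  return sections.filter(Boolean).join("\n\n");
}

// Crude token estimate (~4 characters per token) for the reported counts.
const estimateTokens = (prompt: string): number => Math.ceil(prompt.length / 4);

The ~4 characters-per-token heuristic is only a rough guide; actual counts depend on the model's tokenizer.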

Output: Raw LLM response stored as JobResultData::Text. Metadata includes planTitle, summary, token usage, cache statistics, and actual cost.

Display: Responses stream to the UI via progress events. Plans are rendered in a Monaco-based VirtualizedCodeViewer supporting syntax highlighting and copy actions.

Reviewing plans with Monaco

Plan content is rendered through the shared VirtualizedCodeViewer, which wraps Monaco Editor. The viewer automatically detects common languages, supports copy-to-clipboard actions, virtualises very large plans, and offers optional metrics such as character counts and syntax-aware highlighting.
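
A hypothetical usage sketch; the component's actual props are not documented here, so these names are assumptions:

import React from "react";
// Hypothetical import path; the real component lives in the app's shared UI code.
import { VirtualizedCodeViewer } from "./VirtualizedCodeViewer";

function PlanViewer({ content }: { content: string }) {
  return (
    <VirtualizedCodeViewer
      content={content}    // raw plan text
      language="markdown"  // omit to rely on automatic language detection
      showMetrics          // enable optional character-count metrics
    />
  );
}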

When a plan is opened, the panel resolves the active plan by job identifier, passes the content to Monaco, and lets reviewers move between neighbouring jobs without losing the currently open modal.

Context and Metadata for Corporate Governance

The panel stores which repository roots were selected during the file discovery workflow so that follow-up actions reuse the same scope. It also records plan-specific metadata, such as the project directory and any prepared prompt content, so downstream prompts can be generated or copied without recomputing the workflow.

Token estimation runs before prompts are copied. The panel calls the token estimation command with the project directory, selected files, and the currently chosen model, surfacing both system and user prompt totals so teams can stay under model limits.
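
A hedged sketch of that call, assuming a Tauri-style command bridge; the command name and payload shape are illustrative assumptions:

import { invoke } from "@tauri-apps/api/core";

interface TokenEstimate {
  systemPromptTokens: number;
  userPromptTokens: number;
}

// Hypothetical command name and payload; the guide does not name the command.
async function estimatePromptTokens(
  projectDirectory: string,
  selectedFiles: string[],
  model: string,
): Promise<TokenEstimate> {
  return invoke<TokenEstimate>("estimate_prompt_tokens", {
    projectDirectory,
    selectedFiles,
    model,
  });
}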

Plan metadata persists with each job so you can review which inputs were used (task description, selected roots/files, model settings) and compare drafts later.

Working with multiple plans

Plans can be merged, deleted, or reopened later. The panel keeps a list of selected plan identifiers, manages a dedicated modal for terminal output tied to a plan, and exposes navigation helpers so reviewers can page through earlier plans without closing the viewer. Terminal access, prompt copy controls, and merge instructions all share the same job identifier so plan history stays consistent.

Ready to adopt AI coding agents safely?

Human-in-the-loop implementation plans are available inside the PlanToCode desktop application. Download the build for your platform to experience safe, governed AI-assisted development.