
Implementation Plans

How PlanToCode enables confident adoption of AI coding agents through human-in-the-loop governance, granular file-by-file plans, and comprehensive review workflows.

Review and approve every plan before execution. Human-in-the-loop governance with file-by-file granularity ensures AI-generated changes align with corporate requirements and team workflows.

Human-in-the-Loop Governance

PlanToCode implements a comprehensive human-in-the-loop (HITL) workflow that ensures team leads and stakeholders retain full control over every aspect of AI-generated implementation plans. This governance model prevents the regressions, bugs, and unintended modifications that can occur when AI coding agents operate autonomously.

Every plan must pass through a structured review workflow before any code modifications begin:

  • Review: Plans open in the Monaco editor, where reviewers can examine every proposed change with full syntax highlighting and professional editing tools.
  • Edit: Stakeholders can directly modify steps, adjust approaches, add constraints, or remove risky operations using familiar VS Code editing features.
  • Request Changes: Teams can request modifications from the AI system, generating alternative approaches or merging multiple plans with custom instructions.
  • Approve: Only after explicit approval can plans be securely transmitted to the chosen coding agent or assigned software developer for execution.
  • Reject: Plans that don't meet requirements can be rejected entirely, with full audit trails maintained for compliance and learning.

This workflow ensures all development efforts align with corporate product requirements, team workflows, and business objectives. No code changes occur without explicit human approval.
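The review lifecycle above can be sketched as a small state machine. The sketch below is a hypothetical TypeScript model: the status and action names are illustrative, not PlanToCode's actual schema.

```typescript
// Hypothetical model of the review-state transitions described above.
type PlanStatus = "in_review" | "changes_requested" | "approved" | "rejected";
type ReviewAction = "edit" | "request_changes" | "approve" | "reject";

// Allowed transitions: execution is only reachable from "approved".
const transitions: Record<PlanStatus, Partial<Record<ReviewAction, PlanStatus>>> = {
  in_review: {
    edit: "in_review",
    request_changes: "changes_requested",
    approve: "approved",
    reject: "rejected",
  },
  changes_requested: { edit: "in_review", approve: "approved", reject: "rejected" },
  approved: {}, // terminal: plan is transmitted to the agent
  rejected: {}, // terminal: kept only for the audit trail
};

function applyReview(status: PlanStatus, action: ReviewAction): PlanStatus {
  const next = transitions[status][action];
  if (next === undefined) {
    throw new Error(`Action "${action}" is not allowed while plan is "${status}"`);
  }
  return next;
}
```

Modelling the workflow this way makes the core guarantee explicit: there is no transition from any state into execution except through "approved".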

File-by-File Granularity

Implementation plans use a highly granular structure that breaks down development tasks on a file-by-file basis, with exact file paths corresponding to the project's repository structure. This granularity is fundamental to preventing regressions and enabling confident adoption of AI coding agents in corporate environments.

Each step in a plan explicitly declares which files will be:

  • Modified (with specific line ranges and changes described)
  • Created (with complete file paths and initial content structure)
  • Deleted (with justification and dependency analysis)
  • Referenced (for context but not modified)
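A plan step's file declarations might be modelled as follows. The field names are illustrative assumptions, not PlanToCode's actual plan schema.

```typescript
// Hypothetical shape of one step in an implementation plan.
type FileAction = "modify" | "create" | "delete" | "reference";

interface PlanFileEntry {
  path: string; // exact repository-relative path, e.g. "src/auth/session_manager.rs"
  action: FileAction;
  lineRange?: [number, number]; // only meaningful for "modify"
  rationale?: string; // justification, e.g. for a "delete"
}

interface PlanStep {
  id: string;
  summary: string;
  files: PlanFileEntry[];
}

// Reviewer-facing impact summary: which files will this step actually touch?
function touchedFiles(step: PlanStep): string[] {
  return step.files
    .filter((f) => f.action !== "reference") // referenced files are context only
    .map((f) => f.path);
}
```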

This level of detail makes the impact of proposed changes crystal clear before any code is touched. Team leads can immediately identify if critical legacy code will be modified, if breaking changes are proposed, or if the plan touches files that require additional scrutiny.

The file-by-file approach also enables precise transmission of approved plans to coding agents. Instead of vague instructions like "update the authentication system," agents receive exact specifications: "modify src/auth/session_manager.rs lines 45-67 to add token rotation, create src/auth/token_store.rs with the following structure..."
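The translation from a granular file entry into an exact agent instruction could look like the following sketch; the field and function names are assumptions for illustration.

```typescript
// Hypothetical serializer from a plan's file entry to an agent instruction.
interface FileChange {
  action: "modify" | "create" | "delete";
  path: string;
  lineRange?: [number, number]; // present only for line-scoped modifications
  description: string;
}

function toInstruction(change: FileChange): string {
  const range = change.lineRange ? ` lines ${change.lineRange[0]}-${change.lineRange[1]}` : "";
  return `${change.action} ${change.path}${range}: ${change.description}`;
}
```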

Where Plans Come From

Each plan corresponds to a background job in the current session. The panel subscribes to plan data, keeps track of which plan is currently open, and exposes navigation between earlier and newer jobs. This behaviour lives inside useImplementationPlansLogic and the surrounding panel component.
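Stripped of framework details, that navigation behaviour can be sketched as plain state transitions. This is an illustrative model, not the actual useImplementationPlansLogic implementation.

```typescript
// Illustrative model: plans map to background job ids, and the panel
// tracks which one is currently open.
interface PlansState {
  jobIds: string[]; // background jobs in the current session, oldest first
  openIndex: number; // index of the currently open plan
}

// Move to the previous job, clamped at the oldest plan.
function openOlder(state: PlansState): PlansState {
  return { ...state, openIndex: Math.max(0, state.openIndex - 1) };
}

// Move to the next job, clamped at the newest plan.
function openNewer(state: PlansState): PlansState {
  return { ...state, openIndex: Math.min(state.jobIds.length - 1, state.openIndex + 1) };
}

function currentJobId(state: PlansState): string {
  return state.jobIds[state.openIndex];
}
```

Clamping at both ends means reviewers can page back and forth freely without the viewer ever landing on a nonexistent job.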

Reviewing Plans with Monaco

Plan content is rendered through the shared VirtualizedCodeViewer, which wraps Monaco Editor. The viewer automatically detects common languages, supports copy-to-clipboard actions, virtualises very large plans, and offers optional metrics such as character counts and syntax-aware highlighting.

When a plan is opened, the panel resolves the active plan by job identifier, passes the content to Monaco, and lets reviewers move between neighbouring jobs without losing the currently open modal.
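The viewer's language detection can be illustrated with a simple extension lookup. The mapping below is a hypothetical sketch, not VirtualizedCodeViewer's actual table.

```typescript
// Illustrative extension-to-language table for syntax highlighting.
const extensionToLanguage: Record<string, string> = {
  rs: "rust",
  ts: "typescript",
  tsx: "typescript",
  py: "python",
  md: "markdown",
};

// Fall back to plain text when the extension is unknown.
function detectLanguage(path: string): string {
  const ext = path.split(".").pop() ?? "";
  return extensionToLanguage[ext] ?? "plaintext";
}
```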

Context and Metadata for Corporate Governance

The panel stores which repository roots were selected during the file discovery workflow so that follow-up actions reuse the same scope. It also records plan-specific metadata, such as the project directory and any prepared prompt content, so downstream prompts can be generated or copied without recomputing the workflow.

Token estimation runs before prompts are copied. The panel calls the token estimation command with the project directory, selected files, and the currently chosen model, surfacing both system and user prompt totals so teams can stay under model limits.
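The pre-copy token check can be sketched as follows; the estimate shape and the limit comparison are assumptions for illustration, not PlanToCode's actual token estimation command.

```typescript
// Hypothetical result surfaced by the token estimation step.
interface TokenEstimate {
  systemPromptTokens: number;
  userPromptTokens: number;
}

function totalTokens(estimate: TokenEstimate): number {
  return estimate.systemPromptTokens + estimate.userPromptTokens;
}

// True when the combined prompt fits within the chosen model's context limit.
function fitsModel(estimate: TokenEstimate, modelLimit: number): boolean {
  return totalTokens(estimate) <= modelLimit;
}
```

Surfacing both totals separately, as the panel does, lets teams see whether it is the system prompt or the selected files that is pushing a prompt toward the model's limit.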

All metadata persists with the plan for audit purposes. Corporate teams can track which stakeholders reviewed which plans, what modifications were requested, and the complete reasoning chain from initial task description through file discovery to final approved plan.

Working with Multiple Plans

Plans can be merged, deleted, or reopened later. The panel keeps a list of selected plan identifiers, manages a dedicated modal for terminal output tied to a plan, and exposes navigation helpers so reviewers can page through earlier plans without closing the viewer. Terminal access, prompt copy controls, and merge instructions all share the same job identifier so audit history stays consistent.
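Selection tracking for merge and delete actions can be sketched as a simple toggle over job identifiers; this is an illustrative model, not the panel's actual implementation.

```typescript
// Illustrative toggle: add a plan's job id to the selection if absent,
// remove it if present. Returns a new set so state stays immutable.
function toggleSelection(selected: ReadonlySet<string>, jobId: string): Set<string> {
  const next = new Set(selected);
  if (next.has(jobId)) {
    next.delete(jobId);
  } else {
    next.add(jobId);
  }
  return next;
}
```

Keying everything on the job identifier, as the panel does for terminal access, prompt copying, and merging, is what keeps the audit history consistent across those actions.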

Ready to adopt AI coding agents safely?

Human-in-the-loop implementation plans are available inside the PlanToCode desktop application. Download the build for your platform to experience safe, governed AI-assisted development.