Reference

Prompt Types & Templates

Catalog of prompt-driven job types and template assembly.

8 min read

Every LLM-backed job in PlanToCode uses a structured prompt built from templates. This document catalogs the job types and explains how prompts are assembled.

Job Type Catalog

implementation_plan

Implementation Plan

Generates file-by-file implementation plans with XML structure. Uses streaming for progressive display.

implementation_plan_merge

Plan Merge

Combines multiple plans with user instructions. Source plans wrapped in XML tags.

text_improvement

Text Improvement

Refines selected text while preserving formatting. Non-streaming for quick results.

root_folder_selection

Root Folder Selection

Analyzes directory tree to select relevant project roots. Returns JSON array.

regex_file_filter

Regex File Filter

Generates regex patterns for file filtering based on task description.

file_relevance_assessment

File Relevance Assessment

Scores file content relevance to task. Processes in batches.

extended_path_finder

Extended Path Finder

Discovers related files through imports and dependencies.

web_search_prompts

Web Search Prompts

Generates research queries for deep research workflow.

video_analysis

Video Analysis

Analyzes screen recordings for UI state and action sequences.
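
For orientation, the job-type identifiers above can be modeled as a single union type. The sketch below is illustrative only; the actual type names in the PlanToCode codebase may differ:

// Illustrative union of the job-type identifiers cataloged above.
// The type name is an assumption, not the actual codebase API.
type PromptJobType =
  | "implementation_plan"
  | "implementation_plan_merge"
  | "text_improvement"
  | "root_folder_selection"
  | "regex_file_filter"
  | "file_relevance_assessment"
  | "extended_path_finder"
  | "web_search_prompts"
  | "video_analysis";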

Template Structure

Prompts are assembled from system templates and user content:

Example template structure:

<system_prompt>
  You are an AI assistant that generates implementation plans.
  [template content from server]
</system_prompt>

<task>
  [user's task description]
</task>

<files>
  [selected file paths and content]
</files>

<directory_tree>
  [project structure]
</directory_tree>

Prompt assembly flow

How templates combine with user content to form complete prompts.

Prompt template assembly diagram (placeholder).

Assembly Process

  1. Processor retrieves template ID from task model config
  2. System prompt template loaded from server cache
  3. User content wrapped in semantic XML tags
  4. Context (files, tree) added based on job type
  5. Complete prompt stored in job record before sending
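
The sketch below walks through steps 3-5 in TypeScript, assuming the system prompt has already been resolved and loaded (steps 1-2). The interface, function names, and per-file tag layout are illustrative, not the actual processor API:

// Sketch of prompt assembly: wrap user content and context in semantic
// XML tags. Names and tag details are assumptions.
interface PromptContext {
  systemPrompt: string;                        // loaded from the server cache (steps 1-2)
  task: string;                                // user's task description
  files?: { path: string; content: string }[]; // added only for job types that need them
  directoryTree?: string;                      // added only for job types that need it
}

function assemblePrompt(ctx: PromptContext): string {
  const parts: string[] = [
    `<system_prompt>\n${ctx.systemPrompt}\n</system_prompt>`,
    `<task>\n${ctx.task}\n</task>`,
  ];

  // Step 4: context blocks are included based on job type.
  if (ctx.files?.length) {
    const body = ctx.files
      .map((f) => `<file path="${f.path}">\n${f.content}\n</file>`)
      .join("\n");
    parts.push(`<files>\n${body}\n</files>`);
  }
  if (ctx.directoryTree) {
    parts.push(`<directory_tree>\n${ctx.directoryTree}\n</directory_tree>`);
  }

  // Step 5: the caller stores the complete prompt in the job record before sending.
  return parts.join("\n\n");
}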

Server-Side Configuration

Templates and model settings are configured server-side:

For each task type, task_model_config defines: default_model, allowed_models, system_prompt_template_id, max_tokens, temperature
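
As a sketch, the configuration maps onto a type like the following; the field names come from the list above, while the value types are assumptions:

// Sketch of a task_model_config record. Field names are from this page;
// value types are assumptions.
interface TaskModelConfig {
  default_model: string;
  allowed_models: string[];
  system_prompt_template_id: string;
  max_tokens: number;
  temperature: number;
}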

Token Guardrails

Each task type has token limits to prevent context overflow:

  • max_tokens_input: Maximum prompt size
  • max_tokens_output: Maximum response size
  • Validation before sending prevents wasted API calls
  • UI shows token count and warns when approaching limits
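
A minimal sketch of the pre-send check, assuming a tokenizer function is available (countTokens is a stand-in, and the 90% warning threshold is an assumption):

// Validate an assembled prompt against the input limit before calling the API.
// countTokens is a placeholder for whatever tokenizer the app uses.
function validatePromptSize(
  prompt: string,
  maxTokensInput: number,
  countTokens: (text: string) => number,
): { ok: boolean; used: number; warn: boolean } {
  const used = countTokens(prompt);
  return {
    ok: used <= maxTokensInput,         // block the send if over the limit
    used,                               // shown in the UI as the token count
    warn: used > maxTokensInput * 0.9,  // assumed threshold for the UI warning
  };
}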

Template Versioning

System prompt templates are versioned for reproducibility. Each job records the template ID used, enabling audit and comparison of results across template versions.
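
For illustration, the audit fields on a job record might look like the following; the names are hypothetical, not the actual schema:

// Hypothetical audit fields recorded on each job for reproducibility.
interface JobTemplateAudit {
  jobId: string;
  systemPromptTemplateId: string; // template actually used for this job
  model: string;                  // model the prompt was sent to
  createdAt: string;              // ISO timestamp, for comparing runs across versions
}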

Design Notes

  • XML tags provide clear boundaries for LLM parsing
  • Semantic naming (task, files, context) aids model understanding
  • Templates avoid instruction injection by sanitizing user input
  • Streaming jobs use end tags for completion detection
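
As an example of the last point, a streaming consumer can treat the arrival of a known closing tag as the completion signal. The tag name and buffering below are assumptions:

// Sketch: detect stream completion by watching for a closing XML tag.
// The specific tag is an assumption for illustration.
function createCompletionDetector(endTag: string) {
  let buffer = "";
  return {
    // Feed each streamed chunk; returns true once the end tag has arrived.
    push(chunk: string): boolean {
      buffer += chunk;
      return buffer.includes(endTag);
    },
    text(): string {
      return buffer;
    },
  };
}

// Usage: call detector.push(chunk) inside the stream loop.
const detector = createCompletionDetector("</implementation_plan>");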

See job processing in action

Learn how these prompts flow through the job system.