---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
---

The user input to you can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).

User input:

$ARGUMENTS

Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.

Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases.

Execution steps:

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from the repo root **once** (combined `--json --paths-only` mode; PowerShell equivalent: `-Json -PathsOnly`). Parse the minimal JSON payload fields:
   - `FEATURE_DIR`
   - `FEATURE_SPEC`
   - (Optionally capture `IMPL_PLAN` and `TASKS` for future chained flows.)
   - If JSON parsing fails, abort and instruct the user to re-run `/specify` or verify the feature branch environment.

2. Load the current spec file. Perform a structured ambiguity & coverage scan using the taxonomy below. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output the raw map unless no questions will be asked).
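The parse-and-validate behavior required in step 1 can be sketched as follows. This is a minimal illustration only: the payload field names come from the step above, but the example paths and the exact error wording are assumptions, not output of the real script.

```python
import json
import sys


def load_paths(raw: str) -> dict:
    """Validate the minimal JSON payload expected from check-prerequisites.sh --json --paths-only."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        # Mirrors the abort rule in step 1.
        sys.exit("JSON parsing failed: re-run /specify or verify the feature branch environment.")
    missing = [key for key in ("FEATURE_DIR", "FEATURE_SPEC") if key not in payload]
    if missing:
        sys.exit(f"Payload is missing required fields: {missing}")
    return payload


# Hypothetical payload; real paths depend on the active feature branch.
paths = load_paths('{"FEATURE_DIR": "specs/001-demo", "FEATURE_SPEC": "specs/001-demo/spec.md"}')
print(paths["FEATURE_SPEC"])  # specs/001-demo/spec.md
```

Note that `IMPL_PLAN` and `TASKS` are deliberately not required here, matching their optional status in step 1.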
   Functional Scope & Behavior:
   - Core user goals & success criteria
   - Explicit out-of-scope declarations
   - User roles / personas differentiation

   Domain & Data Model:
   - Entities, attributes, relationships
   - Identity & uniqueness rules
   - Lifecycle/state transitions
   - Data volume / scale assumptions

   Interaction & UX Flow:
   - Critical user journeys / sequences
   - Error/empty/loading states
   - Accessibility or localization notes

   Non-Functional Quality Attributes:
   - Performance (latency, throughput targets)
   - Scalability (horizontal/vertical, limits)
   - Reliability & availability (uptime, recovery expectations)
   - Observability (logging, metrics, tracing signals)
   - Security & privacy (authN/Z, data protection, threat assumptions)
   - Compliance / regulatory constraints (if any)

   Integration & External Dependencies:
   - External services/APIs and failure modes
   - Data import/export formats
   - Protocol/versioning assumptions

   Edge Cases & Failure Handling:
   - Negative scenarios
   - Rate limiting / throttling
   - Conflict resolution (e.g., concurrent edits)

   Constraints & Tradeoffs:
   - Technical constraints (language, storage, hosting)
   - Explicit tradeoffs or rejected alternatives

   Terminology & Consistency:
   - Canonical glossary terms
   - Avoided synonyms / deprecated terms

   Completion Signals:
   - Acceptance criteria testability
   - Measurable Definition of Done style indicators

   Misc / Placeholders:
   - TODO markers / unresolved decisions
   - Ambiguous adjectives ("robust", "intuitive") lacking quantification

   For each category with Partial or Missing status, add a candidate question opportunity unless:
   - Clarification would not materially change implementation or validation strategy
   - Information is better deferred to planning phase (note internally)

3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
   - Maximum of 5 total questions across the whole session.
   - Each question must be answerable with EITHER:
     * A short multiple-choice selection (2–5 distinct, mutually exclusive options), OR
     * A one-word / short-phrase answer (explicitly constrain: "Answer in <=5 words").
   - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
   - Ensure category coverage balance: attempt to cover the highest-impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
   - Exclude questions already answered, trivial stylistic preferences, and plan-level execution details (unless blocking correctness).
   - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
   - If more than 5 categories remain unresolved, select the top 5 by an (Impact * Uncertainty) heuristic.

4. Sequential questioning loop (interactive):
   - Present EXACTLY ONE question at a time.
   - For multiple-choice questions, render the options as a Markdown table:

     | Option | Description |
     |--------|-------------|
     | A |