
I wanted to put down my workflow and all the details for using SpecKit with a superior reasoning and planning LLM (like Opus 4.5 via Claude Code, or Gemini 3 via the Gemini CLI) for the initial phases, then switching to an agentic AI IDE like Cursor or GitHub Copilot in VS Code for the final coding phases. I’m using Git and the GitHub CLI for repository actions.

This workflow does not cover authentication. You still need to configure and log in to each tool separately; they’ll generally prompt you when you need to do so.

Install dependencies

You need to install the following tools. I use Winget where I can, since I’m on a Windows machine. The same tools are available on macOS and Linux; only the installation method differs.

  • Python 3.11+ for running the SpecKit CLI.
  • UV is the Python Package Manager that SpecKit prefers.
  • GitHub CLI for repository management.
  • Claude Code or Gemini CLI for running the planning phase.
  • Visual Studio Code or Cursor for running the execution phase.
  • SpecKit Companion (if using VS Code) for running SpecKit inside the editor.

You can install all this using the following:

winget install --id Python.Python.3.12
winget install --id astral-sh.uv
winget install --id GitHub.CLI
winget install --id Anthropic.ClaudeCode
winget install --id Microsoft.VisualStudioCode
winget install --id Microsoft.VisualStudioCode.CLI

I prefer GitHub Copilot over Cursor, but you can swap that out. SpecKit Companion is available on the Visual Studio Code extension marketplace.

Once you’ve done all that, you’ll also need to install SpecKit with the following command:

uv tool install specify-cli --from git+https://github.com/github/spec-kit.git

# Verify that the required tools are available
specify check

Check out the Quick Start Guide for Spec Kit to familiarize yourself with the application and also to verify that the install instructions are still correct.

Initialize a project

Start by using the GitHub CLI to create a new remote repository and Spec Kit to scaffold the SDD (Spec Driven Development) files locally:

# Create a remote repository on GitHub (private or public as desired)
gh repo create my-project --public --clone

# Change directory into the new project folder
cd my-project

At this point, I like to write a quick README.md that describes the project as a whole. This is optional, though. Once done, use the installed specify command to create the necessary templates and structure:

specify init .

This will generate the .specify folder in your project.
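The exact contents depend on the Spec Kit version and the coding agent you select during init, but the scaffold typically looks roughly like this:

```text
.specify/
├── memory/
│   └── constitution.md    # project principles (filled in by /speckit.constitution)
├── scripts/               # helper scripts used by the slash commands
└── templates/             # templates for the spec, plan, and tasks files
```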

Specification and Planning phase

Use Claude Code for the high-level reasoning and planning phases. Claude’s large context window and strong reasoning (Opus 4.5) excel at creating accurate, unambiguous artifacts. You could also use Gemini 3 here (I haven’t, so can’t comment on quality).

Establish a constitution

The constitution defines the non-negotiable rules for your project (e.g., tech stack, testing standards).

# Start the claude code interactive session
claude

# You may need to sign in with /login
/login

# Run the constitution command with your prompt
/speckit.constitution Create principles focused on using dotnet 10 with Blazor and MudBlazor, 90% unit test coverage for all logic, mermaid for specification diagrams

This generates a file .specify/memory/constitution.md - you should review this file and make any changes you need.

Create the specification

The specification is the “what” and “why” of your project or feature.

# Specify the feature name (PowerShell; on macOS/Linux use: export SPECIFY_FEATURE="001-mvp")
$env:SPECIFY_FEATURE="001-mvp"

# Run Claude Code
claude

# Build the specification
/speckit.specify Build a task management app with user authentication, real-time collaboration, and mobile support. Users should be able to create projects, assign tasks, and track progress with Kanban boards.

This generates the file specs/001-mvp/spec.md - the core blueprint. The specification translates your high-level natural language prompt into a structured document containing: Goal, User Scenarios, Functional Requirements, and Acceptance Criteria. You must ensure the agent accurately captures all of your intent and doesn’t introduce unwanted complexity or omit necessary features.
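The exact headings depend on the Spec Kit template version, but the generated document is shaped roughly like this (the identifiers and wording below are illustrative, not what the agent will actually produce):

```markdown
# Feature Specification: Task Management App

## Goal
A one-paragraph statement of what the feature delivers and why.

## User Scenarios
- As a user, I can create a project and invite collaborators.
- As a user, I can move tasks across a Kanban board.

## Functional Requirements
- FR-001: Users MUST authenticate before accessing projects.
- FR-002: Task updates MUST be visible to collaborators in real time.

## Acceptance Criteria
- Given a signed-in user, when they create a task, it appears on the board.
```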

Review checklist:

  • Completeness: Does the specification cover everything you intended for the feature?
  • Accuracy: Are the requirements correctly stated?
  • Unambiguity: Is every requirement clear?
  • Adherence to constitution: Does the specification violate any rule you set in the constitution?

Now that you have a base specification, you can clarify and refine the specification using generative AI:

/speckit.clarify

The clarify command doesn’t generate a new file. Instead, it generates direct output in the Claude Code terminal (or chat window), allowing you to further refine the specification through a question and answer session.

Review checklist:

  • Understanding: Does the question highlight a genuine missing piece of information?
  • Completeness: Your responsibility is then to go back and edit the spec file to answer these questions.
  • Iteration: You may run /speckit.clarify multiple times until the agent returns minimal or no questions, indicating the specification is robust and complete.

If the clarify stage reveals a fundamental misunderstanding, you should not just edit the spec file manually. Re-run the specify command with a more detailed prompt to regenerate the baseline structure before refining it.

After final edits, do one last review of the specification file before committing it.

Create the technical plan

The plan is the “how” of the project or feature.

/speckit.plan Build the base ASP.NET Core application with the health endpoints.

The /speckit.plan command instructs the AI agent to read the approved specification and the constitution and generate a technical plan. This is placed in specs/001-mvp/plan.md. Depending on the complexity of the prompt, the agent may also create supplementary files, such as:

  • data-model.md
  • api-spec.md
  • architecture.png

Review checklist:

The plan is where the AI defines the architecture, so your review must focus on technical suitability and adherence to standards.

  • Architecture & Design: Does the plan use the correct design patterns? Is the proposed solution over-engineered or under-engineered for the feature size?
  • Tech Stack Adherence: Does the plan strictly follow the rules laid out in your constitution.md? For example, if you forbade ORMs or required a specific testing framework, the plan must reflect this.
  • Data Models: Are the proposed database schemas (tables, fields, relationships) correct, efficient, and complete for the feature’s needs?
  • API Contracts: Are the proposed API endpoints, request/response bodies, and error codes logical and consistent with existing system contracts?
  • Constraints: Does the plan account for non-functional requirements from the specification, such as performance targets, security, and scalability?

Break the plan into granular tasks

/speckit.tasks Break the plan into a list of atomic, testable tasks.

The output is the specs/001-mvp/tasks.md file, which is a sequential checklist of small work items (e.g. T001: Create data model).
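The task file is a plain markdown checklist; entries look something like this (the IDs and wording are illustrative):

```markdown
- [ ] T001: Create data model for projects and tasks
- [ ] T002: Implement the project creation endpoint
- [ ] T003: Add unit tests for the data model
```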

This concludes the planning phase; exit Claude Code and commit the final artifacts before switching:

git add specs/ .specify/
git commit -m "feat: [MVP] MVP Specification and technical plan"

Implementation phase

Now switch to the AI IDE that you prefer for implementation speed. Spec Kit’s artifacts provide all the context necessary to complete each task in the task list.

Set up the environment

Open the project in your chosen IDE (VS Code for Copilot, or the Cursor IDE). Ensure the respective AI tool is active and authenticated, and that your chosen language / runtime has been installed and configured correctly. This is also a good time to ensure your testing environment is available (e.g. if you are using TestContainers, then start the Docker engine).

Implement tasks

You now need to instruct the agent to execute the tasks that were generated:

GitHub Copilot Chat:

  1. Open the Copilot Chat window
  2. Ask it to implement the tasks with the prompt: Read specs/001-mvp/tasks.md and implement task 1 following the plan in specs/001-mvp/plan.md.

Cursor:

  1. Open the Cursor Agent/Chat window.
  2. Instruct it in Max Mode: Using the tasks defined in specs/001-mvp/tasks.md, implement task 1 following the plan in specs/001-mvp/plan.md

After each task, you should review the new code (the “compare window” is a good choice here), run any tests, and then commit the code.
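That per-task loop can be sketched from the terminal as follows. The sketch sets up a throwaway repo so it is self-contained; the file name, task ID, and commit message are illustrative, and in your real project you would skip the setup and run the actual test suite (e.g. `dotnet test` for the stack in this article’s constitution) before committing:

```shell
# Demo of the per-task loop in a throwaway repo
cd "$(mktemp -d)"
git init -q .

# Stand-in for the change the agent just made
echo "class Task {}" > TaskModel.cs
git add -A

# Review what changed (or use the IDE's compare view)
git diff --cached --stat

# dotnet test    # run the real test suite in your actual project

# Commit the completed task
git -c user.email=dev@example.com -c user.name=dev \
    commit -q -m "feat: [T001] create data model"
git log --oneline
```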

Configuring Cursor and GitHub Copilot

When you see /speckit.implement in tutorials, it is usually referring to a pre-configured “slash command” that pastes a specific, complex prompt into the AI’s chat window. It tells the AI: “Read the plan and tasks files, then write the code for the active task.” Since this workflow uses two different tools, the slash commands configured for Claude Code aren’t available in your IDE, but you can emulate them.

Cursor

Cursor’s Agent Mode (“Composer”) can run terminal commands and edit files directly. You can create a “rule” or “command” that acts exactly the same way as the /implement command:

  1. Create a new file in your project at .cursor/rules/implement.mdc
  2. Paste the following prompt into that file. This teaches Cursor what to do:
  # Spec Kit Implementation Rule

  When I ask you to "implement" or use "/implement":
  
  1. **Read Context**:
    * Read `.specify/memory/constitution.md` (for rules)
    * Read `specs/*/spec.md` (for requirements)
    * Read `specs/*/plan.md` (for architecture)
    * Read `specs/*/tasks.md` (for the to-do list)

  2. **Determine Active Task**:
    * Look at `tasks.md`.  Find the first unchecked task (e.g., `[ ] T001...`).
    * **Goal**: Your goal is to complete ONLY this one task.

  3. **Execute**:
    * Write the code necessary to complete the task.
    * Create new files if the plan dictates it.
    * **Constraint**: Do not deviate from the `plan.md`. If the plan is impossible, stop and ask me.

  4. **Update State**:
    * After the code is written and verified, update `tasks.md` by marking the task as `[x]`.

Now you can use this command inside composer or chat; e.g. Run /implement. Cursor will read your rule, find the next task, write the code, and check off the box automatically.

GitHub Copilot

GitHub Copilot does not natively support custom “slash commands” that execute multi-step logic unless you use a specific extension or the “Prompt Files” feature. I use the SpecKit Companion.

The extension adds a “SpecKit” UI panel. Instead of typing /implement, you simply click the “Implement Next Task” button in the extension sidebar. This will automatically feed the correct context to Copilot.
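If you would rather not install an extension, VS Code’s Prompt Files feature can approximate the same command: put a markdown prompt in `.github/prompts/`, and its file name becomes a slash command in Copilot Chat (you may need to enable the prompt files setting). A minimal sketch, with the wording and paths following this article’s layout:

```markdown
<!-- .github/prompts/implement.prompt.md — invoked as /implement in Copilot Chat -->
Read `.specify/memory/constitution.md`, `specs/*/plan.md`, and `specs/*/tasks.md`.
Find the first unchecked task in the tasks file, implement ONLY that task
following the plan, then mark it as `[x]` in the tasks file.
```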

Final words

Of course, it’s easier if you use the same LLM for both planning and execution of an agentic coding session. However, I find that using a great reasoning LLM for the planning and a different coding-focused LLM for the execution gets better code results. It’s not much more work, with SpecKit doing the majority of the heavy lifting for you.
