AI Coding Tools Guide: November 2025

Introduction

Here's a practical, up-to-date comparison of today's AI coding tools. We'll start with a landscape overview of what's available, then deep-dive on the top tools, with a focus on "developer-experience boosters" like saved instructions/skills, agentic workflows, repo-level context, and pricing.

Unlike traditional code completion, modern AI coding tools offer development context features: saved instructions, automated task management, and persistent project knowledge. These features help agents work more effectively by giving them instructions and tools that persist across sessions.


📦 The Landscape at a Glance

Terminal / Agent-First Tools

  • Claude Code – Terminal + web experience that can read/edit your repo, run commands/tests, and open PRs; now also has a web UI that runs isolated cloud VMs linked to your GitHub. Included in Claude Pro/Max; also available via Team/Enterprise premium seats.
  • OpenCode – Open-source terminal agent with a native TUI, LSP-aware context, parallel sessions, "bring any model," and an add-on model gateway called Zen for pay-as-you-go model access. Enterprise deployments available.
  • OpenAI Codex – OpenAI's agent for developers across terminal, web, and IDEs; powered by GPT-5-Codex (priced the same as GPT-5 in the API). Included with ChatGPT subscriptions; API access available.
  • Cline – "Plan/Act" agent inside VS Code with MCP tool adapters; free extension, bring your own API keys or use hosted models.
  • Aider – Chat-driven, git-aware coding assistant that proposes diffs in your terminal.
  • Roo Code – Open-source agent with modes (Architect, Code, Ask, Debug) and MCP server support; the extension is free, cloud add-ons are paid.

IDE-First Assistants

  • GitHub Copilot – Multi-model code completion + chat + agent mode + code review; deeply integrated across VS Code, JetBrains, Visual Studio, Xcode, and GitHub.com. Clear, published pricing from Free → Pro → Pro+.
  • Cursor – A full IDE centered on agentic coding ("Auto," rules/policies, Composer), team controls, and enterprise security; usage-based Pro/Teams/Enterprise pricing.
  • Amazon Q Developer – AWS's agentic assistant for IDE/CLI/Console with features like large-scale code transformations (e.g., Java 8→21), org policies, and a Free tier plus a $19 Pro tier.
  • JetBrains AI Assistant – JetBrains' built-in assistant for the IntelliJ family; monthly add-on pricing for individuals.
  • Windsurf – An AI IDE by Cognition with an in-house SWE model (SWE-1.5) aimed at fast agentic coding; separate pricing.
  • Codeium – Code completions/chat from the makers of Windsurf; Windsurf pricing covers model usage tiers.
  • Sourcegraph Cody – Enterprise-oriented AI for code search, chat, and refactors across large monorepos; published pricing.
  • Replit AI – Hosted dev environment with integrated AI (Ghostwriter/Agents).
  • Tabnine – Privacy-minded code completion/chat (local and cloud models).
  • Gemini Code Assist – Google's IDE assistant with enterprise governance; pricing published.
  • Bolt.new – Web-based "build apps with AI" platform with plans and a growing end-to-end dev workflow.
  • Phind – Dev-focused search + code assistant with Pro/Business plans.

πŸ” Deep-Dive: Top 5 Tools

1) Claude Code (Anthropic)

What it is & where it runs

  • Agentic coding from your terminal, plus a web experience (research preview) that spins up isolated cloud environments connected to your GitHub and can branch, test, and open PRs. There's also a VS Code extension.

Model support

  • Built around Claude Sonnet 4.5 for agentic coding; your plan also determines access to Opus/Haiku variants.

Developer-experience boosters

  • Skills (saved, shareable instructions/tools) standardize how the agent works across tasks and repos; they bundle prompts plus tool definitions and can be versioned and shared via GitHub.
  • Terminal autonomy controls (permissions & checkpoints) to review changes step-by-step; tips/best practices from Anthropic's engineering team.
  • Web workflow runs tasks in parallel, with progress and PR creation from the browser.

Interface details

  • Terminal UI with session history/controls; Web UI for multi-task runs tied to GitHub; "on the web" docs outline setup and supported environments.

Pricing (individual & org)

  • Pro: $20/mo ($17/mo annual) – includes Claude Code access (web + terminal) and higher usage.
  • Max: from $100/mo per person for 5× or 20× usage multipliers and Memory.
  • Team: Standard $25/seat; Premium $150/seat includes Claude Code; Enterprise custom.

Best for

  • Power users who want a first-class terminal agent with shareable skills and multi-surface (terminal + web) execution, plus orgs needing premium seats and SSO/SCIM.

2) OpenCode (open-source, by SST/Anomaly)

What it is & where it runs

  • An open-source terminal agent (native TUI) that's model-agnostic and LSP-aware, with parallel agent sessions, shareable session links, and "runs anywhere."

Model support

  • "Any model" via your own keys or Zen, a curated gateway with transparent per-million-token pricing (no markup) across multiple providers/models (e.g., Claude 4.5, GPT-5/5-Codex, Qwen, GLM, etc.).

Developer-experience boosters

  • Agents scoped by session and central config for enterprises (SSO, internal gateways) to enforce allowed models/tools and org defaults.
  • LSP-enabled context and a privacy-first posture (doesn't store your code/context).

Interface details

  • Native terminal UI; integrates with any editor (pair your terminal with VS Code/JetBrains/Neovim).

Pricing

  • OpenCode itself is free/open-source;
  • Zen is pay-as-you-go per model (e.g., GPT-5/GPT-5-Codex listed as $1.25 input / $10 output per 1M tokens; Claude Sonnet 4.5 pricing mirrors Anthropic).
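To make those per-token rates concrete, here is a quick back-of-the-envelope cost check in Python. The rates come from the Zen listing above; the session token counts are purely hypothetical:

```python
# Rough cost estimate at Zen's listed GPT-5 / GPT-5-Codex rates:
# $1.25 per 1M input tokens, $10 per 1M output tokens.
INPUT_RATE = 1.25 / 1_000_000    # dollars per input token
OUTPUT_RATE = 10.00 / 1_000_000  # dollars per output token

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one session at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical session: 400k tokens of repo context in, 60k tokens of diffs out.
cost = session_cost(400_000, 60_000)
print(f"${cost:.2f}")  # $1.10
```

Output-heavy agent sessions dominate the bill at these rates (output tokens cost 8× input tokens), which is worth remembering when budgeting long refactoring runs.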

Best for

  • Teams that want open tooling, strict data control, and provider-agnostic agents with easy model budgeting via Zen.

3) GitHub Copilot

What it is & where it runs

  • Code completions, chat, agent mode, PR review comments, CLI, and deep editor integration across VS Code, JetBrains, Visual Studio, Xcode, and GitHub.com.

Model support

  • Multi-model (e.g., Claude Sonnet 4/4.5, GPT-5/5-mini, Gemini 2.5 Pro); GitHub retires older models over time to maintain speed/quality.

Developer-experience boosters

  • Agents on GitHub (context from your repos/issues/PRs), Copilot Code Review, and IDE agent mode with organization policies and MCP support.

Interface details

  • Inline suggestions, chat panels, PR review on GitHub, and a CLI; premium request buckets control usage of frontier models/features per plan.

Pricing

  • Free ($0): 2,000 completions + 50 agent/chat requests per month.
  • Pro ($10/mo): unlimited completions, coding agent, 300 premium requests.
  • Pro+ ($39/mo): more models (incl. Opus 4.1), 1,500 premium requests. (Business/Enterprise variants add admin controls and IP indemnity.)

Best for

  • Orgs already on GitHub that want turnkey onboarding, model choice, and repo-native reviews at a very clear price.

4) Cursor

What it is & where it runs

  • An IDE built around agents: Auto (router across models), Composer, Rules/policies, PR review, enterprise controls and analytics.

Model support

  • Multiple frontier models via routing; enterprise can set allowed models and privacy/SSO controls.

Developer-experience boosters

  • Rules to encode team conventions and constraints; enterprise privacy mode, RBAC, SAML/SCIM; analytics on agent usage.

Interface details

  • Full desktop IDE with chat/agents, inline edits, Composer tasks, and policy surfaces for admins.

Pricing (2025 changes)

  • Cursor shifted to usage-based pricing with credit pools. Public pages and docs list Pro from ~$20/mo, Teams at $40/seat, and custom Enterprise; the company's blog explains the move from request-based caps to usage credits and an "Auto" unlimited mode.

Best for

  • Teams that want a single IDE optimized for agent workflows with strong admin controls and policies.

5) Amazon Q Developer

What it is & where it runs

  • AWS's agent inside the IDE, CLI, and AWS Console, focused on building/debugging and large code transformations (e.g., Java 8→17/21 with dependency upgrades), plus org policy/analytics.

Model support

  • Uses AWS-served models and services behind the scenes; integrates with your AWS identity/policies and can now answer AWS pricing queries via the Price List APIs.

Developer-experience boosters

  • Transformation hub for language/runtime upgrades; enterprise policy and identity center integration; code license/reference tracking.

Interface details

  • IDE panels to propose and accept diffs; documented limits per tier (Free vs. Pro), including agentic interactions and transformation line counts.

Pricing

  • Free tier with monthly limits (e.g., 50 agentic chats and 1,000 lines transformed/mo).
  • Pro: $19/user/mo with higher limits (e.g., 1,000 agentic chats/mo, more transformation capacity).

Best for

  • AWS-centric teams who want governed agentic coding and managed code upgrades with clear org controls.

📊 Side-by-Side Quick Compare (Top 5)

| Tool | Primary Surfaces | "Agentic" Features | Instruction/Knowledge Scoping | Model Choice | Pricing Snapshot |
|---|---|---|---|---|---|
| Claude Code | Terminal, Web (GitHub-linked), VS Code | Runs commands/tests, edits repo, opens PRs; checkpoints & permissions | Skills (saved instructions/tools, shareable) | Claude 4.5+ family | Pro $20/mo; Max from $100; Team Premium $150/seat; Enterprise custom |
| OpenCode | Terminal (native TUI); pairs with any IDE | Multi-session agents; LSP-aware context | Org central config; provider-agnostic; privacy-first | Any provider (via keys) or Zen curated gateway | Tool is free; Zen pay-as-you-go by model (e.g., GPT-5/5-Codex $1.25 in / $10 out per 1M tokens) |
| GitHub Copilot | VS Code, JetBrains, Visual Studio, Xcode, GitHub.com, CLI | Agent mode, code review, PR comments | Org policies, MCP integration; Enterprise indexing | Multi-model (Claude, GPT-5, Gemini, etc.) | Free; Pro $10; Pro+ $39 (premium request quotas) |
| Cursor | Cursor IDE (desktop) | Auto routing, Composer, Rules; enterprise analytics | Rules/policies, SSO/SCIM, privacy mode | Frontier models via router | Pro (usage-based, ~$20); Teams $40/seat; Enterprise custom |
| Amazon Q Developer | IDE (VS Code/JetBrains), CLI, AWS Console | Agentic chat + code transformations (e.g., Java upgrades) | IAM-governed policies, org controls, analytics | AWS-managed | Free tier; Pro $19/user/mo with higher limits |

🎯 Notable Alternatives

  • Sourcegraph Cody – Enterprise-grade AI code search/chat/refactor across huge monorepos; published Team/Enterprise pricing.
  • Windsurf – AI IDE with the fast SWE-1.5 agent model; premium tiers available.
  • JetBrains AI Assistant – Add-on for JetBrains IDEs; monthly price for individuals, org plans vary.
  • Gemini Code Assist – Google's governed IDE assistant; pricing published.
  • Tabnine – Privacy-first completions/chat (local or cloud); multiple plans.
  • Replit AI – Hosted dev environment + AI agents.
  • Bolt.new – AI builder platform (editor + hosting/integrations), with Free/Pro/Teams/Enterprise tiers and token-based usage; rapidly evolving.
  • Phind – Dev-focused search/coding with paid Pro/Business plans.
  • Cline / Roo Code / Aider – Open-source agents and CLI assistants for terminal/VS Code workflows.

💡 Best Practices Across All Tools

These practices work regardless of which AI coding tool you're using:

1. Explain Current Behavior AND Desired Outcome

Don't just say "add a login feature"; explain what currently happens ("users can't access protected routes") and what you want ("users should authenticate via OAuth and access their dashboard"). Context about the current state helps the AI understand what to preserve and what to change.

2. Mention Specific Files

Reference concrete files as examples or for functions you want the AI to use. Instead of "use the same pattern as our other forms," say "follow the pattern in app/components/ContactForm.tsx and use the validateEmail function from app/utils/validation.ts."

3. Use Plan Mode First

Have the AI explain its plan before writing code. Audit the plan, ask questions, suggest changes until you're satisfied. This catches architectural issues before they're written into hundreds of lines of code.

Example workflow:

  • "Create a plan for adding user authentication"
  • Review the plan
  • "Update the plan to use our existing database schema in prisma/schema.prisma"
  • Approve and proceed

4. Save Plans to a Plans Directory

Create a plans/ directory and save detailed plans with all context needed to execute the task. Include:

  • Current state description
  • Desired outcome
  • Files to modify/create
  • Dependencies and considerations
  • Step-by-step approach

Share these with your team and commit them to version control. This creates an audit trail and lets team members pick up tasks or review implementation later.
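As a sketch, a saved plan might look like the file below. The file name, feature, and section headings are illustrative, not a required format:

```markdown
<!-- plans/add-oauth-login.md (hypothetical example) -->
# Plan: Add OAuth login

## Current state
Users can browse public pages but cannot access protected routes.

## Desired outcome
Users authenticate via OAuth and land on their dashboard.

## Files to modify/create
- app/routes/login.tsx (new)
- app/utils/session.server.ts (new)

## Dependencies and considerations
- Pick an OAuth provider library; confirm the session storage strategy.

## Step-by-step approach
1. Add the OAuth client configuration.
2. Implement the login and callback routes.
3. Protect existing routes and add tests.
```

Because the plan is plain markdown in the repo, any teammate (or any AI tool) can pick it up mid-task with full context.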

5. Maintain a Plan-Progress File

Create a plan-progress.md file for each major task with:

  • Todo list of steps
  • Completed items
  • In-progress work
  • Blockers and decisions made

Have the AI keep this updated as you work. It serves as a living document of what's done and what remains.

Example structure: A markdown file with sections for Completed (with checkboxes), In Progress, Blocked items, and Decisions Made. Include specifics like which files were modified and key architectural choices.
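One possible shape for such a progress file, with every task name and path purely illustrative:

```markdown
<!-- plans/add-oauth-login-progress.md (hypothetical example) -->
# Progress: Add OAuth login

## Completed
- [x] OAuth client configuration (app/utils/oauth.server.ts)

## In Progress
- [ ] Login and callback routes

## Blocked
- Awaiting decision on session storage (cookie vs. database)

## Decisions Made
- Chose cookie sessions to avoid adding a new database table.
```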

6. Use Multiple Threads/Conversations

Don't try to implement everything in one conversation. Instead:

  • Reference the plan and plan-progress files in each conversation
  • Implement one train of thought per conversation (e.g., "database schema changes" in one thread, "UI components" in another)
  • This prevents context pollution and makes it easier to track different aspects of the work

7. Have AI Run Git Diff to Audit Changes

Before committing, ask the AI to run git diff and review its own changes. This catches:

  • Unintended modifications
  • Debug code left in
  • Missing changes
  • Inconsistent formatting

Example: "Run git diff and review the changes. Make sure we haven't accidentally modified any authentication logic outside our scope."

8. Ask for Multiple Solutions

Don't settle for the first approach. Ask: "Give me 3 different approaches to implement this, with pros/cons for each."

This reveals trade-offs you might not have considered and helps you choose the best approach for your specific constraints.

9. Run Tests, Linting, and Type Checking

Have the AI:

  • Run tests and fix failures without removing or weakening tests
  • Run linting and fix issues
  • Run type checking and fix type errors without lazy types like any

Example: "Run npm run test and fix any failures. Do not remove tests or make them less strict. Then run npm run typecheck and fix type errors without using any types."

10. Build Vertical and Horizontal Documentation

Create documentation files for:

  • Verticals (features): docs/features/authentication.md, docs/features/payments.md
  • Horizontals (layers): docs/layers/api.md, docs/layers/database.md, docs/layers/auth.md

Reference these when implementing at cross-sections. For example, when adding payment history to user profiles, reference both docs/features/payments.md and docs/layers/database.md.
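A minimal layout sketch, using the directory and file names from the examples above:

```
docs/
├── features/
│   ├── authentication.md
│   └── payments.md
└── layers/
    ├── api.md
    ├── database.md
    └── auth.md
```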

11. Create Custom Agents/Skills

For tools that support it (Claude Code Skills, OpenCode Agents), create reusable instructions:

Examples:

  • "Code Review Agent" - Checks for common issues, runs tests, reviews diffs
  • "Database Migration Agent" - Follows your schema conventions, generates migrations, updates docs
  • "API Consistency Agent" - Ensures new endpoints follow your REST conventions, error handling patterns, and response formats
  • "Documentation Agent" - Updates relevant docs when code changes, maintains consistency

These automate context incorporation beyond what plan files can do, making the AI more effective across sessions and team members.
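As an illustration, a reusable "Code Review" instruction might be saved as a small markdown file like the sketch below. The exact file layout and frontmatter fields vary by tool (Claude Code Skills, for instance, have their own conventions), so treat every name here as a placeholder:

```markdown
<!-- code-review-skill.md (hypothetical; adapt to your tool's skill format) -->
---
name: code-review
description: Review a diff for common issues before it is committed.
---

When asked to review changes:
1. Run `git diff` and read every hunk.
2. Flag debug statements, commented-out code, and TODOs.
3. Run the project's test suite and report failures.
4. Check that new code follows the conventions in docs/layers/.
```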


🎯 Example Workflow Putting It All Together

Here's how these practices work in a real scenario:

Task: Add a commenting system to blog posts

  1. Initial request: "I want to add comments to blog posts. Currently, users can read posts but not interact. I want authenticated users to be able to comment, edit their own comments, and see comment threads. Reference the post model in app/models/post.server.ts and follow the form pattern from app/components/ContactForm.tsx."

  2. Create plan: "Create a detailed plan for this feature and save it to plans/blog-comments.md"

  3. Review and refine plan: Audit the plan, ask for multiple database schema options, choose the best approach.

  4. Create progress tracker: "Create a plan-progress file at plans/blog-comments-progress.md with all the implementation steps as todos."

  5. Implement in multiple conversations:

    • Thread 1: Database schema and migrations
    • Thread 2: Comment model and server functions
    • Thread 3: UI components
    • Thread 4: API routes and validation
  6. In each thread: Reference both the plan and progress files, have AI update progress as work completes.

  7. Quality checks: "Run git diff and review changes. Then run tests, linting, and type checking. Fix any issues without using any types or weakening tests."

  8. Documentation: "Update docs/features/comments.md with the implementation details and docs/layers/database.md with the new schema."

  9. Final review: "Give me 3 potential improvements or edge cases we should consider before shipping."

This systematic approach works across all AI coding tools and dramatically improves code quality and team collaboration.


🎯 Recommendations by Need

You want terminal-native autonomy + shareable "skills":

  • Pick Claude Code for first-party skills and polished terminal/web agents; or OpenCode if you need open-source, any-model control with Zen budgeting.

You live on GitHub and need clear pricing & governance:

  • Copilot Pro/Pro+ gives agent mode, PR review, and multi-model flexibility with org policies out of the box.

You want an IDE that's built around agents/policies:

  • Cursor if you value rules, admin controls, and usage analytics (mind the usage-based pricing).

You're AWS-centric and planning modernizations:

  • Amazon Q Developer for governed transformations and console/IDE integration at $19/seat.

🧪 What to Pilot First (Playbook)

  1. Try Claude Code + Skills on a medium repo to see if skills reduce hand-holding across repeated tasks (linting, UI nits, schema edits). Track accepted PRs per hour.

  2. Run OpenCode for the same tasks; compare cost per accepted change and whether LSP-aware context cuts retries.

  3. If your code lives on GitHub, test Copilot Pro on PR review + agent mode against your house style and security baselines.

  4. For AWS shops, queue a Q Developer code transformation (e.g., Java 8→17 or a database layer migration) on a non-trivial service and measure the manual cleanup delta.

  5. If your team wants an all-in IDE with org policies, spin a Cursor pilot and check how Rules impact consistency across squads.


πŸ“ Notes on OpenAI Codex

What's new: OpenAI introduced GPT-5-Codex, a version of GPT-5 tuned for agentic coding. It's the same API price as GPT-5 and available via API and within ChatGPT/Codex experiences.

Where it runs: Terminal/CLI, web, and IDE integrations (including GitHub/VS Code) as part of the Codex product family.


💰 Pricing Quick Hits (as of November 16, 2025)

  • Claude: Pro $20/mo; Max from $100; Team Premium $150/seat (Claude Code included); Enterprise custom.
  • OpenCode: Free OSS; Zen pay-as-you-go per model (e.g., GPT-5/GPT-5-Codex $1.25 in / $10 out per 1M tokens; Anthropic prices mirrored).
  • GitHub Copilot: Free → Pro $10 → Pro+ $39 with premium request buckets.
  • Cursor: Pro (usage-based, ~$20), Teams $40/seat, Enterprise custom; moved from request caps to usage credits.
  • Amazon Q Developer: Free tier + Pro $19/user/mo with higher limits (agentic chats, transformation lines).
  • OpenAI Codex: GPT-5-Codex at GPT-5 API prices; also included in ChatGPT paid tiers.

Conclusion

AI coding tools have evolved from simple autocomplete to sophisticated agents that can autonomously handle complex development tasks. The key differentiators now are development context features: skills, instructions, and persistent project knowledge that help tools work more effectively across sessions.

The best approach is experimentation: try multiple tools, compare their behavior on the same tasks, and build workflows that leverage complementary strengths. As these tools evolve rapidly (sometimes weekly), staying adaptable and testing new capabilities will be your biggest advantage.

Whether you're looking for Claude Code's shareable skills, OpenCode's model flexibility, or enterprise governance from Copilot or Amazon Q Developer, there's never been a better time to integrate AI into your development workflow.


Need Help Adopting AI in Your Development Workflow?

For Teams: If you're struggling to get your development team using AI effectively, I offer AI Training for Software Engineers, a 6-week program that helps engineering teams master AI-powered development with custom tools built for your codebase.

For Vibe Code Projects: Built an app with AI tools like Claude Code, Cursor, or Replit but now facing production challenges? I specialize in Vibe Code Cleanup: transforming rapid prototypes into production-ready applications with proper security, performance, and architecture.

Email matt@heyferrante.com to discuss how I can help your team or project.


Want to stay updated on AI coding tool developments? Subscribe below for future guides and analysis.
