I tried every AI coding assistant. Here's what actually stuck.

I've been deep in the AI coding assistant rabbit hole for the past year. Cursor, Windsurf, GitHub Copilot, Claude Code, Codex — I've tried them all. And I have opinions.

Fair warning: this is my experience as a Laravel developer. Your mileage will vary. The best tool is the one that fits your brain. This one fits mine.

The tools I actually used

Let me be clear about what I'm reviewing here. I'm not ranking tools I've seen demos of. These are tools I've shipped production code with:

  • Cursor — The VS Code fork everyone's talking about
  • Windsurf — Codeium's IDE with "Cascade" flows
  • Claude Code — Anthropic's CLI agent
  • OpenAI Codex — OpenAI's coding agent

I've also used GitHub Copilot, but that's table stakes at this point. Everyone has it. It's fine. Moving on.

The IDE problem

Here's what nobody talks about: all these tools want you to switch IDEs.

Cursor? It's VS Code. Windsurf? Their own thing. Even JetBrains is building AI features that only work in their ecosystem.

I've been using JetBrains for years. PhpStorm for Laravel, WebStorm for React. The refactoring tools, the database integration, the Laravel Idea plugin — I'm not giving that up for a chat sidebar.

And that's the trap. You try Cursor because the AI is better. Then you miss your PhpStorm shortcuts. So you try to customize Cursor. Then you're maintaining two IDE configs. Then Windsurf releases something cool. Now you have three.

I got off that ride.

Why I landed on Claude Code

Claude Code is different. It's a CLI tool. It doesn't care what IDE you use.

My setup is dead simple:

  • PhpStorm for writing and navigating code
  • Terminal with Claude Code for AI assistance
  • That's it

No IDE switching. No plugin conflicts. No "which AI is active in which editor" confusion.

What makes it work

It reads your entire codebase. Not just the open file. When I ask it to add a feature, it understands my folder structure, my naming conventions, my existing patterns. It doesn't suggest code that looks foreign to the project.

Planning mode is genuinely useful. When I describe a feature, it creates a plan, identifies which files need changes, and asks for confirmation before touching anything. For complex refactors, this is invaluable.

Skills are a game-changer. You can teach Claude Code about your stack. I have skills for Pest testing, Wayfinder routes, even project-specific conventions. It's like having a junior dev who actually read the docs.
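For the curious: a skill is essentially a Markdown file with a short frontmatter header that Claude Code picks up from your project. Here's a minimal sketch of what my Pest skill could look like — the conventions listed are my own project's, not an official template, so treat the contents as illustrative:

```markdown
---
name: pest-testing
description: Conventions for writing Pest tests in this project
---

# Pest testing conventions

- Feature tests live in tests/Feature, mirroring the app/ structure.
- Use Pest's it() syntax, never PHPUnit test classes.
- Create records through model factories, not manual inserts.
- After changes, run the relevant suite with php artisan test.
```

The point isn't the exact format — it's that the agent reads this once and stops suggesting PHPUnit boilerplate in a Pest codebase.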

It stays in its lane. It makes changes, runs tests, fixes what broke. Then it stops and shows me what it did. I'm still in control.

What didn't work for me

Cursor

Beautiful product. The inline diffs are nice. But I kept fighting the IDE. It's VS Code, and I'm not a VS Code person. The AI features weren't enough to overcome the muscle memory tax.

Windsurf

Cascade flows are a cool concept — chain multiple AI actions together. In practice, I found it unpredictable. Sometimes brilliant, sometimes it would go off on tangents. The "let the AI drive" philosophy doesn't match how I work.

Multi-model switching

Both Cursor and Windsurf let you switch between Claude, GPT-4, and other models. Sounds great in theory. In practice, every model has different strengths and context window behavior. Switching mid-task often meant starting over mentally. I stopped doing it.

OpenAI Codex

I wanted to like this. It's slow. The output quality wasn't there for me — too generic, didn't pick up on project patterns. But honestly? Plenty of people swear by it. I guess AI tools are personal.

The uncomfortable truth

Here's what I've learned: the "best" AI coding tool doesn't exist.

  • Some people thrive with Cursor's visual approach
  • Some people need Copilot's ghost text in their editor
  • Some people want AI to drive while they supervise
  • I want a capable assistant I can summon when needed

The discourse online is brutal. Everyone's fighting about which tool is best. Meanwhile, the actual answer is: use what makes you productive and ignore the noise.

The agentic loop: agents solving backlogs

Here's where it gets interesting. Claude Code isn't just for interactive sessions. You can run it autonomously.

The concept is simple: point an agent at your backlog, let it pick up issues, solve them, create PRs, and move on to the next one. A loop of agents working through your bugs and features while you sleep.

I've experimented with this using Claude Code's headless mode and GitHub Actions. The flow:

  1. Agent picks an issue labeled "ai-ready" from GitHub
  2. Reads the issue, understands the context
  3. Creates a branch, makes changes, runs tests
  4. Opens a PR with a summary of what it did
  5. Moves to the next issue
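A stripped-down sketch of how that loop can be wired up as a scheduled GitHub Action. The workflow shape, the `claude -p` prompt, and the permission flags are assumptions from my experiments, not a turnkey recipe — you'd need an `ANTHROPIC_API_KEY` secret and the right branch/PR permissions for your repo:

```yaml
name: ai-backlog-agent
on:
  schedule:
    - cron: "0 2 * * *"   # run nightly, review PRs in the morning

jobs:
  solve-one-issue:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      issues: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4

      - name: Pick the oldest ai-ready issue
        id: pick
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          echo "issue=$(gh issue list --label ai-ready --state open \
            --json number --jq '.[0].number')" >> "$GITHUB_OUTPUT"

      - name: Let Claude Code solve it headlessly
        if: steps.pick.outputs.issue != ''
        env:
          GH_TOKEN: ${{ github.token }}
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          gh issue view "${{ steps.pick.outputs.issue }}" --json title,body \
            | claude -p "Solve this GitHub issue: create a branch, make the
              changes, run the test suite, then open a PR with gh pr create."
```

This handles one issue per run; looping over several issues in a single run is possible but makes the morning review harder.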

Does it work? Sometimes brilliantly. Sometimes it goes in circles. The key is issue quality — vague tickets produce vague solutions.

What works well for autonomous agents

  • Bug fixes with clear reproduction steps
  • Adding tests for existing functionality
  • Refactoring with explicit rules ("extract this to a service class")
  • Documentation updates
  • Dependency upgrades with test suites
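To make the "issue quality" point concrete, here's the shape of a ticket I'd consider agent-ready — specific repro steps, a clear expected outcome, and a hint about where the code lives. Every detail below is invented for illustration:

```markdown
## Bug: trial users can create duplicate subscriptions

**Repro**
1. Log in as a user on a trial plan
2. Go to /billing and double-click "Upgrade"
3. Two subscription rows are created

**Expected**: one subscription; the second click is a no-op.

**Hints**: the controller is app/Http/Controllers/BillingController.php.
Add a Pest feature test covering the double-submit case.
```

A vague version of the same ticket ("billing sometimes creates duplicates, please fix") is exactly the kind that sends an agent in circles.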

What still needs human oversight

  • Features with ambiguous requirements
  • Anything touching authentication or payments
  • Performance optimizations (agents love premature optimization)
  • UI/UX decisions

The honest take

Agentic coding is real, but it's not "set and forget." Think of it as a junior dev working night shifts. They'll get stuff done, but you're reviewing every PR in the morning.

The tooling is evolving fast. Claude Code's headless mode, OpenAI's Codex with task queues, even custom setups with LangChain — there are multiple ways to build this. None are turnkey yet.

But if you have a clean codebase, good tests, and well-written issues? You can genuinely wake up to solved tickets. That's not hype. I've done it.

My current workflow

For a typical Laravel feature:

  1. Think through the approach in my head
  2. Open Claude Code: "Add [feature]. Here's what I'm thinking..."
  3. Review its plan, adjust if needed
  4. Let it make the changes
  5. Review the diff in PhpStorm
  6. Run tests, fix edge cases
  7. Commit

Should you switch?

If you're happy with your current setup: no.

If you're frustrated with IDE lock-in, or you want more control over AI interactions, or you're already comfortable in the terminal: give Claude Code a shot.

It's not magic. No AI tool is. But it's the first one that felt like it fit into my workflow instead of demanding I change it.


I'm building several SaaS products with this setup. Follow along at artisancraft.dev for more on the tools and decisions behind them.