15/03/2026
What I use Claude Code for — and what I don't
An honest report from daily use of Claude Code. What works, what doesn't, and why the tool multiplies taste — in both directions.

I run Claude Code daily as part of my actual work, not as a novelty. Here's the honest report.
What I use it for, regularly:
Sanity content patches. This site runs on Sanity CMS, and content updates that would have been twenty manual clicks become a five-line script I describe in plain Norwegian and Claude writes. Patching the founder bio across two locales without breaking the existing field structure: ten minutes including review.
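As a hedged sketch of what such a patch looks like: Sanity's HTTP Mutations API accepts a JSON body of `mutations` where a `patch` with `set` updates only the named field paths, leaving the rest of the document intact. The document id and the `bio.<locale>` field paths below are hypothetical; they would have to match the actual schema.

```python
def bio_patch_mutations(doc_id: str, bios: dict[str, str]) -> dict:
    """Build a Sanity mutations payload that sets one bio field per
    locale via dotted field paths, without touching other fields.
    'bio.nb' / 'bio.en' are assumed field names, not from the post."""
    return {
        "mutations": [
            {
                "patch": {
                    "id": doc_id,
                    "set": {f"bio.{locale}": text for locale, text in bios.items()},
                }
            }
        ]
    }

# The payload would then be POSTed to the project's mutate endpoint
# with an API token; that part is omitted here.
payload = bio_patch_mutations(
    "person-founder",  # hypothetical document id
    {"nb": "Grunnlegger og AI-rådgiver", "en": "Founder and AI advisor"},
)
```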
KB ingest plumbing. Markdown files in a folder, embedded into a Postgres pgvector store. The first version of the chunker took an hour with Claude pair-driving versus a half-day reading docs. The second version added quote-stripping and a `language` alias for `locale` in the frontmatter parser when an external content set arrived in a different format. Twenty minutes.
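A minimal sketch of those two pieces, under assumptions about the file format: a frontmatter parser that strips surrounding quotes from values and treats `language` as an alias for `locale`, and a greedy paragraph chunker. The actual pipeline embeds the chunks into pgvector afterwards; that step needs a database and an embedding model, so it is left out.

```python
import re


def parse_frontmatter(text: str) -> tuple[dict, str]:
    """Parse a '---'-delimited frontmatter block into a dict.
    Strips surrounding quotes from values and maps a 'language'
    key to 'locale' (the alias mentioned in the post)."""
    meta: dict = {}
    body = text
    m = re.match(r"---\n(.*?)\n---\n?(.*)", text, re.DOTALL)
    if m:
        for line in m.group(1).splitlines():
            if ":" not in line:
                continue
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip().strip('"').strip("'")
        if "language" in meta and "locale" not in meta:
            meta["locale"] = meta.pop("language")
        body = m.group(2)
    return meta, body


def chunk(body: str, max_chars: int = 800) -> list[str]:
    """Greedy chunker: pack whole paragraphs until max_chars."""
    chunks: list[str] = []
    current = ""
    for para in body.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

Chunking on paragraph boundaries rather than fixed character offsets keeps each embedded chunk semantically whole, which is usually the point of a first-version chunker.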
Throwaway scripts where writing them by hand isn't worth it. Inspector scripts to dump current Sanity state. One-shot patches when I'm debugging. Migration scripts that move three paragraphs from one field to another. None of these would exist if I had to hand-write them. Most get committed to the repo and forgotten, which is fine.
Reading other people's code when I'm new to a library. Better than the docs in most cases — faster, answers the actual question, points to the source file when I need to verify.
What I don't use it for:
Designing the architecture. Claude Code is a junior engineer at best on structural decisions. It will write what you tell it to write. If what you tell it to write is fundamentally wrong, you get a confidently wrong implementation. The decision about what to build comes from me. Always.
Debugging when I haven't read the code first. The pattern that goes wrong every time: I see an error, paste it into Claude, watch it generate a plausible fix, apply it, find the bug isn't actually fixed. The fix that works comes from understanding what's broken. I now read the code first, every time.
Anything where the failure mode is silent. AI coding tools are at their best when you can verify the output quickly: does the test pass, does the build run, does the page render. They're at their worst when the output looks fine but is subtly wrong (auth flows, payment logic, anything where "ran successfully" doesn't mean "did the right thing"). For those I write the first version myself.
The pattern, if there is one: Claude Code multiplies your taste. If you have good taste in code, it makes you faster. If you don't, it gives you more bad code, faster.
That's also the right answer for "should we use AI to write code at our company": it depends on whether you have a senior engineer who can drive it. Without one, you're shipping confident-looking technical debt at a higher rate than before.

Roger Agerup
Founder and AI advisor