I think most devs by now have written and shipped code with the help of an LLM. Personally I use it for almost every pull request I submit at this point.
However, writing code is only a small part of our job. Why can't LLMs help with the rest of our day-to-day? In this post I'll share some other uses I've found helpful.
Spruce up your dev environment
Setting up a proper dev environment can be painfully boring and never feels like a priority. Once you get it into a halfway usable state it can go untouched for months. Nobody likes (at least I don't) configuring linters, writing automation scripts, Dockerizing things, cleaning up your package.json, or figuring out Makefile syntax.
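As a taste of the kind of chore this covers, here's a minimal linter config sketch of what an LLM can draft for you, assuming ESLint 9's flat config with typescript-eslint (the packages and the rule tweak are just an example, not what any particular project needs):

```ts
// eslint.config.mjs (or eslint.config.ts) — a minimal flat config sketch
import eslint from "@eslint/js";
import tseslint from "typescript-eslint";

export default tseslint.config(
  // Start from the recommended baselines and tweak from there
  eslint.configs.recommended,
  ...tseslint.configs.recommended,
  {
    rules: {
      // Example tweak: warn instead of error while cleaning up an older codebase
      "@typescript-eslint/no-unused-vars": "warn",
    },
  },
);
```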
Luckily LLMs don't care and will happily do that for you! I've gone through basically all of my projects and done a couple of passes to make my development experience easier and more efficient. For my personal blog that you're reading right now, I even had it convert from yarn to pnpm, something I'd put off for a long time out of laziness.
Improving your dev environment has a nice side effect of making the LLM work better with your project as well.
Learn a new codebase
With agentic LLMs like Claude Code it's super easy to learn a new codebase. Simply ask it to research the codebase and tell you the most important parts you should be aware of. It will happily chug along and give you a report. The most useful part of this is being able to ask follow-up questions, like a conversational REPL.
While it's good at producing a general report, asking specific questions yields better results. For example, I've been onboarding to the backend repository at my new job. Rather than asking "Can you explain this codebase to me?", which returns a bunch of generic "This is a Django project...", I've found much better results asking "I'm trying to add a new backend API to display information about X widget. Can you show me a modern example of what changes I need? Check the Git history if you need to." This is extremely useful for large codebases, since they tend to be in "migration hell" with many ways to achieve the same thing, and it gives the LLM a targeted goal to reach.
I've also found it fun to check out a random repo from GitHub and learn something new.
Research design docs
Much of my time is spent reviewing and writing design docs. LLMs are pretty good at the reviewing aspect and I've made some design alterations based on their feedback. But where they really shine is collecting data and research while writing a design doc. It's great (and much more persuasive) being able to easily add examples and actual data to support your decisions.
For example, I was trying to determine whether I needed to add a new table and API for a change. Normally it would take me a while to manually grep the codebase and get a big picture of the existing schemas, but this is easy work for an LLM. In the end I did need to create both a new table and a new API, but it found several related APIs that I was able to model my design after.
Local code reviews
My review workflow has completely changed over the past few months. Now it goes like:
- Self-review locally with Claude and fix what it finds
- Open a pull request and wait for the review bots to finish (Cursor BugBot, Graphite Diamond, Sentry Seer), fix whatever they find
- Get a human reviewer
The LLM reviewers have found a lot of hard-to-spot bugs so far, and it's nice knowing I have multiple layers of review. At this point the human reviewer is mostly there to check the requirements and UX.
Write test cases
I guess this is largely code-related, but it's worth calling out. There's basically no excuse these days not to have a comprehensive test suite for whatever feature you are building.
I have noticed that tests are where I find the most slop code, so be careful and give detailed instructions. For example, LLMs love to mock and patch out code, which I'm pretty strongly against. I try to list the test cases I care about upfront rather than just saying "go write some tests".
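Here's a sketch of what that looks like in practice, assuming Vitest and a made-up parseWidgetId helper; the case list is the part I write by hand before letting the LLM fill in the bodies:

```ts
// widgets.test.ts — list the cases you care about, no mocks
import { describe, expect, it } from "vitest";
import { parseWidgetId } from "./widgets"; // hypothetical helper under test

describe("parseWidgetId", () => {
  it("accepts a well-formed id", () => {
    expect(parseWidgetId("widget-42")).toBe(42);
  });

  it("rejects an empty string", () => {
    expect(() => parseWidgetId("")).toThrow();
  });

  it("rejects ids with a non-numeric suffix", () => {
    expect(() => parseWidgetId("widget-abc")).toThrow();
  });
});
```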
Accessibility and localization
Similarly for frontend code, it's now trivial to have properly accessible and localized UI that follows the latest ARIA standards. When building a frontend feature my flow tends to be:
- Get everything working how I like in one big file
- Write simple rendering tests
- Refactor the big file into smaller components if needed
- Do an accessibility/localization pass through everything
And all of those steps are done with the help of an LLM. I get that making your website accessible to screen readers and such is often an afterthought; hopefully this will get more folks on board!
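For the rendering-test step, a simple test can double as an accessibility check. A sketch, assuming React Testing Library with Vitest and jest-dom matchers, plus a hypothetical SearchForm component:

```tsx
// SearchForm.test.tsx — a rendering test that also nudges the markup toward real ARIA semantics
import { expect, it } from "vitest";
import { render, screen } from "@testing-library/react";
import { SearchForm } from "./SearchForm"; // hypothetical component

it("exposes its controls through accessible roles and names", () => {
  render(<SearchForm />);

  // getByRole only succeeds if the elements have proper roles and labels,
  // which is roughly what a screen reader relies on.
  expect(screen.getByRole("searchbox", { name: /search/i })).toBeInTheDocument();
  expect(screen.getByRole("button", { name: /search/i })).toBeInTheDocument();
});
```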
I also recommend looking into automation MCPs such as Playwright's to help out with this step. Sometimes it's easier for the LLM to work backwards from the rendered page than to infer how your DOM ends up being displayed.
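Even without the MCP, a plain Playwright test gives the LLM (and you) something concrete to check against the rendered page. A sketch, assuming @playwright/test and a dev server running at a made-up local URL:

```ts
// a11y.spec.ts — verify the page as a user (or screen reader) would see it
import { expect, test } from "@playwright/test";

test("the search form is reachable by role", async ({ page }) => {
  await page.goto("http://localhost:3000/"); // assumed local dev server

  // Role-based locators mirror the accessibility tree rather than the raw DOM
  await expect(page.getByRole("searchbox", { name: /search/i })).toBeVisible();
  await expect(page.getByRole("button", { name: /search/i })).toBeVisible();
});
```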
Create multiple variations
In a perfect world, the best way to solve an open-ended problem is to try out as many variations of the solution as possible, pick the ones you like, and iterate until you find exactly what you're looking for. Unfortunately, in reality we're limited by time and money and rarely have that luxury.
With LLMs this is becoming more feasible. Using Git worktrees, you can let multiple agents work on the same problem in parallel, each in its own checkout, and then pick the winner. Writing code is getting faster and cheaper than ever before.
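A minimal sketch of the worktree setup as a small Node script (the project and branch names are made up; you'd point each agent at one of the resulting directories):

```ts
// variants.ts — spin up one worktree per variation so agents can work in parallel
import { execSync } from "node:child_process";

const variants = ["variant-a", "variant-b", "variant-c"];

for (const name of variants) {
  // Each worktree is a separate checkout on its own branch,
  // sharing the same repository history underneath.
  execSync(`git worktree add ../my-project-${name} -b ${name}`, { stdio: "inherit" });
}
```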