Just do it.

Fuck Claude

I think I’ve had a case of Claude psychosis for a little while.

I was delegating all my work to Claude Code.

It gets worse when I feel down, or when I feel like I don’t have all my brains.

I believe that using LLM tools is all about balance between the amount of active reasoning that you perform VS what you delegate to the AI.

Even then, there is some minimum amount of active reasoning and learning that I need to perform to be productive with LLM tools.

E.g. I still need to do manual coding on the side on one project anyway. LLM tools are really good at chores, or at retrieving some piece of information from the codebase.

However, it’s actively harmful to try and do everything with them. You need to keep working with the boring details. Those are important.

There are some aspects that are easily delegated, but it’s definitely not large-scale work.

There’s a lot of astroturfing with these tools, as well… Every now and then, I come across an HN comment saying that they see a lot of AI promotion from people who don’t even show their code.

… interesting :)

What if that was true, and it was just a big conspiracy? It’s good, but it’s not that good.

It’s really good when I get blocked because I’ve reached the limits of my knowledge. It can usually point me in the right direction.

It’s the way of using AI that feels the most productive to me: Claude Code with specific instructions geared towards educating me rather than doing everything for me.

We already know how to automate tasks: programming.

Why do we need a new, non-deterministic interface for that?

It’s really cool for little details that don’t matter. Basically, I define the interface; you write that SQL query I really don’t care about, or set up logging, because I know how I want it done, and asking you is going to save me at least 10 minutes of searching the Internet for something I kind of forgot.
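To give an idea of the kind of boilerplate I mean, here is a sketch of a logging setup I’d happily delegate. This is an illustration, not my actual config; the function name and format string are made up:

```python
import logging
import sys


def setup_logging(level: int = logging.INFO) -> logging.Logger:
    """Configure the root logger with a timestamped format on stderr.

    Hypothetical example: the exact format and level are whatever
    you'd normally have to re-search the Internet for.
    """
    handler = logging.StreamHandler(sys.stderr)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname)s %(name)s: %(message)s"
    ))
    root = logging.getLogger()
    root.setLevel(level)
    root.addHandler(handler)
    return root
```

Nothing hard about it; it’s just the kind of thing you forget between projects, and exactly what an LLM regurgitates correctly in seconds.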

… now that I think of it, I could just create an Anki card, and I would probably learn a lot.

Which is exactly what I’m doing for some stuff!


So just now, I had to do something: adjusting resource limits and allocations on my pods in Kubernetes.
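Concretely, that means editing the kind of manifest fragment below. This is a sketch, not my actual config; the names and numbers are made up:

```yaml
# Hypothetical Deployment fragment: the requests/limits being adjusted
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # made-up name
spec:
  template:
    spec:
      containers:
        - name: my-app
          resources:
            requests:   # what the scheduler reserves for the pod
              cpu: 100m
              memory: 256Mi
            limits:     # hard caps enforced at runtime
              cpu: 500m
              memory: 512Mi
```

The whole point of the exercise is picking those numbers based on what the pods actually use, which is where the dashboard comes in.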

I have a dashboard that gives me the list of resources used vs allocated.

(I created it in Claude Code a week or two ago)

It was missing something that I wanted: the average usage of resources.

I just gave the URL of my dashboard to Claude Code, and asked it to make a change. My CLAUDE.md already explains how to do that.

30 seconds later, it was done! That’s definitely something that I’m really happy to let an LLM do for me: fiddling around with Grafana.

Having an LLM create the dashboards helps a lot. What used to take a solid hour now takes 10 minutes and a little bit of prompting.

However, it’s not the “differentiating” work, if that makes sense: it’s just a tool that I use to build what makes my product different. It’s not good at making my actual product, but it’s good in a few focused areas that let me save time to dedicate to improving my product.

Another way to see it is that it gives me more leeway; I am creating dashboards that would’ve consumed a whole lot of caffeine in the past: writing custom ClickHouse materialized views by hand, building the dashboard in Grafana… Now that gets done pretty quickly.

Well, the caveat being that it sometimes kind of messes up the SQL queries, and I am sometimes too lazy to read them. You have to read them, sadly. You can’t just let the LLM loose, for now, I’m afraid.


Just use LLMs for the boring bits. You know them. You don’t feel like doing this chore for the 4th time this month.

The goal of AI should be to give us time to reflect and learn more, rather than to do stuff we don’t understand for us.


I think I finally figured it out.

Use AI only when you know what it’s doing, or when you really don’t care.

I know how Kubernetes works… mostly. I can let it wrangle YAML for me.

I know how to build a Grafana dashboard, how to create a ClickHouse materialized view, because I bothered in the past.

Now, thanks to AI, I don’t need to bother. I have a strong mental model of those, and they’re not evolving so fast that I need to put more effort into learning them.

Actually, I would argue that learning a technology that keeps changing is a bad investment, AI involved or not.

So: