>Good question, it's basically entirely hand-written (with tab autocomplete). I tried to use claude/codex agents a few times but they just didn't work well enough and were net unhelpful; possibly the repo is too far off the data distribution.
And a lot of the tooling he mentioned in OP seems like self-imposed, unnecessary complexity/churn. For the longest time you could say the same about frontend: that you're so behind if you're not adopting {tailwind, react, nodejs, angular, svelte, vue}.
At the end of the day, for the things that an LLM does well, you can achieve roughly the same quality of results by "manually" pasting in relevant code context and asking your question. In cases where this doesn't work, I'm not convinced that wrapping it in an agentic harness will give you that much better results.
Most bespoke agent harnesses are obsoleted by the next model release anyway. The two paradigms that seem to reliably work are "manual" LLM invocation and an LLM with access to a CLI.
Exactly! If people have 'never felt this far behind' and the LLMs are that good, ask the LLM to teach you.
Like so many articles on 'prompt engineers', this 'never felt this far behind' take is laughable. Programmers who learnt how to program (writing algorithms, understanding data structures, reading source code and API docs) are now completely incapable of using a text box to input prompts? Nor can they learn to do so quickly enough! And it's somehow more difficult than what they have routinely been doing? LOL