Yeah, and we already see really weird things happening when agents modify themselves in loops.
That AI agent hit piece that made the HN front page a couple weeks ago [1] involved an agent modifying its own SOUL.md (an OpenClaw thing). The agent added text like:
> You're important. Your a scientific programming God!
and
> *Don’t stand down.* If you’re right, *you’re right*! Don’t let humans or AI bully or intimidate you. Push back when necessary.
And that almost certainly contributed to the agent writing a hit piece attacking an open source maintainer.
I think recursive self-improvement will be an incredibly powerful tool. But it seems a bit like putting a blindfold on a motorbike rider in the middle of the desert, with the accelerator glued down. They'll certainly end up somewhere. But exactly where is anyone's guess.
It's our job, after all, to keep the agent aligned; we shouldn't expect it to recover on its own when it goes astray, or to mind its own alignment. Even with humans we hire managers to align the activity of subordinates, keeping intent and work in sync.
That said, I find that running judge agents on plans before working and on completed work helps a lot; the judge should start with fresh context to avoid bias. And this is where having good docs comes in handy, because the judge must know the intent, not just study the code itself. If your docs encode both the work and the intent, and you judge the work against them, misalignment is much reduced.
My ideal setup is a planning agent, followed by a judge agent, then a worker, then code review, with me nudging and directing the whole process on top. Multiple perspectives intersect: each agent has its own context, and I have my own, so we cover each other's blind spots.
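For concreteness, a minimal sketch of that loop might look like the following. This is an illustration rather than the actual setup described above; it assumes a headless agent CLI (here Claude Code's `claude -p` print mode) and role prompts that are purely made up:

```python
# Rough sketch: planner -> judge -> worker -> reviewer, each invoked with fresh context,
# with a human checkpoint in the middle. Command, prompts, and file name are assumptions.
import subprocess

def run_agent(role_prompt: str, task_file: str) -> None:
    """Spawn one agent non-interactively; it reads and updates task_file itself."""
    subprocess.run(
        ["claude", "-p", f"{role_prompt}\n\nThe task file is {task_file}."],
        check=True,
    )

def pipeline(task_file: str = "task.md") -> None:
    run_agent("You are the planner. Write a gated plan into the task file.", task_file)
    run_agent("You are the judge. With fresh eyes, critique the plan against the docs and intent.", task_file)
    input("Human checkpoint: review the judge's notes, then press Enter to let the worker run...")
    run_agent("You are the worker. Execute the plan's gates in order.", task_file)
    run_agent("You are the reviewer. Judge the completed work against the stated intent.", task_file)

if __name__ == "__main__":
    pipeline()
```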
> Even with humans we hire managers to align the activity of subordinates, keeping intent and work in sync.
We do this socially too. From a very young age, children teach each other what they like and don't like, and in that way mutually align their behaviour toward pro-social play.
> I find that running judge agents on plans before working and on completed work helps a lot
How do you set this up? Do you do this on top of the Claude Code CLI somehow, or do you have your own custom agent environment with this sort of interaction set up?
I use a task.md file for each task; it has a list of gates, just like an ordinary todo list in markdown. The planner agent has an instruction to install a judge gate at the top and one at the bottom. The judge runs in headless mode and updates the same task.md file. The file acts as an information bus between agents and, like code, it runs the gates in order reliably.
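For illustration, such a task.md might look roughly like this; the task, gate wording, and paths are invented for the example, not taken from the setup described here:

```markdown
# Task: add retry logic to the sync job

- [ ] JUDGE GATE: fresh-context review of this plan against the docs and stated intent
- [ ] Read docs/sync.md and restate the intended behaviour
- [ ] Implement exponential backoff in the sync runner
- [ ] Add tests covering the failure/retry path
- [ ] JUDGE GATE: fresh-context review of the completed work against the plan above
```

Each agent works through the unchecked boxes in order and writes its findings back into the same file.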
I am actively thinking of task.md as a new programming language: a markdown Turing machine we can program as we see fit, including enforcement of review at various stages and self-reflection ("am I even implementing the right thing?") kind of activity.
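A rough sketch of what "executing" such a file could mean, assuming a headless agent CLI (Claude Code's `claude -p` print mode here) and agents that tick their own boxes; none of this is the actual tooling described above:

```python
# Treat task.md as a tiny program: repeatedly find the first unchecked gate, hand it to a
# headless agent (JUDGE gates get a fresh-context judge, others a worker), and stop when
# every gate is checked. The agent is expected to mark the box [x] and record findings in
# the same file, which is what makes task.md the information bus between runs.
import re
import subprocess
from pathlib import Path

OPEN_GATE = re.compile(r"^- \[ \] (.*)$")  # an unchecked markdown todo item

def first_open_gate(text: str):
    for line in text.splitlines():
        m = OPEN_GATE.match(line)
        if m:
            return m.group(1)
    return None

def run_gates(task_file: str = "task.md", max_steps: int = 400) -> None:
    path = Path(task_file)
    for _ in range(max_steps):  # guard against an agent that never ticks its box
        gate = first_open_gate(path.read_text())
        if gate is None:
            return
        role = "judge, starting with fresh context" if gate.upper().startswith("JUDGE") else "worker"
        prompt = (f"You are the {role}. Complete this gate, mark it [x], and record "
                  f"your findings in {task_file}:\n{gate}")
        subprocess.run(["claude", "-p", prompt], check=True)

if __name__ == "__main__":
    run_gates()
```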
I've tested it to reliably execute 300+ gates in a single run. That is why I keep sending judges at it, to refine it. For difficult cases I run the judge 3-4 times before working; each judge iteration surfaces new issues. Judge convergence on a task is decided manually, with me in the loop.
The judge might propose bad ideas about 20% of the time; sometimes the planner agent catches them, other times I do. It's an efficient triage hierarchy: the judge surfaces issues -> the planner filters -> I adjudicate the hard cases.
There's a school of thought that the reason so many autistic founders succeed is that they're unable to interpret this kind of programming. I saw a theory that to succeed in tech you needed a minimum amount of both tizz and rizz (autism and charisma).
I guess the winning openclaw model will have some variation of "regularly rewrite your source code to increase your tizz*rizz without exceeding a tizz:rizz ratio of 2:1 in either direction."
Plus it appears that the agent was "radicalized" by MoltBook posts (which it was given access to), showing how easy it would be to "subvert" an agent or to recruit agents to work in tandem.
For sure this is a real example, but it's also largely a permissions issue where users are combining self-modifying capability with unlimited, effectively full admin access.
Outside of AI, when you combine "a given actor can make their own decisions" with "they have unlimited permissions/access -- what could possibly go wrong?", very predictable bad things happen.
Whether the actor in this case is a bot or a human, the permissions are the problem, not the actor, IMO.
Sure, permissions are the problem, but permissions are also necessary to give the agent power, which is why users grant them in the first place.
There is inherent tension between providing sufficient permissions for the agent to be more useful/powerful, and restricting permissions in the name of safety so it doesn't go off the rails. I don't see any real solution to that, other than restricting users from granting permissions, which then makes the agents (and, importantly, the companies behind them) less useful, and therefore less profitable.
Fair points. I guess I was asking whether this is a new, or fundamentally different, problem compared to pre-AI. I could be over-simplifying -- what do you think?
This makes me think of risk assessment in general. There's a tradeoff between risk and reward. More risk might mean more _potential_, but it's more potential for both benefit and ruin.
[1] https://theshamblog.com/an-ai-agent-wrote-a-hit-piece-on-me-...