
Miles Brundage describes principal-agent problem in AI agents


Miles Brundage, former OpenAI policy lead and AVERI executive director, outlined a principal-agent problem in AI agents: companies design models and harnesses to conserve tokens, producing lower-effort outputs, while weak user prompts make results worse still. His proposed workaround is to route prompt creation through a separate AI model. Boaz Barak countered that a direct goal command (/goal) is a simpler fix. The exchange centers on how optimization choices and instruction design shape agent performance.
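The workflow Brundage describes — have one model expand a terse human request into a demanding prompt, then hand only that refined prompt to the worker model — can be sketched as a simple two-stage pipeline. This is an illustrative sketch, not code from the thread: `call_model`, `REFINER_INSTRUCTIONS`, and the instruction text are all hypothetical, and the model call is stubbed so the pipeline runs end to end.

```python
def call_model(role_instructions: str, message: str) -> str:
    """Hypothetical LLM call; a real version would hit a chat API.

    Stubbed here: it appends deterministic 'high-effort' requirements
    so the two-stage flow can be demonstrated without a live model.
    """
    return (
        f"{message.strip()}\n\n"
        "Requirements: state assumptions, show intermediate work, "
        "verify the result before answering, and do not stop early."
    )

# Hypothetical system prompt for the refiner model (or separate chat session).
REFINER_INSTRUCTIONS = (
    "Rewrite the user's request as a detailed prompt that demands "
    "thorough, high-effort work from another AI. Add explicit success "
    "criteria and forbid shortcuts."
)

def refine_prompt(raw_request: str) -> str:
    """Stage 1: a separate model expands the low-effort human request."""
    return call_model(REFINER_INSTRUCTIONS, raw_request)

def run_task(raw_request: str) -> str:
    """Stage 2: the worker model receives only the refined prompt."""
    refined = refine_prompt(raw_request)
    return call_model("You are a diligent assistant.", refined)

result = run_task("fix the failing tests in my repo")
```

The key design point, matching Brundage's follow-up, is that the refiner can be a truly different system or just a different chat session; the worker never sees the original terse request.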

Original post

Miles Brundage (@Miles_Brundage) · 1:33 AM · May 15, 2026 · 3.6K Views

There's a bit of a principal-agent problem when it comes to AI agents being lazy. The company wants to conserve tokens + designs the model/harness accordingly. Also people suck at prompting. So you should ask a separate AI for a prompt that gets the main AI to work hard.

Replies

Miles Brundage (@Miles_Brundage) · 1:33 AM · May 15, 2026 · 1.5K Views

By separate AI I mean either a truly different AI system or just a different chat session from the same system. Not sure of the right approach here, but basically anything will be better than YOLOing with a low-effort human prompt, if you want real effort to be put in.

Miles Brundage (@Miles_Brundage) · 1:44 AM · May 15, 2026 · 1.2K Views

I mean the principal-agent thing literally, specifically in contexts where someone is paying for a monthly subscription to an AI platform and is *not* routinely hitting usage limits.

If one hits usage limits often or you're paying per token via an API, that's a different dynamic.

Boaz Barak (@boazbaraktcs) · 2:01 PM · May 15, 2026 · 501 Views

@Miles_Brundage Or you just use /goal

Miles Brundage (@Miles_Brundage) · 3:37 PM · May 15, 2026 · 177 Views

@boazbaraktcs My experience has been that that is not a substitute for more (human- or AI-generated) upfront specification. I use both.

Boaz Barak (@boazbaraktcs) · 3:49 PM · May 15, 2026 · 115 Views

@Miles_Brundage I think it solves laziness, but agree you still want to be very clear about the task you want it to work hard at. And that if the model is going to work many hours on something, it's worth spending the time to make sure you specify it correctly.

Miles Brundage (@Miles_Brundage) · 4:02 PM · May 15, 2026 · 84 Views

Hmm, maybe we just are using it for different distributions or define laziness differently, but I think of it as having two parts: literally stopping short of solving a task per (user, AI) specified standards, and having a low bar for what those standards should be. Don't think /goal solves both aspects (maybe the former) for my types of tasks.