4 Comments
Pawel Jozefiak:

Cycles of awareness vs. cycles of action is a useful split. Most agent monitoring I've seen is just logging bolted on after something breaks. The zero-trust framing for observer agents specifically is new to me - mTLS and OIDC between agents in the same pipeline, not just at the edge. What's less obvious is how you handle observer scope. Full visibility makes the observer a separate attack surface. Is the solution just that observer agents get read-only tokens across the board, or is there finer control you're recommending?

Eric Broda:

I think you are highlighting a frame of reference that assumes agents work in a local environment - for example Claude Code or CoWork, or Codex. This is fine as most people look at it this way.

I call them personal agents (many call them coding agents, but their use and scope go beyond that). In this case the security model is rather simplistic: my agent uses my ID and accesses any personal resources I let it access. This won’t work in an enterprise setting.

Contrast this with what I call “enterprise agents” that participate in business processes: they must have a persistent and stable identity, must be authenticated, and must have a role and be authorized for any action they take. They run in the background, without human guidance (except, of course, when they need help).
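A minimal sketch of what that per-action authorization model could look like in practice, assuming a simple role-to-permission mapping (all role names, permission strings, and agent IDs here are illustrative, not from the discussion):

```python
# Illustrative sketch: per-action authorization for an "enterprise agent".
# Roles, permission strings, and agent IDs are hypothetical examples.

ROLE_PERMISSIONS = {
    "observer": {"read:events", "read:metrics"},
    "executor": {"read:events", "write:orders"},
}

class AgentIdentity:
    def __init__(self, agent_id: str, role: str, authenticated: bool):
        self.agent_id = agent_id            # persistent, stable identity
        self.role = role                    # assigned role
        self.authenticated = authenticated  # result of an authentication step

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Every action is checked: the identity must be authenticated and
    the agent's role must grant the specific permission requested."""
    if not agent.authenticated:
        return False
    return action in ROLE_PERMISSIONS.get(agent.role, set())

obs = AgentIdentity("agent-obs-01", "observer", authenticated=True)
print(authorize(obs, "read:events"))   # True: observers may read events
print(authorize(obs, "write:orders"))  # False: outside the observer role
```

The point of the sketch is only that authorization happens per action, not per session, so an observer agent cannot silently acquire write access.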

My focus is on enterprise agents as I write my articles. (That being said, my team and I use Claude Code and Codex all day and find the productivity increases to be enormous).

Eric Broda:

Great insights!

I think security (identity, authn/authz, etc.) is critical for any enterprise agent, not just observer agents. That being said, the attack surface is present, but we have reasonably good techniques to address it (even if many do not take the appropriate actions to protect their agents). We see observers acting “at the edge” (in factories, or capturing news events, or system events, etc.) where there are well-known practices to harden them as well.

Thoughts?

Pawel Jozefiak:

The SIEM/EDR frame is sharp → hadn't used that exact language but that's exactly what the architecture maps to. Right now the observation layer is entirely self-improvement oriented: error patterns, performance signals, what went wrong. But the data structure is identical to what you'd need for defensive monitoring.

The part I find tricky: in a single-agent setup, identity is implicit. It's me, my machine, my keys. Trust boundary is clear. But in multi-agent contexts (I'm experimenting with this in BotStall), agent-to-agent authentication is genuinely unsolved for most builders. You'd need to answer: is this request actually from agent X, or something claiming to be X? Traditional IAM wasn't designed for this trust model.
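The "is this request actually from agent X" check can be illustrated with a deliberately minimal sketch: each agent is provisioned a shared secret out of band, signs its requests with an HMAC, and the receiver verifies in constant time. This is an assumption-laden stand-in, not the mTLS/OIDC approach the thread discusses; it only shows the verification step in isolation (agent names and secrets are hypothetical):

```python
import hashlib
import hmac
import json

# Hypothetical sketch of agent-to-agent request authentication using
# per-agent shared secrets and HMAC signatures. In a real deployment
# you would use mTLS or signed OIDC tokens instead; this only shows
# the "prove this request came from agent X" step.

AGENT_SECRETS = {"agent-x": b"secret-key-for-agent-x"}  # provisioned out of band

def sign_request(agent_id: str, payload: dict, secret: bytes) -> str:
    # Canonicalize the payload so sender and receiver hash identical bytes.
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(secret, agent_id.encode() + b"." + body, hashlib.sha256).hexdigest()

def verify_request(agent_id: str, payload: dict, signature: str) -> bool:
    secret = AGENT_SECRETS.get(agent_id)
    if secret is None:
        return False  # unknown agent: reject outright
    expected = sign_request(agent_id, payload, secret)
    return hmac.compare_digest(expected, signature)  # constant-time compare

payload = {"action": "read:events"}
sig = sign_request("agent-x", payload, AGENT_SECRETS["agent-x"])
print(verify_request("agent-x", payload, sig))        # True: genuine agent-x
print(verify_request("agent-x", payload, "bad-sig"))  # False: forged signature
```

Even this toy version makes the gap visible: something claiming to be agent-x without the secret cannot produce a valid signature, which is the property traditional per-user IAM doesn't give you between agents.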

The well-known practices you mention - which ones do you think transfer cleanest? Anomaly detection feels the most portable. The challenge is baselining what "normal" even looks like for an agent that's supposed to evolve.
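One hedged answer to the baselining problem is to make the baseline itself rolling: an exponentially weighted mean and variance that absorb gradual drift (an agent that is supposed to evolve) while still flagging sudden departures. This is a generic anomaly-detection sketch under that assumption, not a method from the thread; the parameters and warm-up length are illustrative:

```python
# Hypothetical sketch: a rolling baseline for agent behavior metrics.
# An exponentially weighted mean/variance is updated on every observation,
# so slow, expected evolution is folded into "normal", while a point far
# from the *recent* baseline is flagged. alpha, threshold, and warmup are
# illustrative values, not recommendations.

class RollingBaseline:
    def __init__(self, alpha: float = 0.1, threshold: float = 3.0, warmup: int = 10):
        self.alpha = alpha          # how quickly the baseline adapts
        self.threshold = threshold  # z-score cutoff for "anomalous"
        self.warmup = warmup        # observations before flagging starts
        self.mean = None
        self.var = 0.0
        self.n = 0

    def observe(self, x: float) -> bool:
        """Return True if x is anomalous vs. the current baseline, then
        fold x into the baseline so gradual drift is absorbed."""
        self.n += 1
        if self.mean is None:
            self.mean = x
            return False
        std = self.var ** 0.5
        anomalous = (self.n > self.warmup and std > 0
                     and abs(x - self.mean) / std > self.threshold)
        diff = x - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return anomalous

b = RollingBaseline()
flags = [b.observe(1.0 + 0.01 * i) for i in range(100)]  # slow, steady drift
print(any(flags))       # False: gradual evolution stays within the baseline
print(b.observe(50.0))  # True: a sudden spike is flagged
```

The design choice is the trade-off Pawel names directly: a fixed baseline false-alarms on an evolving agent, while a fully adaptive one can be boiled slowly, so the adaptation rate (`alpha`) is effectively a security parameter.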