
Ask HN: How are you enforcing permissions for AI agent tool calls in production?

Hacker News

This Hacker News discussion seeks practical insights on how developers are implementing and enforcing security permissions for AI agents making tool calls in production environments, addressing challenges like bypass prevention, identity management, and failure modes.

About 1 month ago


My question: in a real production environment, what’s your enforcement point that the agent cannot bypass?
Like, what actually guarantees the tool call isn’t executed unless it passes policy?

Some specific things I’m curious about:

Are you enforcing permissions inside each tool wrapper, at a gateway/proxy, or via centralized policy service?
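To make the gateway/choke-point option concrete, here is a minimal sketch of a single dispatcher that every tool call must pass through, so the agent has no code path that skips the policy check. All names (`ToolCall`, `POLICY`, `dispatch`) are illustrative, not from any particular framework:

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str        # tool name the agent asked for
    agent_id: str    # which agent is calling

# Allow-list policy: agent -> set of tools it may invoke (hypothetical).
POLICY = {
    "support-bot": {"search_kb", "create_ticket"},
}

def dispatch(call: ToolCall, registry: dict):
    """Single choke point: tools are only reachable through this
    function, so nothing executes unless the policy check passes."""
    allowed = POLICY.get(call.agent_id, set())
    if call.tool not in allowed:
        raise PermissionError(f"{call.agent_id} may not call {call.tool}")
    return registry[call.tool]()

registry = {"search_kb": lambda: "results", "delete_db": lambda: "boom"}

result = dispatch(ToolCall("search_kb", "support-bot"), registry)  # -> "results"
```

The per-tool-wrapper option is the same check pushed into each tool; the centralized-policy-service option replaces the in-process `POLICY` lookup with a network call, trading latency for one place to update rules.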

How do you handle identity + authorization when agents act on behalf of users?
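One common shape for the on-behalf-of case is intersecting scopes: the effective permission set is what the agent is trusted to do AND what the end user is allowed to do, so a delegated call can never exceed either. A toy sketch (the scope tables are invented for illustration):

```python
# Hypothetical scope tables; in production these would come from your
# identity provider / token claims, not in-memory dicts.
AGENT_SCOPES = {"support-bot": {"read_orders", "refund"}}
USER_SCOPES = {"alice": {"read_orders"}}

def effective_scopes(agent_id: str, user_id: str) -> set:
    """Delegated permissions = agent's scopes intersected with the user's."""
    return AGENT_SCOPES.get(agent_id, set()) & USER_SCOPES.get(user_id, set())

def authorize(agent_id: str, user_id: str, action: str) -> bool:
    return action in effective_scopes(agent_id, user_id)

# The agent could refund in general, but not on behalf of alice.
ok = authorize("support-bot", "alice", "read_orders")   # True
blocked = authorize("support-bot", "alice", "refund")   # False
```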

Do you log decisions separately from execution logs (so you can answer “why was this allowed?” later)?
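A sketch of what a separate decision record might look like: it is written at authorization time, independently of whether or how the tool later executes, so "why was this allowed?" can be answered from this log alone. Field names are assumptions, not a standard:

```python
import json
import time

def log_decision(agent_id, user_id, tool, verdict, policy_version, reason):
    """Serialize one authorization decision as a JSON line.
    In production this would be appended to a dedicated log sink,
    kept apart from the tool-execution logs."""
    record = {
        "ts": time.time(),
        "agent_id": agent_id,
        "user_id": user_id,
        "tool": tool,
        "verdict": verdict,            # "allow" | "deny"
        "policy_version": policy_version,  # which policy made the call
        "reason": reason,              # which rule matched
    }
    return json.dumps(record)

line = log_decision("support-bot", "alice", "refund", "deny",
                    "2024-06-01", "user lacks refund scope")
```

Recording the policy version alongside the verdict is what lets you replay a past decision against the policy that was actually live at the time.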

How do you roll out enforcement safely (audit-only/shadow mode -> enforcement)?
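The audit-only -> enforcement rollout can be as simple as a mode flag around the same policy check: in audit mode a denial is recorded but the call still proceeds, which surfaces policy bugs before they break agents. A minimal sketch, with `MODE` and `guard` as hypothetical names:

```python
MODE = "audit"       # "audit" -> log would-be denials only; "enforce" -> block
audit_denials = []   # stand-in for a metrics/logging sink

def guard(tool: str, allowed: bool, execute):
    """Run the policy on every call; only act on denials when enforcing."""
    if not allowed:
        if MODE == "audit":
            audit_denials.append(tool)   # would have been blocked
        else:
            raise PermissionError(tool)
    return execute()

# In audit mode the denied call still runs, but the denial is recorded,
# so you can watch the denial rate before flipping MODE to "enforce".
result = guard("delete_db", allowed=False, execute=lambda: "ran anyway")
```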

Which failure modes hurt most: policy bugs, agent hallucinations, prompt injection, or tool misuse?

Would love to hear how people are doing this in practice (platform/security/infra teams especially).
