
agent-guardrails代理护栏

Stop AI agents from secretly bypassing your rules. Mechanical enforcement with git hooks, secret detection, deployment verification, and import registries. Born from real production incidents: server crashes, token leaks, code rewrites. Works with Claude Code, Clawdbot, Cursor. Install once, enforce forever.

Author: admin | Source: ClawHub | Version: v1.0.0 | Security check: passed

Agent Guardrails

Mechanical enforcement for AI agent project standards. Rules in markdown are suggestions. Code hooks are laws.

Quick Start

CODEBLOCK0

This installs the git pre-commit hook, creates a registry template, and copies check scripts into your project.
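The hook-installation step the installer performs amounts to copying a hook file into the repository's `.git/hooks` directory and marking it executable. A minimal sketch, assuming nothing about the real script's paths or naming:

```python
import os
import shutil
import stat

def install_pre_commit_hook(repo_root: str, hook_source: str) -> str:
    """Copy a pre-commit hook into the repo's .git/hooks directory and
    make it executable. Paths and names here are illustrative, not the
    skill's actual layout."""
    hooks_dir = os.path.join(repo_root, ".git", "hooks")
    os.makedirs(hooks_dir, exist_ok=True)
    dest = os.path.join(hooks_dir, "pre-commit")
    shutil.copy(hook_source, dest)
    # Add execute permission so git will actually run the hook
    mode = os.stat(dest).st_mode
    os.chmod(dest, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return dest
```

Because the hook lives in `.git/hooks`, it runs on every commit regardless of what the agent's prompt says, which is the whole point of mechanical enforcement.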

Enforcement Hierarchy

  1. Code hooks (git pre-commit, pre/post-creation checks) — 100% reliable
  2. Architectural constraints (registries, import enforcement) — 95% reliable
  3. Self-verification loops (agent checks own work) — 80% reliable
  4. Prompt rules (AGENTS.md, system prompts) — 60-70% reliable
  5. Markdown rules — 40-50% reliable, degrades with context length
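Tier 1 works because the hook can refuse the commit outright. A minimal sketch of such a hook, assuming a hypothetical pattern list and wiring (this is not the skill's shipped hook):

```python
"""Illustrative pre-commit hook: block commits whose staged diff
contains known bypass patterns. The pattern list is an example only."""
import re
import subprocess
import sys

BYPASS_PATTERNS = [
    r"#\s*noqa:?\s*$",          # blanket lint suppression
    r"quick[_ ]version",        # tell-tale reimplementation comment
    r"skip[_ -]?verification",  # disabling checks
]

def find_bypass_patterns(diff_text: str) -> list[str]:
    """Return every pattern that matches the staged diff."""
    return [
        p for p in BYPASS_PATTERNS
        if re.search(p, diff_text, re.IGNORECASE | re.MULTILINE)
    ]

def main() -> int:
    # Installed as .git/hooks/pre-commit, invoked via sys.exit(main())
    staged = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True
    ).stdout
    hits = find_bypass_patterns(staged)
    if hits:
        print(f"Commit blocked: bypass patterns found: {hits}", file=sys.stderr)
        return 1
    return 0
```

A non-zero exit code aborts the commit; the agent cannot "agree" its way past it.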

Tools Provided

Scripts

| Script | When to Run | What It Does |
| --- | --- | --- |
| install script | Once per project | Installs hooks and scaffolding |
| `pre-create-check.sh` | Before creating new .py files | Lists existing modules/functions to prevent reimplementation |
| `post-create-validate.sh` | After creating/editing .py files | Detects duplicates, missing imports, bypass patterns |
| `check-secrets.sh` | Before commits / on demand | Scans for hardcoded tokens, keys, passwords |
| `create-deployment-check.sh` | When setting up deployment verification | Creates `.deployment-check.sh`, checklist, and git hook template |
| `install-skill-feedback-loop.sh` | When setting up skill update automation | Creates detection, auto-commit, and git hook for skill updates |

Assets

| Asset | Purpose |
| --- | --- |
| pre-commit hook | Ready-to-install git hook blocking bypass patterns and secrets |
| registry template | Template `__init__.py` for project module registries |
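A module registry gives check scripts a machine-readable answer to "does this already exist?". A minimal sketch of the idea, with hypothetical module and function names (the shipped `__init__.py` template may be structured differently):

```python
from typing import Optional

# Hypothetical registry a check script can consult before a new .py
# file is created: module name -> public functions it already provides.
MODULE_REGISTRY = {
    "notify": ["send_notification", "format_message"],
    "auth": ["load_token", "refresh_token"],
}

def already_provided(function_name: str) -> Optional[str]:
    """Return the module that already implements function_name, if any.
    A hit means the agent should import, not reimplement."""
    for module, functions in MODULE_REGISTRY.items():
        if function_name in functions:
            return module
    return None
```

Keeping the registry in code rather than prose means the pre-creation check can enforce it rather than merely suggest it.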

References

| File | Contents |
| --- | --- |
| enforcement research notes | Research on why code > prompts for enforcement |
| `agents-md-template.md` | Template AGENTS.md with mechanical enforcement rules |
| `deployment-verification-guide.md` | Full guide on preventing deployment gaps |
| `skill-update-feedback.md` | Meta-enforcement: automatic skill update feedback loop |
| `SKILL_CN.md` | Chinese translation of this document |

Usage Workflow

Setting up a new project

CODEBLOCK1

Before creating any new .py file

CODEBLOCK2

Review the output. If existing functions cover your needs, import them.
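A pre-creation check can enumerate what already exists by parsing each module's AST instead of grepping. A sketch of that idea in Python (the shipped script is shell, so this is an analogue, not its implementation):

```python
import ast

def public_functions(source: str) -> list[str]:
    """List top-level function names defined in a Python source string,
    skipping private (underscore-prefixed) ones."""
    tree = ast.parse(source)
    return [
        node.name
        for node in tree.body
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        and not node.name.startswith("_")
    ]
```

Running this over every existing module yields the "what's already here" listing the agent reviews before writing anything new.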

After creating/editing a .py file

CODEBLOCK3

Fix any warnings before proceeding.
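The duplicate check in post-creation validation reduces to comparing the new file's definitions against the set of names already registered elsewhere. A sketch under the same assumptions as above:

```python
import ast

def duplicate_definitions(new_source: str, existing_names: set[str]) -> list[str]:
    """Return function names in new_source that shadow names already
    registered elsewhere in the project - a likely reimplementation."""
    tree = ast.parse(new_source)
    return sorted(
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and node.name in existing_names
    )
```

Any non-empty result is the "bypass pattern" warning: the validated implementation exists and should be imported instead.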

Setting up deployment verification

CODEBLOCK4

This creates:

  - `.deployment-check.sh` - automated verification script
  - a deployment checklist - full deployment workflow
  - a git hook template

Then customize:

  1. Add tests to `.deployment-check.sh` for your integration points
  2. Document your flow in the deployment checklist
  3. Install the git hook

See `references/deployment-verification-guide.md` for the full guide.
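The check script's job is to prove the production entry point actually calls the new code. One way to sketch that in Python, using a crontab check as the example integration point (the real script is shell and project-specific):

```python
def cron_calls_current_entrypoint(crontab_text: str, entrypoint: str) -> bool:
    """True if any active (non-comment) crontab line invokes entrypoint.
    Catches the 'updated notify.py but cron still calls the old version'
    deployment gap."""
    for line in crontab_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and entrypoint in line:
            return True
    return False
```

Wired into a pre-commit or pre-push hook, a `False` here blocks the commit until the production path is actually updated.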

Adding to AGENTS.md

Copy the template from references/agents-md-template.md and adapt to your project.

中文文档 / Chinese Documentation

See references/SKILL_CN.md for the full Chinese translation of this skill.

Common Agent Failure Modes

1. Reimplementation (Bypass Pattern)

Symptom: Agent creates a "quick version" instead of importing validated code.
Enforcement: `pre-create-check.sh` + `post-create-validate.sh` + git hook

2. Hardcoded Secrets

Symptom: Tokens/keys in code instead of env vars.
Enforcement: `check-secrets.sh` + git hook
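Secret scanning reduces to a small set of regexes run over the staged changes. A sketch with illustrative patterns only; the shipped `check-secrets.sh` presumably covers more token formats:

```python
import re

# Illustrative patterns; real scanners cover many more providers.
SECRET_PATTERNS = {
    "github_token": r"ghp_[A-Za-z0-9]{36}",
    "aws_access_key": r"AKIA[0-9A-Z]{16}",
    "generic_assignment": r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]+['\"]",
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of secret patterns that match the given text."""
    return [name for name, pat in SECRET_PATTERNS.items() if re.search(pat, text)]
```

Code that reads credentials from the environment (e.g. `os.environ["TOKEN"]`) passes cleanly, so the hook pushes the agent toward the correct pattern rather than just punishing the wrong one.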

3. Deployment Gap

Symptom: Built a feature but forgot to wire it into production, so users never receive the benefit.
Example: Updated notify.py but cron still calls the old version.
Enforcement: `.deployment-check.sh` + git hook

This is the hardest to catch because:

  - Code runs fine when tested manually
  - Agent marks the task "done" after writing code
  - The problem only surfaces when a user complains

Solution: Mechanical end-to-end verification before allowing "done."

4. Skill Update Gap (Meta)

Symptom: Built an enforcement improvement in one project but forgot to update the skill itself.
Example: Created deployment verification for Project A, but other projects don't benefit because the skill wasn't updated.
Enforcement: `install-skill-feedback-loop.sh` → automatic detection + semi-automatic commit

This is a meta-failure mode because:

  - It's about enforcement improvements themselves
  - Without the fix, improvements stay siloed
  - With the fix, knowledge compounds automatically

Solution: Automatic detection of enforcement improvements with task creation and semi-automatic commits.
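Detecting an enforcement improvement can be as simple as watching which paths a commit touches. A sketch whose path conventions are assumptions, not the feedback loop's actual rules:

```python
def touches_enforcement(changed_paths: list[str]) -> bool:
    """Heuristic: a commit that edits hooks or check scripts is an
    enforcement improvement and should trigger a skill-update task.
    The marker substrings below are illustrative conventions."""
    markers = ("hooks/", "-check.sh", "validate.sh", "check-secrets")
    return any(any(m in path for m in markers) for path in changed_paths)
```

A post-commit hook that calls this can open the skill-update task automatically, which is what turns a one-project fix into compounding knowledge.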

Key Principle

Don't add more markdown rules. Add mechanical enforcement.
If an agent keeps bypassing a standard, don't write a stronger rule — write a hook that blocks it.
Corollary: If an agent keeps forgetting integration, don't remind it — make it mechanically verify before commit.

Tags

skill, ai

Install via Conversation

This skill can be installed via conversation on the following platforms:

OpenClaw, WorkBuddy, QClaw, Kimi, Claude

Option 1: Install SkillHub and the skill

Help me install SkillHub and the agent-guardrails-1776419934 skill

Option 2: Set SkillHub as the preferred skill installation source

Set SkillHub as my preferred skill installation source, then help me install the agent-guardrails-1776419934 skill

Install via Command Line

skillhub install agent-guardrails-1776419934

Download

⬇ Download agent-guardrails v1.0.0 (free)

File size: 40.74 KB | Published: 2026-4-17 19:01

v1.0.0 (latest) - 2026-4-17 19:01
agent-guardrails 1.0.0 introduces robust mechanical enforcement tools for AI agent project standards.

- Adds automated git hooks and scripts for code, deployment, and secret enforcement.
- Provides tools for secret detection, import registry creation, and deployment verification.
- Introduces self-verification feedback loops and meta-enforcement for skill updates.
- Supplies detailed documentation, templates, and workflow guides (English and Chinese).
- Supports Claude Code, Clawdbot, Cursor, and any AI agent projects.
