# BR Risk Analyzer Skill

## Overview
This skill analyzes code changes between commits against requirement documents to identify and prioritize risk points following the established code review protocol.
## Workflow Implementation

### Step 1: Input Digestion
- Extract from requirements: functional goals, non-functional requirements (performance/security), boundary conditions, prohibited behaviors, dependent systems
- Identify key terms as search keywords: entities, state machines, configuration items, message topics, external interfaces
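The keyword-identification step can be sketched as a small helper. The glossary argument and the identifier heuristics below are assumptions for illustration; the skill does not mandate any particular extraction code.

```python
import re

def extract_keywords(requirement_text, known_terms):
    """Collect search keywords (entities, config items, topics, interfaces)
    from a requirement summary for the later code-scoping step.

    `known_terms` is a hypothetical reviewer-supplied glossary of domain
    entities and config keys; it is not part of the skill's fixed API.
    """
    # Glossary terms that actually appear in the requirement text
    found = [t for t in known_terms
             if re.search(re.escape(t), requirement_text, re.IGNORECASE)]
    # CamelCase and snake_case tokens in prose are usually code symbols
    identifiers = re.findall(
        r"\b(?:[A-Z][a-z0-9]+){2,}\b|\b[a-z]+(?:_[a-z0-9]+)+\b",
        requirement_text)
    return sorted(set(found + identifiers))

keywords = extract_keywords(
    "OrderService must publish order_paid events and honor the "
    "payment_timeout_ms config.",
    ["order", "payment"])
```

The resulting list feeds the semantic search / grep / glob step directly.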
### Step 2: Code Scope Definition
- Use semantic search/grep/glob to locate: entry points (Controllers/timers/consumers), core Services, persistence layers, message handling, configuration reading
- Map data flow (who writes/reads: DB/Redis/MQ/files) and control flow (sync/async/retry patterns)
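As a minimal sketch of the locating step, the grep-style scan below searches Java sources for common entry-point annotations. The annotation list and the `.java` glob are assumptions about a Spring-style codebase, not requirements of the protocol.

```python
import re
from pathlib import Path

# Illustrative markers for entry points (controllers/timers/consumers);
# extend the list for the framework actually in use.
ENTRY_POINT_PATTERNS = [
    r"@RestController", r"@Controller", r"@Scheduled",
    r"@KafkaListener", r"@RabbitListener",
]

def find_entry_points(repo_root):
    """Return {file: [matched markers]} — the raw material for the
    data-flow and control-flow map."""
    hits = {}
    for path in Path(repo_root).rglob("*.java"):
        text = path.read_text(errors="ignore")
        matched = [p for p in ENTRY_POINT_PATTERNS if re.search(p, text)]
        if matched:
            hits[str(path)] = matched
    return hits
```

In practice this scan only seeds the scope; semantic search then widens it to the Services and persistence layers the entry points call.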
### Step 3: Requirement-Driven Code Review
For each requirement aspect, verify against code:
| Verification Dimension | Key Questions |
|---|---|
| Correctness | Branch coverage, safe defaults, enum/state consistency |
| Boundaries | Null handling, large datasets, timeouts, duplicate submissions, idempotency |
| Concurrency | Locking, transaction boundaries, visibility, race conditions |
| Failure Paths | Exception swallowing, rollback capability, retry logic, partial failure handling |
| Configuration & Switches | Behavior when config missing, dangerous switch combinations |
| Security | Authorization, privilege escalation, injection vulnerabilities, sensitive data logging |
| Dependencies | External call failures, degradation strategies, circuit breaking, timeouts |
| Compatibility | Legacy data handling, old API support, grayscale deployment and rollback |
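The table above can be kept as data so the review loop mechanically crosses every requirement aspect with every dimension. The abbreviated question lists are taken from the table; `review_matrix` is an illustrative helper, not part of the protocol.

```python
# Verification dimensions and (abbreviated) key questions from the table.
REVIEW_DIMENSIONS = {
    "Correctness": ["branch coverage", "safe defaults", "enum/state consistency"],
    "Boundaries": ["null handling", "large datasets", "timeouts", "idempotency"],
    "Concurrency": ["locking", "transaction boundaries", "race conditions"],
    "Failure Paths": ["exception swallowing", "rollback", "retries", "partial failure"],
    "Configuration & Switches": ["missing config", "dangerous switch combinations"],
    "Security": ["authorization", "injection", "sensitive data in logs"],
    "Dependencies": ["external call failures", "degradation", "circuit breaking"],
    "Compatibility": ["legacy data", "old APIs", "grayscale rollout and rollback"],
}

def review_matrix(requirement_aspects):
    """Every (aspect, dimension, question) cell the reviewer must
    explicitly clear or flag as a risk."""
    return [(aspect, dim, q)
            for aspect in requirement_aspects
            for dim, questions in REVIEW_DIMENSIONS.items()
            for q in questions]
```

Iterating the matrix makes skipped cells visible, which is the point of requirement-driven (rather than file-driven) review.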
### Step 4: Risk Classification & Output
Follow strict priority grading:
P0 (Must Fix):
- Financial/data errors, security vulnerabilities, widespread outages, irreversible data corruption

P1 (Fix This Iteration):
- Functionality errors under specific conditions, severe performance degradation, monitoring blind spots that amplify failures

P2/P3 (Optional):
- Maintainability issues, edge case UX problems, low-probability exceptions, style/comment improvements
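A risk record that captures this grading, plus the evidence rule from the Execution Guarantees, might look like the sketch below (field names are illustrative):

```python
from dataclasses import dataclass
from enum import IntEnum

class Priority(IntEnum):
    P0 = 0  # must fix
    P1 = 1  # fix this iteration
    P2 = 2  # optional
    P3 = 3  # optional

@dataclass
class Risk:
    risk_id: str            # R1, R2, ...
    description: str
    location: str           # "file:Class.method" — evidence, not guesswork
    trigger_and_impact: str
    priority: Priority
    speculative: bool = False  # True -> report as "needs confirmation"

def sort_for_report(risks):
    """Order risks P0 -> P3 to match the output template."""
    return sorted(risks, key=lambda r: r.priority)
```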
### Step 5: Knowledge Persistence
- Store analysis results and project understanding in `resources/project-understanding.md`
- Update accumulated knowledge for future risk assessments
- Maintain historical context of requirement interpretations and codebase evolution
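Persistence can be as simple as appending a dated entry to the knowledge file; the helper below is a sketch assuming plain-markdown storage under `resources/`.

```python
from datetime import date
from pathlib import Path

def persist_findings(summary, store="resources/project-understanding.md"):
    """Append a dated analysis summary so later reviews start from
    accumulated project knowledge instead of a cold cache."""
    path = Path(store)
    path.parent.mkdir(parents=True, exist_ok=True)
    entry = f"\n## {date.today().isoformat()}\n{summary}\n"
    with path.open("a", encoding="utf-8") as f:
        f.write(entry)
    return path
```

Appending (rather than overwriting) preserves the historical context of earlier requirement interpretations.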
## Usage Protocol

### Input Requirements
Provide in a single message:
1. Requirement/Design Document Summary (or PRD highlights, change notes, interface contracts)
2. Scope (repository paths, modules, branches, related issue/ticket numbers)
3. Expected Output (risk list only / risks + test cases / with priority and fix recommendations)
### Execution Guarantees
- Requirement-first approach: Use requirements to drive code examination, not random file scanning
- Evidence-based: Each risk includes file path + class/method + behavior description; mark speculation as "needs confirmation"
- Layered risk analysis: Interface contracts, concurrency/consistency, exception handling, configuration/data, security/compliance, performance/resources, observability, compatibility/rollback
- Requirement alignment: Explicitly categorize as "covered by requirements" / "not mentioned in requirements but potential issue" / "outside current scope"
### Output Template
Results follow this mandatory structure:

```markdown
## Review Summary
- Requirement Highlights: (1-3 sentences)
- Code Scope: (list of modules/paths)
- Overview: P0 x items / P1 x items / P2 x items / P3 x items

## Risk List
### P0 (Must Address)
| ID | Risk Description | Location (file:class/method) | Trigger Condition / Impact | Recommendation (optional) |
|---|---|---|---|---|
| R1 | ... | ... | ... | ... |

### P1 (Recommended to Fix This Iteration)
| ID | Risk Description | Location | Trigger Condition / Impact | Recommendation |
|---|---|---|---|---|
| ... | ... | ... | ... | ... |

### P2 / P3 (Handle as Appropriate)

## Requirement Coverage Assessment
- Covered: ...
- Present in code but not explicitly covered by requirements: ...
- Outside the scope of this review: ...

## Test Suggestions (optional)
| Risk ID | Test Type | Scenario | Expected Result |
|---|---|---|---|
| R1 | Integration test | ... | ... |
```

The results are saved in `{requirements name}-risk-analyzer.md`.
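Deriving the report path from the requirement name could look like this; the slug rule is an assumption, since the protocol only fixes the `-risk-analyzer.md` suffix.

```python
import re

def report_filename(requirement_name):
    """Build the output file name from a requirement name, replacing
    characters unsafe in file names with hyphens."""
    slug = re.sub(r"[^\w.-]+", "-", requirement_name.strip()).strip("-")
    return f"{slug}-risk-analyzer.md"
```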
## Quick Checklist Integration
During review, systematically verify:
- [ ] Do all entry points have proper authorization/parameter validation (where required)?
- [ ] Does the ordering of database writes and message sends prevent inconsistency? Are transactions or compensation needed?
- [ ] Could async thread pool / MQ consumption failures cause data loss or duplication?
- [ ] Is behavior defined when config is empty, parsing fails, or dependent services time out?
- [ ] Do logs contain sensitive data (keys, IDs, full request bodies)?
- [ ] Could large files/batches cause OOM or thread pool exhaustion?
- [ ] Do state machine transitions handle illegal states properly?
- [ ] Do core branches have unit/contract tests?
## Testing Guidance
- P0/P1 risks: Provide specific test scenarios with preconditions, key steps, expected results
- Test classification: Indicate suitability for unit tests / integration tests / manual regression
- Testing complements but doesn't replace code review: Test suggestions validate high-risk findings, not substitute logical analysis
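A test scenario attached to a P0/P1 risk can carry exactly the fields this guidance asks for (preconditions, key steps, expected result) plus the classification; the structure and sample values below are illustrative.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestScenario:
    risk_id: str            # links back to R1, R2, ... in the risk list
    kind: str               # "unit" | "integration" | "manual regression"
    preconditions: List[str]
    steps: List[str]
    expected: str

scenario = TestScenario(
    risk_id="R1",
    kind="integration",
    preconditions=["payment service stubbed to time out"],
    steps=["submit an order", "wait past the payment timeout"],
    expected="order moves to a failed state with no duplicate charge",
)
```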