
kernelgen-flagos


Author: admin | Source: ClawHub
Version: V 1.0.0 | Security check: passed | Downloads: 0


# kernelgen-flagos — Unified GPU Operator Generation Skill

This is a **unified entry point** that bundles four sub-skills into one:

| Sub-skill file | Purpose |
|---|---|
| `kernelgen-general.md` | Generate GPU kernels for **any** Python/Triton repository |
| `kernelgen-for-flaggems.md` | Specialized generation for **FlagGems** repositories |
| `kernelgen-for-vllm.md` | Specialized generation for **vLLM** repositories |
| `kernelgen-submit-feedback.md` | Submit bug reports and feedback via GitHub or email |

All sub-skill files are located in the **same directory** as this `SKILL.md` file.

---

## Routing Protocol — Follow This BEFORE Doing Anything Else

### Phase 1: Detect Repository Type

Use the Glob tool to check for project identity files in the current working directory:

```
Glob: pyproject.toml
Glob: setup.py
Glob: setup.cfg
```

Then use the Read tool to read whichever file exists. Determine the **project name** from the file contents (e.g., `name = "flag_gems"` in pyproject.toml, or `name='vllm'` in setup.py).

Also use the Glob tool to check for characteristic directory structures:

**FlagGems indicators** (match ANY):

- `src/flag_gems/` directory exists
- Project name is `flag_gems`, `flag-gems`, or `FlagGems`
- `import flag_gems` appears in test files

**vLLM indicators** (match ANY):

- `vllm/` directory exists at the repo root (with `vllm/__init__.py`)
- Project name is `vllm`
- `csrc/` directory exists alongside `vllm/`

### Phase 2: Dispatch to Sub-skill

Based on the detection result, use the **Read tool** to read the appropriate sub-skill file from this skill's directory, then **follow the instructions in that file exactly**.

**To locate the sub-skill files**: they are in the same directory as this SKILL.md. Use the Glob tool to find the path:

```
Glob: **/skills/kernelgen-flagos/kernelgen-general.md
```

Then use the Read tool to read the matched path.
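The two detection phases above can be sketched as ordinary Python, for readers who want to see the logic in one place. This is an illustrative sketch, not part of the skill itself: the function name `detect_repo_type` and the regex-based name extraction are my assumptions, standing in for the Glob/Read tool calls the skill actually performs.

```python
import re
from pathlib import Path


def detect_repo_type(root: str = ".") -> str:
    """Classify a working directory as 'flaggems', 'vllm', or 'general',
    mirroring the Phase 1 indicators described above."""
    root_path = Path(root)

    # Read the project name from whichever identity file exists first.
    name = ""
    for fname in ("pyproject.toml", "setup.py", "setup.cfg"):
        f = root_path / fname
        if f.is_file():
            m = re.search(r"name\s*=\s*['\"]([\w.-]+)['\"]",
                          f.read_text(errors="ignore"))
            if m:
                name = m.group(1).lower()
            break

    # FlagGems indicators (any one suffices).
    if (root_path / "src" / "flag_gems").is_dir() \
            or name in {"flag_gems", "flag-gems", "flaggems"}:
        return "flaggems"

    # vLLM indicators (any one suffices).
    if (root_path / "vllm" / "__init__.py").is_file() \
            or name == "vllm" \
            or ((root_path / "csrc").is_dir() and (root_path / "vllm").is_dir()):
        return "vllm"

    return "general"
```

Note that the directory checks run even when no identity file names the project, so a source checkout without packaging metadata is still classified correctly.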
#### Decision Table

| Detection Result | Action |
|---|---|
| FlagGems repository detected | Read `kernelgen-for-flaggems.md` and follow it |
| vLLM repository detected | Read `kernelgen-for-vllm.md` and follow it |
| Neither detected (or unknown) | Read `kernelgen-general.md` and follow it |
| User reports a bug or requests feedback submission | Read `kernelgen-submit-feedback.md` and follow it |

**Important rules:**

1. **Always detect first, dispatch second.** Never skip detection.
2. **Read the entire sub-skill file** before starting execution — do not partially read it.
3. **Follow the sub-skill instructions exactly** as if they were the main SKILL.md. All steps, rules, and protocols in the sub-skill apply fully.
4. **Do not mix sub-skills.** Once you dispatch to a sub-skill, follow it to completion.
5. If the user explicitly requests a specific sub-skill (e.g., "use the FlagGems version"), honor that request regardless of auto-detection results.
6. **CRITICAL — MCP is mandatory**: ALL operator code generation MUST go through the `mcp__kernelgen-mcp__generate_operator` MCP tool. NEVER generate Triton kernels, PyTorch wrappers, or operator implementations yourself. If MCP is not configured, not reachable, or fails after all retries, STOP and report the issue — do NOT fall back to writing code manually.

### Phase 3: Feedback Handling

At **any point** during the workflow, if the user reports a bug, says something is broken, or asks to submit feedback about the skill:

1. Use the Read tool to read `kernelgen-submit-feedback.md` from this skill's directory.
2. Follow the feedback submission workflow described in that file.
3. After feedback is submitted, ask the user whether they want to continue with the operator generation workflow or stop.
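The decision table and rule 5 (explicit user override beats auto-detection) reduce to a small lookup. The sketch below is hypothetical glue code, not the skill's implementation; the mapping keys and the `pick_subskill` helper are my naming, while the file names come from the table above.

```python
# Map each detection result to the sub-skill file that should be read next.
SUBSKILL_BY_REPO = {
    "flaggems": "kernelgen-for-flaggems.md",
    "vllm": "kernelgen-for-vllm.md",
    "general": "kernelgen-general.md",
    "feedback": "kernelgen-submit-feedback.md",
}


def pick_subskill(detection: str, user_override: str = "") -> str:
    """Return the sub-skill file to follow.

    An explicit user request (rule 5) wins over auto-detection;
    anything unrecognized falls back to the general workflow.
    """
    key = user_override or detection
    return SUBSKILL_BY_REPO.get(key, "kernelgen-general.md")
```

Keeping the fallback inside the lookup means an unknown detection result can never leave the router without a target, matching the "Neither detected (or unknown)" row of the table.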
---

## Quick Reference for Users

```bash
# Generate a kernel operator (auto-detects repo type)
/kernelgen-flagos relu

# Generate with explicit function type
/kernelgen-flagos rms_norm --func-type normalization

# The skill will automatically:
# - Detect if you're in a FlagGems repo → use the FlagGems-specific workflow
# - Detect if you're in a vLLM repo → use the vLLM-specific workflow
# - Otherwise → use the general-purpose workflow
```

If you encounter any issues during generation, just say "submit feedback" or "report a bug" and the skill will guide you through the feedback submission process.

Tags

skill ai

Install via Conversation

This skill can be installed via conversation on the following platforms:

OpenClaw WorkBuddy QClaw Kimi Claude

Method 1: Install SkillHub and the skill

Help me install SkillHub and the kernelgen-flagos-1776078610 skill

Method 2: Set SkillHub as the preferred skill installation source

Set SkillHub as my preferred skill installation source, then help me install the kernelgen-flagos-1776078610 skill

Install via Command Line

skillhub install kernelgen-flagos-1776078610

Download Zip Package

⬇ Download kernelgen-flagos v1.0.0

File size: 65.72 KB | Released: 2026-4-14 13:40

v1.0.0 (latest) — 2026-4-14 13:40
- Initial release of kernelgen-flagos as a unified GPU kernel operator generation skill.
- Automatically detects repository type (FlagGems, vLLM, or general Python/Triton) and dispatches to the appropriate specialized workflow.
- Combines four sub-skills: kernelgen-general, kernelgen-for-flaggems, kernelgen-for-vllm, and kernelgen-submit-feedback, removing the need for separate installations.
- Enforces mandatory use of the MCP tool for all operator code generation.
- Supports user-invoked feedback and bug report submission via the bundled feedback sub-skill.
- Enhances user experience with clear routing, explicit detection rules, and a quick reference guide.

