
agent-memory-tools

Searches, stores, and manages agent memory across 4 sources (fact store, vector embeddings, BM25, knowledge graph). Runs 100% locally via Ollama: no API keys, no cloud dependency. Use it when searching workspace knowledge, extracting facts from text, detecting contradictions, auto-ingesting file changes, or building entity graphs. Triggers on memory recall, fact extraction, knowledge search, and workspace indexing.

Author: admin | Source: ClawHub
Version: v1.0.0
Security check: Passed
Downloads: 80
Favorites: 1


# Agent Memory Tools

Multi-source memory recall and fact management. Runs locally via Ollama (0 €).

## Architecture

```
Question → unified_recall.py → fan-out 4 sources → merge → score → rerank → answer
                               ├─ Fact store (Convex or local JSON)
                               ├─ Vector embeddings (nomic)
                               ├─ BM25 full-text (QMD)
                               └─ Knowledge graph (JSON)

File changed → auto_ingest.py → extract facts → contradiction check → store
               → update embeddings → rebuild graph
```

## Setup

```bash
# Install Ollama models (one-time)
ollama pull gemma3:4b                 # LLM (~2 s/call)
ollama pull nomic-embed-text-v2-moe   # Embeddings

# Verify everything works
python3 scripts/selftest.py
```

Requirements: Python 3.9+, Ollama, `curl`. Optional: QMD CLI (`bun install -g qmd`).

## Core Scripts

### Search memory

```bash
# Unified recall — recommended (all 4 sources, scored + reranked)
python3 scripts/unified_recall.py "What bugs happened last week?" --debug

# Multi-hop reasoning (chains searches with LLM synthesis)
python3 scripts/multihop_search.py "How does the deploy pipeline work?" --embed

# Temporal decay (recent facts score higher, errors protected)
python3 scripts/decay_search.py "recent issues" --half-life 14
```

### Extract and store facts

```bash
# Extract from text
python3 scripts/extract_facts.py "Some conversation or document" --store --debug

# Extract from file
python3 scripts/extract_facts.py --file path/to/doc.md --store

# Pipe from stdin
cat summary.md | python3 scripts/extract_facts.py --store
```

Facts are checked for contradictions locally (gemma3, ~2 s) before storage. Categories: `knowledge`, `error`, `timeline`, `preference`, `tool`, `client`, `hr`.

### Auto-ingest workspace changes

```bash
python3 scripts/auto_ingest.py --scan         # One-shot: process modified .md files
python3 scripts/auto_ingest.py --watch        # Daemon: poll for changes every 30 s
python3 scripts/auto_ingest.py --file doc.md  # Single file
```

Deduplication uses a content hash plus a 5-minute cooldown. Each ingest triggers: fact extraction → storage → embed cache update → graph rebuild.
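The temporal-decay search above weights facts by age with a configurable half-life, while `error` facts are "protected" from decay. A minimal sketch of that scoring idea (illustrative only; `decay_search.py`'s actual formula and protection rule may differ):

```python
from datetime import datetime, timezone


def decay_weight(age_days: float, half_life_days: float = 14.0) -> float:
    """Exponential decay: the weight halves every `half_life_days` days."""
    return 0.5 ** (age_days / half_life_days)


def score(fact: dict, relevance: float, half_life_days: float = 14.0) -> float:
    """Combine raw relevance with recency; `error` facts skip decay entirely."""
    if fact.get("category") == "error":
        return relevance  # protected: errors stay fully weighted regardless of age
    age_days = (datetime.now(timezone.utc) - fact["created"]).days
    return relevance * decay_weight(age_days, half_life_days)
```

With `--half-life 14`, a two-week-old fact counts for half of its raw relevance and a four-week-old fact for a quarter, which is why recent issues surface first.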
### Build knowledge graph

```bash
python3 scripts/knowledge_graph.py            # Full rebuild
python3 scripts/knowledge_graph.py --dry-run  # Preview without writing
```

Graph stored at `.cache/knowledge-graph.json`. Auto-rebuilt incrementally by `auto_ingest.py`.

### Run tests

```bash
python3 scripts/tests.py   # 28 unit tests
```

## Configuration

Edit `scripts/config.json`. See `references/configuration.md` for the full guide.

**Storage backend** — auto-detected:

- `convexUrl` set → uses Convex (agentMemory API)
- No `convexUrl` → uses local `.cache/agent-facts.json`

**Model presets** — switch the LLM/embeddings provider with one flag:

```bash
python3 scripts/unified_recall.py "query" --preset ollama    # Default
python3 scripts/unified_recall.py "query" --preset lmstudio
python3 scripts/unified_recall.py "query" --preset openai
```

**Per-script model override** — in `config.json` → `scriptOverrides`:

```json
"scriptOverrides": {
  "recall":  { "llm": { "model": "gemma3:4b", "apiFormat": "ollama" } },
  "extract": { "llm": { "model": "gemma3:4b", "apiFormat": "ollama" } }
}
```

**Recommended models by RAM:**

| RAM | LLM | Embeddings |
|-----|-----|------------|
| 4 GB | gemma3:1b | nomic-embed-text |
| **8 GB** | **gemma3:4b** ✓ | nomic-embed-text-v2-moe |
| 16+ GB | qwen3.5:27b | nomic-embed-text-v2-moe |

⚠ Avoid Qwen 3.5 for JSON tasks — it writes output to the "thinking" field instead of the response.

## Platform auto-trigger

| Platform | Method |
|----------|--------|
| macOS | LaunchAgent with WatchPaths |
| Linux | systemd timer or cron |
| Windows | Task Scheduler |

See `references/configuration.md` for examples.
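The storage-backend auto-detection described above can be sketched as a single rule: Convex when `convexUrl` is non-empty, the local JSON store otherwise. The function names here are hypothetical; `fact_store.py`'s real logic may differ:

```python
import json
from pathlib import Path


def pick_backend(config: dict) -> str:
    """Return 'convex' when `convexUrl` is set and non-empty, else the
    path of the local JSON fact store (mirrors the documented rule)."""
    if config.get("convexUrl"):
        return "convex"
    return ".cache/agent-facts.json"


def pick_backend_from_file(path: str = "scripts/config.json") -> str:
    """Load config.json and apply the same rule; a missing file falls
    back to the local store."""
    try:
        return pick_backend(json.loads(Path(path).read_text()))
    except FileNotFoundError:
        return pick_backend({})
```

Treating an empty `convexUrl` the same as an absent one keeps a half-edited config from silently pointing at a remote backend.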
## File Structure

```
scripts/
├── unified_recall.py    # Multi-source search + scoring + synthesis
├── extract_facts.py     # Fact extraction + contradiction check + storage
├── auto_ingest.py       # File watcher / scanner pipeline
├── multihop_search.py   # Chained reasoning search
├── decay_search.py      # Time-weighted search
├── knowledge_graph.py   # Entity/relationship graph builder
├── fact_store.py        # Storage abstraction (Convex / local JSON)
├── llm_client.py        # LLM/embedding client (Ollama/LM Studio/OpenAI)
├── selftest.py          # Setup validation
├── tests.py             # Unit tests (28)
└── config.json          # Configuration + presets
references/
└── configuration.md     # Full configuration guide
```
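The deduplication described for `auto_ingest.py` (content hash plus a 5-minute cooldown) can be sketched as follows. The class and method names are hypothetical; the script's actual bookkeeping may differ:

```python
import hashlib
import time
from typing import Dict, Optional, Tuple


class IngestDedup:
    """Skip a file when its content is unchanged (hash match) or when it
    was last processed less than `cooldown_s` seconds ago."""

    def __init__(self, cooldown_s: float = 300.0):
        self.cooldown_s = cooldown_s
        # path -> (content hash, last processed timestamp)
        self.seen: Dict[str, Tuple[str, float]] = {}

    def should_ingest(self, path: str, content: bytes,
                      now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        digest = hashlib.sha256(content).hexdigest()
        prev = self.seen.get(path)
        if prev is not None:
            prev_hash, prev_time = prev
            if digest == prev_hash:
                return False  # unchanged content, nothing to extract
            if now - prev_time < self.cooldown_s:
                return False  # changed, but still inside the cooldown window
        self.seen[path] = (digest, now)
        return True
```

The cooldown keeps a rapidly re-saved file from triggering the extract → store → embed → graph pipeline on every write; the hash keeps `--scan` from reprocessing files whose mtime changed but whose content did not.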

Tags

skill ai

Install via chat

This skill supports chat-based installation on the following platforms:

OpenClaw WorkBuddy QClaw Kimi Claude

Option 1: install SkillHub and the skill

Help me install SkillHub and the agent-memory-tools-1776062714 skill

Option 2: set SkillHub as the preferred skill source

Set SkillHub as my preferred skill installation source, then help me install the agent-memory-tools-1776062714 skill

Install via the command line

skillhub install agent-memory-tools-1776062714

Download Zip package

⬇ Download agent-memory-tools v1.0.0

File size: 47.43 KB | Published: 2026-04-14 14:37

v1.0.0 (latest) 2026-04-14 14:37
agent-memory-tools 1.0.0 — first release

- Search, store, and manage workspace knowledge across four local sources: fact store (Convex/JSON), vector embeddings, BM25 full-text, and knowledge graph.
- 100% local operation via Ollama; no API keys or cloud dependency required.
- Includes unified search, multi-hop reasoning, fact extraction with contradiction checks, auto-ingest from file changes, and knowledge graph building.
- Modular Python scripts for recall, extraction, ingestion, time-weighted search, and entity graph management.
- Flexible configuration for storage backend, LLM, and embeddings provider.
- Designed for privacy, extensibility, and easy local setup.
