Skill Learning System - Quick Start
TL;DR
The skill learning system automatically learns from successful tasks and QA passes, storing learnings in the knowledge graph to improve future recommendations.
Enable it in one line:
```bash
python3 lib/qa_validator.py --learn --sync --verbose
```
How It Works
1. Task completes → QA validation passes
2. System analyzes → extracts skills used, patterns, tools
3. Learning created → stored in the knowledge graph with metadata
4. Future tasks → system recommends relevant skills based on the prompt
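The steps above can be sketched as a toy learn-and-recommend loop. Everything here, the `store` list, the `learn` and `recommend` helpers, and the keyword-overlap scoring, is an illustrative stand-in, not the actual `lib/skill_learning_engine.py` implementation:

```python
# Toy stand-in for the knowledge graph: a plain list of learnings.
store = []

def learn(task, qa_passed):
    """Steps 1-3: keep a learning only when QA passed."""
    if qa_passed:
        store.append({"keywords": set(task["prompt"].lower().split()),
                      "tools": task["tools_used"]})

def recommend(prompt):
    """Step 4: rank past learnings by keyword overlap with the new prompt."""
    words = set(prompt.lower().split())
    scored = [(len(words & l["keywords"]), l["tools"]) for l in store]
    scored = [(score, tools) for score, tools in scored if score > 0]
    return max(scored)[1] if scored else []

learn({"prompt": "Refactor authentication module",
       "tools_used": ["Bash", "Read", "Edit"]}, qa_passed=True)
print(recommend("Debug authentication issue"))  # → ['Bash', 'Read', 'Edit']
```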
Basic Usage
Run QA with Learning Extraction
```bash
# Standard QA validation only
python3 lib/qa_validator.py --sync --verbose

# With learning extraction enabled
python3 lib/qa_validator.py --learn --sync --verbose
```
Extract Learnings from Completed Task
```python
from lib.skill_learning_engine import SkillLearningSystem

system = SkillLearningSystem()

task_data = {
    "task_id": "my_task",
    "prompt": "Refactor authentication module",
    "project": "overbits",
    "status": "success",
    "tools_used": ["Bash", "Read", "Edit"],
    "duration": 45.2,
    "result_summary": "Successfully refactored",
    "qa_passed": True,
    "timestamp": "2026-01-09T12:00:00",
}

qa_results = {
    "passed": True,
    "results": {"syntax": True, "routes": True},
    "summary": {"errors": 0},
}

result = system.process_task_completion(task_data, qa_results)
print(f"Learning ID: {result['learning_id']}")
```
Get Recommendations for a Task
```python
system = SkillLearningSystem()

recommendations = system.get_recommendations(
    task_prompt="Debug authentication issue",
    project="overbits",
)

for rec in recommendations:
    print(f"{rec['skill']}: {rec['confidence']:.0%} confidence")
```
View Learned Skills Profile
```python
profile = system.get_learning_summary()
print(f"Total learnings: {profile['total_learnings']}")
print(f"Top skills: {profile['top_skills']}")
```
What Gets Learned
The system extracts and learns:
Tool Usage
- Which tools are used for which tasks
- Tool frequency and patterns
- Tool combinations that work well together
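To make the tool-usage idea concrete, here is a minimal, hypothetical tally of tool frequency and whole tool combinations over completed-task records (an illustration, not the engine's actual code):

```python
from collections import Counter

# Hypothetical completed-task records; only the fields used here matter.
tasks = [
    {"tools_used": ["Bash", "Read", "Edit"]},
    {"tools_used": ["Bash", "Read", "Edit"]},
    {"tools_used": ["Read", "Grep"]},
]

# Frequency of individual tools across all tasks.
tool_counts = Counter(t for task in tasks for t in task["tools_used"])

# Frequency of whole combinations (sorted so order does not matter).
combo_counts = Counter(tuple(sorted(task["tools_used"])) for task in tasks)

print(tool_counts.most_common(2))   # the two most-used tools
print(combo_counts.most_common(1))  # the combination seen most often
```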
Decision Patterns
- Optimization: Performance improvement approaches
- Debugging: Error diagnosis and fixing strategies
- Testing: Validation and verification techniques
- Refactoring: Code improvement methods
- Documentation: Documentation practices
- Integration: System integration approaches
- Automation: Automation and scheduling patterns
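A plausible, deliberately simplified way to recognize such patterns is keyword matching on the task prompt. The category names come from the list above, but the keyword lists and `detect_patterns` helper are assumptions for illustration:

```python
# Assumed keyword lists per pattern category; purely illustrative.
PATTERNS = {
    "optimization": ["optimize", "performance", "speed up"],
    "debugging": ["debug", "fix", "error", "diagnose"],
    "testing": ["test", "validate", "verify"],
    "refactoring": ["refactor", "clean up", "restructure"],
}

def detect_patterns(prompt):
    """Return every pattern category whose keywords appear in the prompt."""
    text = prompt.lower()
    return [name for name, words in PATTERNS.items()
            if any(w in text for w in words)]

print(detect_patterns("Debug and fix the slow login, then optimize it"))
```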
Project Knowledge
- Which projects benefit from which approaches
- Project-specific tool combinations
- Project patterns and best practices
Quality Metrics
- Success rates by tool combination
- Task completion times
- QA pass rates by category
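These metrics reduce to simple aggregation. A sketch over hypothetical task records, grouping QA outcomes by tool combination:

```python
from collections import defaultdict

# Hypothetical task records with outcome flags.
records = [
    {"tools": ("Bash", "Edit"), "qa_passed": True,  "duration": 45.2},
    {"tools": ("Bash", "Edit"), "qa_passed": False, "duration": 80.0},
    {"tools": ("Read",),        "qa_passed": True,  "duration": 12.5},
]

# Tally passes and totals per tool combination.
stats = defaultdict(lambda: {"passed": 0, "total": 0})
for r in records:
    stats[r["tools"]]["total"] += 1
    stats[r["tools"]]["passed"] += r["qa_passed"]

for combo, s in stats.items():
    print(combo, f"QA pass rate: {s['passed'] / s['total']:.0%}")
```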
Storage
All learnings are stored in the research knowledge graph:

```
/etc/luz-knowledge/research.db
```
Query learnings:
```bash
python3 lib/knowledge_graph.py search "optimization"
python3 lib/knowledge_graph.py list research finding
```
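For illustration only: a search like the one above boils down to a text match against stored findings. The table layout below is an assumption (the real schema belongs to `lib/knowledge_graph.py`), and an in-memory database stands in for `research.db`:

```python
import sqlite3

# In-memory stand-in for /etc/luz-knowledge/research.db; the `findings`
# table and its columns are assumed, not the actual schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE findings (id INTEGER PRIMARY KEY, body TEXT)")
db.execute("INSERT INTO findings (body) VALUES (?)",
           ("optimization: index the users table before bulk reads",))
db.commit()

# Roughly what a search for "optimization" reduces to.
rows = db.execute("SELECT body FROM findings WHERE body LIKE ?",
                  ("%optimization%",)).fetchall()
print(rows[0][0])
```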
Examples
Example 1: Learn from Database Optimization
```bash
# Task completes successfully with QA passing
python3 lib/qa_validator.py --learn --sync

# System automatically:
# - Identifies tools used: Bash, Read, Edit
# - Recognizes pattern: optimization
# - Stores learning about database optimization
# - Creates relations between tools and pattern
```
Example 2: Get Recommendations
```python
# Later, for a similar task:
recommendations = system.get_recommendations(
    "Optimize API endpoint performance",
    project="overbits",
)

# Might suggest:
# - Use Bash for performance analysis
# - Use Edit for code changes
# - Watch for optimization patterns
# - Similar to previous successful tasks
```
Example 3: Build Team Knowledge
Run multiple tasks with learning enabled:
```bash
# Day 1: Deploy task with --learn
python3 lib/qa_validator.py --learn --sync

# Day 2: Optimization task with --learn
python3 lib/qa_validator.py --learn --sync
```

```python
# Day 3: Similar deployment task; the system now has
# learnings from both previous tasks
recommendations = system.get_recommendations("Deploy new version")
```
Statistics and Monitoring
View learning system statistics:
```bash
python3 lib/qa_learning_integration.py --stats
```
Output:
```
=== QA Learning Integration Statistics ===
total_events: 42
qa_passed: 40
learnings_extracted: 38
extraction_rate: 0.95
last_event: 2026-01-09T12:00:00
```
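The `extraction_rate` field is simply `learnings_extracted / qa_passed`, which the sample numbers confirm:

```python
# Recompute the rate from the sample statistics above.
stats = {"total_events": 42, "qa_passed": 40, "learnings_extracted": 38}
rate = stats["learnings_extracted"] / stats["qa_passed"]
print(f"extraction_rate: {rate:.2f}")  # prints "extraction_rate: 0.95"
```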
Testing
Quick test of the system:
```bash
python3 lib/skill_learning_engine.py test
```
Full test suite:
```bash
python3 -m pytest tests/test_skill_learning.py -v
```
Troubleshooting
"No learnings extracted"
- Check that QA actually passed
- Verify the knowledge graph is accessible
- Run with `--verbose` to see details

"Empty recommendations"
- Complete some tasks with `--learn` first
- The task prompt must match learning keywords
- Check that the knowledge graph has entries:

```bash
python3 lib/knowledge_graph.py list research finding
```

"Permission denied"
- Check `/etc/luz-knowledge/` permissions
- Ensure the user is in the `ai-users` group
- Check knowledge graph domain permissions
Next Steps
- Start collecting learnings: run tasks with `--learn`
- Monitor learnings: check the statistics and the knowledge graph
- Use recommendations: integrate them into task routing
- Refine patterns: add custom extraction patterns as needed
Learn More
- Full documentation: SKILL_LEARNING_SYSTEM.md
- Source code: `lib/skill_learning_engine.py`
- Integration: `lib/qa_learning_integration.py`
- Tests: `tests/test_skill_learning.py`