# Skill Learning System - Quick Start

## TL;DR

The skill learning system automatically learns from successful tasks and QA passes, storing learnings in the knowledge graph to improve future recommendations.

**Enable it in one line:**

```bash
python3 lib/qa_validator.py --learn --sync --verbose
```

## How It Works

1. **Task Completes** → QA validation passes
2. **System Analyzes** → Extracts skills used, patterns, tools
3. **Learning Created** → Stores in knowledge graph with metadata
4. **Future Tasks** → System recommends relevant skills based on prompt (see the sketch below)
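
The whole loop can be driven from Python with the two calls documented in "Basic Usage" below. This is a minimal sketch, with the payload dictionaries trimmed for brevity; the engine may expect the fuller field set shown later:

```python
from lib.skill_learning_engine import SkillLearningSystem

system = SkillLearningSystem()

# Steps 1-3: a completed task plus its QA results become a stored learning
# (minimal illustrative payloads; "Basic Usage" shows the full field list)
task_data = {
    "task_id": "demo_task",
    "prompt": "Refactor authentication module",
    "project": "overbits",
    "status": "success",
    "tools_used": ["Read", "Edit"],
    "qa_passed": True,
}
qa_results = {"passed": True, "results": {}, "summary": {"errors": 0}}
result = system.process_task_completion(task_data, qa_results)

# Step 4: a later, similar prompt pulls back ranked skill recommendations
for rec in system.get_recommendations("Debug authentication issue", project="overbits"):
    print(rec["skill"], rec["confidence"])
```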

## Basic Usage

### Run QA with Learning Extraction

```bash
# Standard QA validation only
python3 lib/qa_validator.py --sync --verbose

# With learning extraction enabled
python3 lib/qa_validator.py --learn --sync --verbose
```

### Extract Learnings from Completed Task

```python
from lib.skill_learning_engine import SkillLearningSystem

system = SkillLearningSystem()

task_data = {
    "task_id": "my_task",
    "prompt": "Refactor authentication module",
    "project": "overbits",
    "status": "success",
    "tools_used": ["Bash", "Read", "Edit"],
    "duration": 45.2,
    "result_summary": "Successfully refactored",
    "qa_passed": True,
    "timestamp": "2026-01-09T12:00:00"
}

qa_results = {
    "passed": True,
    "results": {"syntax": True, "routes": True},
    "summary": {"errors": 0}
}

result = system.process_task_completion(task_data, qa_results)
print(f"Learning ID: {result['learning_id']}")
```

### Get Recommendations for a Task

```python
system = SkillLearningSystem()

recommendations = system.get_recommendations(
    task_prompt="Debug authentication issue",
    project="overbits"
)

for rec in recommendations:
    print(f"{rec['skill']}: {rec['confidence']:.0%} confidence")
```

### View Learned Skills Profile

```python
profile = system.get_learning_summary()

print(f"Total learnings: {profile['total_learnings']}")
print(f"Top skills: {profile['top_skills']}")
```

## What Gets Learned

The system extracts and learns:

### Tool Usage
- Which tools are used for which tasks
- Tool frequency and patterns
- Tool combinations that work well together

### Decision Patterns
- **Optimization**: Performance improvement approaches
- **Debugging**: Error diagnosis and fixing strategies
- **Testing**: Validation and verification techniques
- **Refactoring**: Code improvement methods
- **Documentation**: Documentation practices
- **Integration**: System integration approaches
- **Automation**: Automation and scheduling patterns

### Project Knowledge
- Which projects benefit from which approaches
- Project-specific tool combinations
- Project patterns and best practices

### Quality Metrics
- Success rates by tool combination
- Task completion times
- QA pass rates by category
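
Conceptually, one stored learning ties these pieces together as a knowledge-graph entry. The real schema is defined in `lib/skill_learning_engine.py`; the dictionary below is only a hypothetical illustration of the kind of metadata involved:

```python
# Hypothetical shape of a single learning record -- illustrative only,
# not the actual schema used by lib/skill_learning_engine.py
learning = {
    "pattern": "optimization",               # one of the decision patterns above
    "tools_used": ["Bash", "Read", "Edit"],  # tool usage
    "project": "overbits",                   # project knowledge
    "qa_passed": True,                       # quality metrics
    "duration": 45.2,                        # seconds, as in task_data above
}
```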

## Storage

All learnings are stored in the **research knowledge graph**:

```
/etc/luz-knowledge/research.db
```

Query learnings:

```bash
python3 lib/knowledge_graph.py search "optimization"
python3 lib/knowledge_graph.py list research finding
```
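
If you want to peek at the raw store directly, and assuming `research.db` is an ordinary SQLite file, the standard `sqlite3` shell can inspect it read-only:

```bash
# Assumes research.db is SQLite and the sqlite3 CLI is installed
sqlite3 -readonly /etc/luz-knowledge/research.db ".tables"
```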

## Examples

### Example 1: Learn from Database Optimization

```bash
# Task completes successfully with QA passing
python3 lib/qa_validator.py --learn --sync

# System automatically:
# - Identifies tools used: Bash, Read, Edit
# - Recognizes pattern: optimization
# - Stores learning about database optimization
# - Creates relations between tools and pattern
```

### Example 2: Get Recommendations

```python
# Later, for a similar task:
recommendations = system.get_recommendations(
    "Optimize API endpoint performance",
    project="overbits"
)

# Might suggest:
# - Use Bash for performance analysis
# - Use Edit for code changes
# - Watch for optimization patterns
# - Similar to previous successful tasks
```

### Example 3: Build Team Knowledge

Run multiple tasks with learning enabled:

```bash
# Day 1: Deploy task with --learn
python3 lib/qa_validator.py --learn --sync

# Day 2: Optimization task with --learn
python3 lib/qa_validator.py --learn --sync
```

```python
# Day 3: Similar deployment task
# System now has learnings from both previous tasks
recommendations = system.get_recommendations("Deploy new version")
```

## Statistics and Monitoring

View learning system statistics:

```bash
python3 lib/qa_learning_integration.py --stats
```

Output:

```
=== QA Learning Integration Statistics ===

total_events: 42
qa_passed: 40
learnings_extracted: 38
extraction_rate: 0.95
last_event: 2026-01-09T12:00:00
```
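
In this sample output, `extraction_rate` corresponds to `learnings_extracted / qa_passed` (38 / 40 = 0.95); the exact definition is whatever `lib/qa_learning_integration.py` computes, so treat this reading as indicative.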

## Testing

Quick test of the system:

```bash
python3 lib/skill_learning_engine.py test
```

Full test suite:

```bash
python3 -m pytest tests/test_skill_learning.py -v
```

## Troubleshooting

### "No learnings extracted"
- Check that QA actually passed
- Verify knowledge graph is accessible
- Run with `--verbose` to see details

### "Empty recommendations"
- Need to complete tasks with `--learn` first
- Task prompt must match learning keywords
- Check knowledge graph has entries:

```bash
python3 lib/knowledge_graph.py list research finding
```

### "Permission denied"
- Check `/etc/luz-knowledge/` permissions (see the commands below)
- Ensure user is in `ai-users` group
- Check knowledge graph domain permissions
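
For the first two checks, commands along these lines usually show quickly whether filesystem access is the problem (the path and the `ai-users` group name are taken from this guide; adjust for your install):

```bash
# Inspect ownership/permissions of the knowledge store
ls -ld /etc/luz-knowledge /etc/luz-knowledge/research.db

# Confirm the current user is in the ai-users group
id -nG | tr ' ' '\n' | grep -x 'ai-users' || echo "not in ai-users"
```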

## Next Steps

1. **Start collecting learnings**: Run tasks with `--learn`
2. **Monitor learnings**: Check statistics and knowledge graph
3. **Use recommendations**: Integrate into task routing (a sketch follows below)
4. **Refine patterns**: Add custom extraction patterns as needed
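
As a rough idea of what "integrate into task routing" could look like, a hypothetical router might consult the learner before dispatching work; `route_task` and the 0.7 cutoff below are made up for illustration:

```python
from lib.skill_learning_engine import SkillLearningSystem

system = SkillLearningSystem()

def route_task(prompt: str, project: str) -> list[str]:
    """Hypothetical helper: return skills the learner is reasonably confident about."""
    recs = system.get_recommendations(prompt, project=project)
    # 0.7 is an arbitrary illustrative threshold, not a system default
    return [rec["skill"] for rec in recs if rec["confidence"] >= 0.7]

print(route_task("Deploy new version", "overbits"))
```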

## Learn More

- Full documentation: [SKILL_LEARNING_SYSTEM.md](./SKILL_LEARNING_SYSTEM.md)
- Source code: `lib/skill_learning_engine.py`
- Integration: `lib/qa_learning_integration.py`
- Tests: `tests/test_skill_learning.py`