Security Checker
Security scanner for Python skills before publishing to ClawHub.
- Rating: 4.5 (17 reviews)
- Downloads: 10,474
- Version: 1.0.0
# Security Checker

Scan Python skills for security issues before publishing, to ensure code safety.
## Quick Start

```
security_scan.py <file_or_directory>
```

Examples:

```
# Scan a single Python file
security_scan.py scripts/my_script.py

# Scan an entire skill directory
security_scan.py /path/to/skill-folder

# Scan multiple skills
security_scan.py skills/
```
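As a sketch of what the `<file_or_directory>` argument handling could look like, here is one way to collect scan targets. The helper name `collect_python_files` is illustrative, not the skill's actual code:

```python
import os

def collect_python_files(target):
    """Return the .py files to scan for a file-or-directory target."""
    if os.path.isfile(target):
        return [target] if target.endswith(".py") else []
    matches = []
    for root, _dirs, files in os.walk(target):
        for name in files:
            if name.endswith(".py"):
                matches.append(os.path.join(root, name))
    return sorted(matches)
```

Passing a directory (e.g. `skills/`) recurses into every nested skill, which is what makes the "scan multiple skills" invocation work.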
## What It Checks

### Dangerous Imports

Detects imports that could be used maliciously:

- `os` - System-level operations
- `subprocess` - Command execution
- `shutil` - File operations
- `socket` - Network operations
- `urllib` / `requests` - HTTP requests

**Why dangerous?** These imports enable system command execution, file manipulation, and network access that could be exploited.

### Dangerous Functions

Detects potentially unsafe function calls:

- `os.system()` - Executes shell commands
- `subprocess.call()`, `subprocess.run()`, `subprocess.Popen()` - Command execution
- `eval()` - Executes arbitrary code
- `exec()` - Executes arbitrary code

**Why dangerous?** These can execute arbitrary commands or code, leading to remote code execution vulnerabilities.
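A minimal sketch of how import and call detection like this can be done with Python's `ast` module. The function name, the watch lists, and the message format are illustrative assumptions, not the scanner's real implementation:

```python
import ast

DANGEROUS_IMPORTS = {"os", "subprocess", "shutil", "socket", "urllib", "requests"}
DANGEROUS_CALLS = {"eval", "exec"}

def find_issues(source):
    """Return (lineno, message) warnings for dangerous imports and calls."""
    issues = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                root = alias.name.split(".")[0]
                if root in DANGEROUS_IMPORTS:
                    issues.append((node.lineno, f"dangerous import: {root}"))
        elif isinstance(node, ast.ImportFrom):
            root = (node.module or "").split(".")[0]
            if root in DANGEROUS_IMPORTS:
                issues.append((node.lineno, f"dangerous import: {root}"))
        elif isinstance(node, ast.Call):
            # Bare names: eval(...), exec(...)
            if isinstance(node.func, ast.Name) and node.func.id in DANGEROUS_CALLS:
                issues.append((node.lineno, f"dangerous call: {node.func.id}()"))
            # Attribute calls: os.system(...), subprocess.run(...)
            elif (isinstance(node.func, ast.Attribute)
                  and isinstance(node.func.value, ast.Name)
                  and node.func.value.id in {"os", "subprocess"}):
                dotted = f"{node.func.value.id}.{node.func.attr}"
                issues.append((node.lineno, f"dangerous call: {dotted}()"))
    return issues
```

Working on the parsed AST rather than raw text avoids false hits inside strings and comments, which is why it is preferable to plain regex matching for this class of check.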
### Hardcoded Secrets

Detects tokens, keys, and passwords:

- API keys
- Auth tokens (including ClawHub tokens)
- Passwords
- Private keys
- JWT-like tokens

**Why dangerous?** Secrets leaked in published code can be stolen and abused.
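Secret detection is typically regex-based. A hedged sketch with deliberately simple patterns (real scanners use many more, and false positives are expected):

```python
import re

# Heuristic patterns only; not the scanner's actual rule set.
SECRET_PATTERNS = [
    re.compile(r"""(?i)(api[_-]?key|token|password|secret)\s*=\s*["'][^"']{8,}["']"""),
    re.compile(r"""["']sk-[A-Za-z0-9]{16,}["']"""),          # "sk-"-style key shape
    re.compile(r"""eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\."""),  # JWT-like header.payload.
]

def find_secrets(source):
    """Return line numbers whose content matches a secret-like pattern."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(lineno)
    return hits
```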
### Unsafe File Operations

Detects risky file access patterns:

- Absolute file paths outside expected directories
- Parent directory traversal (`..`)
- Writing to system directories

**Why dangerous?** Could lead to unintended file access, data loss, or system modification.
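One way such path checks could be implemented; the `SYSTEM_DIRS` list and the `classify_path` helper are illustrative assumptions:

```python
SYSTEM_DIRS = ("/etc", "/usr", "/bin", "/sbin", "/var")  # example protected roots

def classify_path(path):
    """Flag parent-directory traversal and writes into system directories."""
    parts = path.replace("\\", "/").split("/")
    if ".." in parts:
        return "traversal"
    if path.startswith("/") and any(
        path == d or path.startswith(d + "/") for d in SYSTEM_DIRS
    ):
        return "system-dir"
    return "ok"
```

Matching whole path components (`path == d` or `d + "/"`) avoids false positives such as flagging `/usrlocal/` because it merely shares a prefix with `/usr`.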
## Usage Pattern: Pre-Publish Checklist

Before publishing any skill:

```
# 1. Run security scan
security_scan.py /path/to/skill

# 2. Review any warnings
# If warnings appear, fix the code or document why it's safe

# 3. Re-scan after fixes
security_scan.py /path/to/skill

# 4. Only publish if scan passes
clawhub publish /path/to/skill --slug my-skill ...
```
## Interpretation of Results

### ✅ "No security issues found"

Code appears safe. Proceed with publishing.

### ⚠️ "Warning" (Yellow)

Potentially risky pattern detected. Review the specific line and decide:

- **Is it legitimate?** Document why in code comments or SKILL.md
- **Can it be avoided?** Refactor to safer alternatives
- **Is it necessary?** Clearly document the risk and purpose

### 🔴 "Possible hardcoded secret"

Secret detected. Before publishing:

- Remove the secret
- Use environment variables instead: `os.getenv('API_KEY')`
- Document required env variables in SKILL.md
- Never commit real secrets
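A small sketch of the environment-variable pattern with a loud failure when the variable is missing; `require_env` and the variable names here are hypothetical:

```python
import os

def require_env(name):
    """Read a required secret from the environment, failing loudly if absent."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable {name}; see SKILL.md")
    return value
```

Failing at startup with a message that points to SKILL.md is friendlier than a cryptic `None` error deep inside an API call.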
## Examples

### Legitimate os module usage (documented)

```python
import os  # Used only for path.join() - safe file path construction
workspace = os.path.join(os.path.expanduser("~"), ".openclaw", "workspace")
```

**Scan result:** ⚠️ Warning about os import
**Action:** Document safe usage pattern in code comments

### Hardcoded secret (must fix)

```python
API_KEY = "sk-1234567890abcdef"  # DON'T DO THIS
```

**Scan result:** 🔴 Possible hardcoded secret
**Action:** Remove and use environment variable:

```python
API_KEY = os.getenv("MY_SKILL_API_KEY")
# Document in SKILL.md: Requires MY_SKILL_API_KEY environment variable
```

### Safe pattern (no issues)

```python
# JSON storage for local data only
data = {"notes": [], "metadata": {}}
with open("data.json", "w") as f:
    json.dump(data, f)
```

**Scan result:** ✅ No issues
## Best Practices

1. **Always scan before publishing** - Make it part of your workflow
2. **Review warnings manually** - The scanner can't judge context
3. **Use environment variables for secrets** - Never hardcode
4. **Prefer `json` over `eval`** - Safe parsing vs code execution
5. **Document necessary risks** - If dangerous code is required, explain why
6. **Minimize dangerous imports** - Only use what's truly necessary
7. **Keep code simple** - Complex code is harder to audit
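Practice 4 in concrete form: parsing untrusted text with `json.loads`, which can only produce data, instead of `eval`, which executes whatever expression it is given:

```python
import json

untrusted = '{"cmd": "greet", "count": 2}'

# Safe: json.loads only builds data structures; it cannot run code.
config = json.loads(untrusted)

# Unsafe equivalent (never do this): eval(untrusted) would execute whatever
# Python expression the string contains, e.g. "__import__('os').system('...')".
```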
## Integration with Development Workflow

### Before committing to repo

```bash
# Pre-commit hook concept
python3 /path/to/security_scan.py scripts/
if [ $? -ne 0 ]; then
    echo "❌ Security scan failed. Fix issues before committing."
    exit 1
fi
```

### Automated pre-publish check

```bash
#!/bin/bash
# publish-safe.sh
SKILL_PATH=$1

echo "🔒 Running security scan..."
python3 /path/to/security_scan.py "$SKILL_PATH"
if [ $? -ne 0 ]; then
    echo "❌ Cannot publish: Security scan failed"
    exit 1
fi

echo "✅ Security scan passed"
clawhub publish "$SKILL_PATH"
```
## Limitations

This scanner:

- **Can't judge context** - Some dangerous code may be legitimate
- **Static analysis only** - Doesn't execute code
- **Python-focused** - Other languages need different tools
- **Basic patterns** - Sophisticated obfuscation may evade detection

**Complement with:**

- Manual code review
- Testing in an isolated environment
- Reading through all code before publishing
- Using additional tools: `bandit`, `safety`
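A tiny illustration of why basic patterns can be evaded: both lines below load the `os` module, but only the first is visible to a naive `import os` string match:

```python
import os                       # caught by a scanner's import check
sneaky = __import__("o" + "s")  # evades naive string/regex matching
assert sneaky is os             # same module either way
```

This is exactly the kind of case that manual review and dedicated tools such as `bandit` help catch.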
## Trust Building

Publishing skills that pass security scans builds trust in the community:
- Users know you care about safety
- Your reputation improves
- Skills get adopted more readily
- ClawHub may highlight safe skills
## Examples of Published Skills (All Scanned)

```
# research-assistant
security_scan.py /home/ubuntu/.openclaw/workspace/skills/research-assistant
# ✅ All clear

# task-runner
security_scan.py /home/ubuntu/.openclaw/workspace/skills/task-runner
# ✅ All clear

# security-checker
security_scan.py /home/ubuntu/.openclaw/workspace/skills/security-checker
# ✅ All clear
```

All three skills passed security scans before publishing to ClawHub.
## Installation

```
openclaw install security-checker
```