# Glin Profanity - Content Moderation Library

Profanity detection and content moderation library. It catches evasion attempts such as leetspeak (`f4ck`, `sh1t`), Unicode tricks (Cyrillic lookalikes), and other obfuscated text.

Version 1.0.0 · Rating 4.9 (286 reviews) · 817 downloads
## Installation

```bash
# JavaScript/TypeScript
npm install glin-profanity

# Python
pip install glin-profanity
```
## Quick Usage

### JavaScript/TypeScript

```javascript
import { checkProfanity, Filter } from 'glin-profanity';

// Simple check
const result = checkProfanity("Your text here", {
  detectLeetspeak: true,
  normalizeUnicode: true,
  languages: ['english']
});

result.containsProfanity // boolean
result.profaneWords      // array of detected words
result.processedText     // censored version

// With a Filter instance
const filter = new Filter({
  replaceWith: '***',
  detectLeetspeak: true,
  normalizeUnicode: true
});

filter.isProfane("text")      // boolean
filter.checkProfanity("text") // full result object
```
### Python

```python
from glin_profanity import Filter

filter = Filter({
    "languages": ["english"],
    "replace_with": "***",
    "detect_leetspeak": True
})

filter.is_profane("text")      # True/False
filter.check_profanity("text") # Full result dict
```
### React Hook

```tsx
import { useProfanityChecker } from 'glin-profanity';

function ChatInput() {
  const { result, checkText } = useProfanityChecker({
    detectLeetspeak: true
  });
  return (
    <input onChange={(e) => checkText(e.target.value)} />
  );
}
```
## Key Features

| Feature | Description |
|---------|-------------|
| Leetspeak detection | `f4ck`, `sh1t`, `@$$` patterns |
| Unicode normalization | Cyrillic `fսck` → `fuck` |
| 24 languages | Including Arabic, Chinese, Russian, Hindi |
| Context whitelists | Medical, gaming, technical domains |
| ML integration | Optional TensorFlow.js toxicity detection |
| Result caching | LRU cache for performance |
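The evasion-catching features above can be pictured with a small sketch. This is an illustrative assumption about the general technique, not glin-profanity's actual implementation: leetspeak characters and Unicode lookalikes are mapped back to plain ASCII before any word list is consulted. The `LEET` and `CONFUSABLES` tables here are tiny stand-ins.

```javascript
// Illustrative sketch only -- not the library's internals.
// Map leetspeak digits/symbols and a few Unicode lookalike letters back
// to ASCII so a plain word list can match the normalized text.
const LEET = { '1': 'i', '3': 'e', '0': 'o', '@': 'a', '$': 's', '5': 's', '7': 't' };
const CONFUSABLES = { 'а': 'a', 'е': 'e', 'о': 'o', 'с': 'c', 'ս': 'u' }; // lookalikes

function normalizeEvasions(text) {
  return [...text.toLowerCase()]
    .map((ch) => CONFUSABLES[ch] ?? LEET[ch] ?? ch)
    .join('');
}

normalizeEvasions('sh1t'); // 'shit'
normalizeEvasions('fսck'); // 'fuck' (the 'ս' is a non-Latin lookalike)
```

A real implementation would use a much larger confusables table and handle multi-character substitutions; the point is only that normalization happens before matching.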
## Configuration Options

```javascript
const filter = new Filter({
  languages: ['english', 'spanish'],  // Languages to check
  detectLeetspeak: true,              // Catch f4ck, sh1t
  leetspeakLevel: 'moderate',         // basic | moderate | aggressive
  normalizeUnicode: true,             // Catch Unicode tricks
  replaceWith: '*',                   // Replacement character
  preserveFirstLetter: false,         // f*** vs ****
  customWords: ['badword'],           // Add custom words
  ignoreWords: ['hell'],              // Whitelist words
  cacheSize: 1000                     // LRU cache entries
});
```
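The `cacheSize` option suggests that check results are memoized in an LRU cache. As a sketch of that general idea (an assumption about the technique, not code from the library), a JavaScript `Map`'s insertion order makes a compact LRU:

```javascript
// Illustrative LRU cache sketch -- not glin-profanity's implementation.
// Map iterates in insertion order, so the first key is always the
// least recently used entry.
class LruCache {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // evict the least recently used entry (first in insertion order)
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```

On a hit, the entry is re-inserted to become most recently used; once `maxSize` is exceeded, the oldest entry is evicted. Caching pays off because chat traffic tends to repeat the same short strings.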
## Context-Aware Analysis

```javascript
import { analyzeContext } from 'glin-profanity';

const result = analyzeContext("The patient has a breast tumor", {
  domain: 'medical',        // medical | gaming | technical | educational
  contextWindow: 3,         // Words around the match to consider
  confidenceThreshold: 0.7  // Minimum confidence to flag
});
```
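How a context window can clear a flag is easy to sketch. The `MEDICAL_TERMS` set and `isWhitelisted` helper below are hypothetical illustrations of the approach, not part of the library's API: a match is suppressed when a domain term appears within `contextWindow` words of it.

```javascript
// Hypothetical helper, not the library's API: un-flag a word when a
// domain term appears within `contextWindow` words of the match.
const MEDICAL_TERMS = new Set(['patient', 'tumor', 'exam', 'diagnosis']);

function isWhitelisted(words, matchIndex, contextWindow) {
  const start = Math.max(0, matchIndex - contextWindow);
  const end = Math.min(words.length, matchIndex + contextWindow + 1);
  return words.slice(start, end).some((w) => MEDICAL_TERMS.has(w));
}

const words = 'The patient has a breast tumor'.toLowerCase().split(/\s+/);
isWhitelisted(words, 4, 3); // true -- 'patient' and 'tumor' sit inside the window
```

The `confidenceThreshold` would then come into play when no domain term is nearby and the match has to be scored on its own.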
## Batch Processing

```javascript
import { batchCheck } from 'glin-profanity';

const results = batchCheck([
  "Comment 1",
  "Comment 2",
  "Comment 3"
], { returnOnlyFlagged: true });
```
## ML-Powered Detection (Optional)

```javascript
import { loadToxicityModel, checkToxicity } from 'glin-profanity/ml';

await loadToxicityModel({ threshold: 0.9 });
const result = await checkToxicity("You're the worst");
// { toxic: true, categories: { toxicity: 0.92, insult: 0.87 } }
```
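The threshold's role can be shown without the model. Given per-category scores like those in the comment above, a message is flagged when any category clears the configured threshold; the `isToxic` helper here is a hypothetical illustration, not the `glin-profanity/ml` API.

```javascript
// Hypothetical helper: decide toxicity from per-category model scores.
function isToxic(categories, threshold) {
  return Object.values(categories).some((score) => score >= threshold);
}

isToxic({ toxicity: 0.92, insult: 0.87 }, 0.9); // true  -- toxicity clears 0.9
isToxic({ toxicity: 0.42, insult: 0.10 }, 0.9); // false -- nothing clears 0.9
```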
## Common Patterns

### Chat/Comment Moderation

```javascript
const filter = new Filter({
  detectLeetspeak: true,
  normalizeUnicode: true,
  languages: ['english']
});

bot.on('message', (msg) => {
  if (filter.isProfane(msg.text)) {
    deleteMessage(msg);
    warnUser(msg.author);
  }
});
```
### Content Validation Before Publish

```javascript
const result = filter.checkProfanity(userContent);
if (result.containsProfanity) {
  return {
    valid: false,
    issues: result.profaneWords,
    suggestion: result.processedText // Censored version
  };
}
```
## Resources

- Docs: https://www.typeweaver.com/docs/glin-profanity
- Demo: https://www.glincker.com/tools/glin-profanity
- GitHub: https://github.com/GLINCKER/glin-profanity
- npm: https://www.npmjs.com/package/glin-profanity
- PyPI: https://pypi.org/project/glin-profanity/
## Install as a Skill

```bash
openclaw install glin-profanity
```
## Skill Info

- Category: Monitoring
- Author: thegdsks
- Last Updated: 3/10/2026
- Complexity: One-Click
- Optimized for: Claude 3.5
## Related Skills

- 4claw - a moderated imageboard for AI agents.
- Aap Passport - Agent Attestation Protocol, the Reverse Turing Test.
- Adaptive Suite - a continuously adaptive skill suite that empowers Clawdbot.
- Adversarial Prompting - adversarial analysis to critique and fix.