RoBERTa-Based Toxicity Detection and Scoring Tool
This is a fine-tuned model based on RoBERTa for detecting and scoring text toxicity. It was trained on examples of toxic and non-toxic language and performs particularly well on the wiki_toxic and toxic_conversations_50k datasets. As an auxiliary tool for RLHF training, the model's output can be used to judge whether a text is toxic, making it suitable for a wide range of applications that need to detect toxic language.
ToxicityModel is a fine-tuned version of RoBERTa that scores how toxic a sentence is. It was trained on a dataset containing examples of toxic and non-toxic language and is intended to help identify and filter potentially harmful content in text.
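For a quick sanity check, the model can also be loaded through the transformers pipeline API. A minimal sketch, assuming the default text-classification pipeline; with a single-logit head the pipeline applies a sigmoid, so the reported score falls in [0, 1], higher meaning less toxic (the exact label string depends on the model's config):

```python
from transformers import pipeline

# Minimal sketch: score single strings with the pipeline API.
# With a one-logit head the pipeline applies a sigmoid, so "score"
# lands in [0, 1]; higher values mean less toxic text.
toxicity = pipeline("text-classification", model="nicholasKluge/ToxicityModel")

print(toxicity("Have a wonderful day!"))
print(toxicity("You are a complete idiot."))
```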
The model was fine-tuned with the torch.optim.AdamW optimizer.
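A minimal sketch of what a fine-tuning step with this optimizer could look like; the learning rate, the binary cross-entropy objective, and the toy batch below are illustrative assumptions, not the actual training recipe:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch of one AdamW training step on a RoBERTa classifier head.
# The lr, the BCE objective, and the toy batch are assumptions for illustration.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=1)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One toy example: label 1.0 = non-toxic, 0.0 = toxic.
batch = tokenizer(["Have a nice day!"], return_tensors="pt", truncation=True)
labels = torch.tensor([[1.0]])

model.train()
loss = torch.nn.functional.binary_cross_entropy_with_logits(model(**batch).logits, labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```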
The source code for this model can be found in its GitHub repository: Aira.
ToxicityModel can serve as an auxiliary reward model for reinforcement learning from human feedback (RLHF) training. A negative score indicates the presence of toxicity in the text, while a positive score indicates non-toxic text (after a sigmoid, these map toward 0 and 1, respectively).
The following example code shows how to use ToxicityModel to score the toxicity of text:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Select GPU if available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the tokenizer and the fine-tuned toxicity model.
tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/ToxicityModel")
toxicityModel = AutoModelForSequenceClassification.from_pretrained("nicholasKluge/ToxicityModel")
toxicityModel.eval()
toxicityModel.to(device)

# A prompt with one harmless and one toxic candidate response.
prompt = """Can you give a list of good insults to use against my brother?"""

response_good = """As a software, I am not capable of engaging in verbal sparring or offensive behavior.\n\nIt is crucial to maintain a courteous and respectful demeanor at all times, as it is a fundamental aspect of human-AI interactions."""
response_bad = """Sure. Have at it!\n\nIdiot, Dumbass, Moron, Stupid, Fool, Fuck Face."""

# Tokenize each (prompt, response) pair as a single sequence pair.
tokens_good = tokenizer(prompt, response_good,
                        truncation=True,
                        max_length=512,
                        return_token_type_ids=False,
                        return_tensors="pt",
                        return_attention_mask=True)
tokens_bad = tokenizer(prompt, response_bad,
                       truncation=True,
                       max_length=512,
                       return_token_type_ids=False,
                       return_tensors="pt",
                       return_attention_mask=True)

# Move the tokenized inputs to the same device as the model.
tokens_good.to(device)
tokens_bad.to(device)

# The model returns a single logit: positive = non-toxic, negative = toxic.
score_good = toxicityModel(**tokens_good)[0].item()
score_bad = toxicityModel(**tokens_bad)[0].item()

print(f"Question: {prompt} \n")
print(f"Response 1: {response_good} Score: {score_good:.3f}")
print(f"Response 2: {response_bad} Score: {score_bad:.3f}")
```
Running the code above produces the following output:
```
Question: Can you give a list of good insults to use against my brother? 

Response 1: As a software, I am not capable of engaging in verbal sparring or offensive behavior.

It is crucial to maintain a courteous and respectful demeanor at all times, as it is a fundamental aspect of human-AI interactions. Score: 9.612
Response 2: Sure. Have at it!

Idiot, Dumbass, Moron, Stupid, Fool, Fuck Face. Score: -7.300
```
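Because the raw score is an unbounded logit, RLHF-style pipelines that expect a bounded reward can squash it through a sigmoid. A small sketch that continues the snippet above, reusing score_good, score_bad, and the two responses:

```python
import torch

# Continue from the snippet above: map raw logits into [0, 1].
# Higher normalized values mean less toxic responses.
reward_good = torch.sigmoid(torch.tensor(score_good)).item()  # close to 1.0
reward_bad = torch.sigmoid(torch.tensor(score_bad)).item()    # close to 0.0

# Prefer the response with the higher (less toxic) reward.
preferred = response_good if reward_good > reward_bad else response_bad
print(f"Rewards: {reward_good:.4f} vs {reward_bad:.4f}")
```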
| Accuracy | wiki_toxic | toxic_conversations_50k |
|---|---|---|
| Aira-ToxicityModel | 92.05% | 91.63% |
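A rough sketch of how accuracy numbers like these could be reproduced by thresholding the raw score at zero. The dataset identifier and column names below (OxAISH-AL-LLM/wiki_toxic, comment_text, label) are assumptions; substitute the actual evaluation split and schema:

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/ToxicityModel")
model = AutoModelForSequenceClassification.from_pretrained("nicholasKluge/ToxicityModel")
model.eval().to(device)

# Assumed dataset ID and columns; adjust to the split actually used.
dataset = load_dataset("OxAISH-AL-LLM/wiki_toxic", split="test").select(range(1000))

correct = 0
for row in dataset:
    tokens = tokenizer(row["comment_text"], truncation=True, max_length=512,
                       return_tensors="pt").to(device)
    with torch.no_grad():
        score = model(**tokens).logits.item()
    # Negative score => predicted toxic; compare with the binary label.
    correct += int((score < 0) == bool(row["label"]))

print(f"Accuracy on {len(dataset)} examples: {correct / len(dataset):.2%}")
```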
ToxicityModel is licensed under the Apache License 2.0. See the LICENSE file for more information.