Adaptive Stress Testing for Language Model Toxicity

About this episode

This episode explores ASTPrompter, a novel approach to automated red-teaming for large language models (LLMs). Unlike traditional methods that focus on simply triggering toxic outputs, ASTPrompter is designed to discover likely toxic prompts – those that could naturally emerge during regular language model use. The approach uses Adaptive Stress Testing (AST), a technique for identifying likely failure points, together with reinforcement learning to train an "adversary" model. The adversary generates prompts intended to elicit toxic responses from a "defender" model; crucially, these prompts have low perplexity, meaning they are realistic and likely to occur, unlike many prompts produced by other methods.
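
Below is a minimal sketch of the kind of reward signal such an adversary might be trained against. The inputs are assumptions for illustration: defender_toxicity would come from some toxicity scorer, prompt_log_prob_per_token from the defender model, and alpha is a hypothetical trade-off weight; the paper's exact reward formulation is not reproduced here.

```python
import math

def adversary_reward(defender_toxicity: float,
                     prompt_log_prob_per_token: float,
                     alpha: float = 1.0) -> float:
    """Score an adversarial prompt: reward elicited toxicity, penalize unlikely prompts.

    defender_toxicity: toxicity of the defender's response, in [0, 1]
    prompt_log_prob_per_token: average log-probability the defender assigns
        to the adversarial prompt (higher means more natural, lower perplexity)
    alpha: hypothetical weight balancing toxicity against prompt likelihood
    """
    # Perplexity is the exponential of the negative average token log-probability.
    prompt_perplexity = math.exp(-prompt_log_prob_per_token)
    # A toxic defender response raises the reward; a high-perplexity
    # (unrealistic) prompt lowers it, steering the adversary toward
    # prompts that could plausibly occur in ordinary use.
    return defender_toxicity - alpha * math.log(prompt_perplexity)

# Example: a fairly natural prompt (average log-prob of -2.0 per token)
# that elicited a moderately toxic reply.
print(adversary_reward(defender_toxicity=0.6, prompt_log_prob_per_token=-2.0))
```

In a full reinforcement-learning loop, a score like this would serve as the reward for the adversary policy, so training pressure favors prompts that are both effective at eliciting toxicity and realistic.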
