Systematically testing an AI system by trying to make it produce harmful, biased, or incorrect outputs.
Systematically testing an AI system by trying to make it produce harmful, biased, or incorrect outputs. Red teams attempt to jailbreak models, find safety gaps, and identify failure modes. A critical part of responsible AI deployment. Companies like Anthropic and OpenAI run extensive red team programs.
The broad field studying how to build AI systems that are safe, reliable, and beneficial.
Safety measures built into AI systems to prevent harmful, inappropriate, or off-topic outputs.
A technique for bypassing an AI model's safety restrictions and guardrails.
A mathematical function applied to a neuron's output that introduces non-linearity into the network.
An optimization algorithm that combines the best parts of two other methods — AdaGrad and RMSProp.
Artificial General Intelligence.
Browse our complete glossary or subscribe to our newsletter for the latest AI news and insights.