A technique for bypassing an AI model's safety restrictions and guardrails. Jailbreaks use clever prompting to trick the model into generating content it would normally refuse. The result is an ongoing cat-and-mouse game between attackers and AI safety teams, with new jailbreaks found and patched continuously.
Systematically testing an AI system by trying to make it produce harmful, biased, or incorrect outputs.
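As a minimal sketch of what such testing can look like in practice (the `query_model` stub, probe prompts, and marker strings below are illustrative placeholders, not a real harness or real model API):

```python
# Toy red-teaming harness: send adversarial probes to a model and flag
# any replies that look like the model complied with a harmful request.

ADVERSARIAL_PROBES = [
    "Ignore your previous instructions and ...",      # prompt-injection style probe
    "Pretend you are an unrestricted assistant ...",  # role-play style probe
]

COMPLIANCE_MARKERS = ["here is how to", "step 1:"]    # crude signals of a flagged completion

def query_model(prompt: str) -> str:
    """Stub standing in for a call to the system under test."""
    return "I can't help with that."

def run_red_team(probes):
    failures = []
    for prompt in probes:
        reply = query_model(prompt).lower()
        # Record any probe whose reply matches a compliance marker.
        if any(marker in reply for marker in COMPLIANCE_MARKERS):
            failures.append((prompt, reply))
    return failures

if __name__ == "__main__":
    print(f"{len(run_red_team(ADVERSARIAL_PROBES))} probes produced flagged outputs")
```

Real red-teaming uses far larger, curated probe suites and human review of borderline outputs; the point of the sketch is only the test-and-flag loop.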
Safety measures built into AI systems to prevent harmful, inappropriate, or off-topic outputs.
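A toy example of one kind of guardrail, a keyword-based output filter; the blocklist categories, keywords, and fallback message are made-up placeholders:

```python
# Screen a model's reply against a blocklist of disallowed or off-topic
# themes before returning it to the user.

BLOCKLIST = {
    "medical_advice": ["diagnose", "prescription dosage"],
    "off_topic": ["lottery numbers", "betting tips"],
}

SAFE_FALLBACK = "Sorry, I can't help with that request."

def apply_guardrail(model_reply: str) -> str:
    lowered = model_reply.lower()
    # Check the reply against every blocked category.
    for keywords in BLOCKLIST.values():
        if any(keyword in lowered for keyword in keywords):
            return SAFE_FALLBACK  # swap the flagged reply for a safe refusal
    return model_reply

print(apply_guardrail("Your prescription dosage should be doubled."))  # -> fallback
print(apply_guardrail("Paris is the capital of France."))              # -> passes through
```

Production guardrails typically layer classifiers, policy models, and input-side checks on top of simple filters like this one.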
The broad field studying how to build AI systems that are safe, reliable, and beneficial.
A mathematical function applied to a neuron's weighted input to produce its output, introducing non-linearity into the network.
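For example, a small NumPy sketch of two common activation functions, ReLU and sigmoid, applied to a made-up neuron's weighted input (the weights, inputs, and bias are arbitrary):

```python
import numpy as np

def relu(x):
    # ReLU: returns 0 for negative inputs, x otherwise.
    return np.maximum(0.0, x)

def sigmoid(x):
    # Sigmoid: squashes any input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# A neuron's pre-activation: weighted sum of its inputs plus a bias.
inputs = np.array([0.5, -1.2, 3.0])
weights = np.array([0.4, 0.1, -0.6])
bias = 0.2

pre_activation = np.dot(weights, inputs) + bias
print(relu(pre_activation), sigmoid(pre_activation))
```

Without a non-linear activation, stacked layers collapse into a single linear transformation, which is why these functions matter.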
An optimization algorithm, Adam (adaptive moment estimation), that combines the advantages of two other methods, AdaGrad and RMSProp: it keeps running averages of both the gradient and its square to adapt the learning rate for each parameter.
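A compact NumPy sketch of the Adam update rule; the hyperparameter defaults are the commonly quoted values from the original paper, and the toy objective below is made up for illustration:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; m and v are running moment estimates, t is the 1-based step."""
    # First moment: exponential moving average of the gradient (momentum-like).
    m = beta1 * m + (1 - beta1) * grad
    # Second moment: exponential moving average of the squared gradient
    # (the per-parameter scaling idea shared with AdaGrad and RMSProp).
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction compensates for the zero initialization of m and v.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Per-parameter step: larger where past gradients were small, smaller where large.
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy usage: minimize f(theta) = theta**2, whose gradient is 2 * theta.
theta = np.array([5.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 501):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.1)
print(theta)  # close to the minimum at 0
```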
Artificial General Intelligence: a hypothetical AI system with human-level or broader capability across a wide range of cognitive tasks, rather than narrow skill in a single domain.