Skip to content

Meta Launches LlamaFirewall Framework to Stop AI Jailbreaks, Injections, and Insecure Code

  • by

​Meta on Tuesday announced LlamaFirewall, an open-source framework designed to secure artificial intelligence (AI) systems against emerging cyber risks such as prompt injection, jailbreaks, and insecure code, among others.
The framework, the company said, incorporates three guardrails, including PromptGuard 2, Agent Alignment Checks, and CodeShield.
PromptGuard 2 is designed to detect direct Meta on Tuesday announced LlamaFirewall, an open-source framework designed to secure artificial intelligence (AI) systems against emerging cyber risks such as prompt injection, jailbreaks, and insecure code, among others.
The framework, the company said, incorporates three guardrails, including PromptGuard 2, Agent Alignment Checks, and CodeShield.
PromptGuard 2 is designed to detect direct  The Hacker News

Leave a Reply

Your email address will not be published. Required fields are marked *