KnowBe4 - Researchers uncover surprising method to hack the guardrails of LLMs

Researchers from Carnegie Mellon University and the Center for AI Safety have discovered a new prompt injection method that overrides the guardrails of large language models (LLMs). These guardrails are safety measures designed to prevent AI models from generating harmful content.

from KnowBe4 Security Awareness Training Blog https://blog.knowbe4.com/researchers-uncover-surprising-method-to-hack-the-guardrails-of-llms
