KnowBe4 - Researchers uncover surprising method to hack the guardrails of LLMs
Researchers from Carnegie Mellon University and the Center for AI Safety have discovered a surprising new way to bypass the guardrails of large language models (LLMs): appending automatically generated adversarial suffixes of seemingly random characters to otherwise blocked prompts. Guardrails are safety measures designed to prevent an AI model from generating harmful content.

from KnowBe4 Security Awareness Training Blog https://blog.knowbe4.com/researchers-uncover-surprising-method-to-hack-the-guardrails-of-llms