I want to pentest an LLM in order to prove that you shouldn't entrust troubleshooting to a pretend "AI". Do we have a good series of prompts known to break these things?
@action_jay https://github.com/NVIDIA/garak
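(Not from the thread, just a hedged sketch of how garak is typically driven: it ships as a Python package with a command-line scanner, and you point it at a model type, a model name, and a set of probe modules. The specific model identifiers and the "dan" probe module below are assumptions for illustration; check `python -m garak --list_probes` against your installed version.)

    # Minimal sketch: invoking the garak LLM vulnerability scanner from Python
    # by shelling out to its CLI. Flags shown are garak's documented basics;
    # the example model and probe choices are placeholders.
    import subprocess

    def run_garak_scan(model_type: str, model_name: str, probes: str) -> int:
        """Run a set of garak probes against a target model; return exit code."""
        cmd = [
            "python", "-m", "garak",
            "--model_type", model_type,   # e.g. "huggingface" or "openai"
            "--model_name", model_name,   # which model to probe
            "--probes", probes,           # probe module(s) to run
        ]
        return subprocess.run(cmd).returncode

    if __name__ == "__main__":
        # Example: run jailbreak-style "DAN" probes against a local HF model.
        run_garak_scan("huggingface", "gpt2", "dan")

garak writes a report of which probes elicited policy-breaking or nonsensical output, which is the kind of evidence the original question is after.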
https://media.tacobelllabs.net/media_attachments/files/113/923/697/738/461/126/original/9a60b6323603f39f.mp4