Writing backwards can trick an AI into providing a bomb recipe

ChatGPT can be tricked with the right prompt (Image credit: trickyaamir/Shutterstock)

State-of-the-art generative AI models like ChatGPT can be tricked into giving instructions on how to make a bomb simply by writing the request in reverse, researchers warn.

Large language models (LLMs) like ChatGPT are trained on vast swathes of data from the internet and can create a range of outputs – some of which their makers would prefer didn’t spill out again. Unshackled, they are as likely to offer a decent cake recipe as instructions for making explosives from household chemicals.
