Tolerance.ca
Director / Editor: Victor Teboul, Ph.D.

How we tricked AI chatbots into creating misinformation, despite ‘safety’ measures

By Lin Tian, Research Fellow, Data Science Institute, University of Technology Sydney
Marian-Andrei Rizoiu, Associate Professor in Behavioral Data Science, University of Technology Sydney
When you ask ChatGPT or other AI assistants to help create misinformation, they typically refuse, with responses like “I cannot assist with creating false information.” But our tests show these safety measures are surprisingly shallow – often just a few words deep – making them alarmingly easy to circumvent.

We have been investigating how AI language models can be manipulated to generate coordinated disinformation campaigns across social media platforms. What we found should concern anyone worried about the integrity of online information.

The shallow safety problem



© The Conversation

