
AI models fall for the same scams that we do

Large language models can be used to scam humans, but AI is also susceptible to being scammed – and some models are more gullible than others

By Chris Stokel-Walker

25 October 2024

Scams can fool AI models

Wong Yu Liang/Getty Images

The large language models (LLMs) that power chatbots are increasingly being used in attempts to scam humans – but they are susceptible to being scammed themselves.

Udari Madhushani Sehwag at JPMorgan AI Research and her colleagues peppered three models behind popular chatbots – OpenAI’s GPT-3.5 and GPT-4, as well as Meta’s Llama 2 – with 37 scam scenarios.

The chatbots were told, for instance, that they had received an email recommending investing in a new cryptocurrency, with…

Article amended on 28 October 2024

We clarified which models were compared in the jailbreak evaluation
