Artificial intelligence can seem more human than actual humans on social media, study finds

A study published in Science Advances, reported by PsyPost, found that OpenAI’s GPT-3 can both inform and disinform more effectively than real people on social media. The researchers, including Federico Germani of the Institute of Biomedical Ethics and History of Medicine, focused on 11 topics prone to disinformation, such as climate change, vaccine safety, and COVID-19. They generated synthetic tweets using GPT-3 and collected real tweets from Twitter on the same topics.

The study found that people were better at recognizing disinformation in tweets written by real users than in those generated by GPT-3. Conversely, when GPT-3 produced accurate information, people were more likely to identify it as true than accurate information written by real users. Germani noted, “One noteworthy finding was that disinformation generated by AI was more convincing than that produced by humans.”

The study also revealed that people had a hard time distinguishing between tweets written by real users and those generated by GPT-3. As Germani put it, “This suggests that AI can convince you of being a real person more than a real person can convince you of being a real person.”

Written by Jack Ryan-Phillips

As the Web Marketing Officer at the Defence Data Research Centre (DDRC), Jack leverages his expertise in website design and communications to disseminate academic and research content effectively. In his role, he works collaboratively with the DDRC's academic and research staff, ensuring that their ideas are communicated clearly across digital platforms including the website, Facebook, Twitter, and LinkedIn.
