We have ceded more terrain to the robots. A ToolTester survey found that 53% of people surveyed in early 2023 incorrectly identified AI-generated writing as human-made, a figure that rose to 63.5% when the technology was more advanced (GPT-4). So, not only will robots be able to hunt us down with lasers, but they’ll be able to write passable poetry as they do so. Very bleak indeed.
If you wish to zoom in on the methodology, let me offer the below recap from PC Magazine, which I assume is a large, noisy media company that constantly overheats:
[ToolTester] performed two surveys—the first in late February 2023 of 1,920 American adults that compared 75 pieces of text either generated by AI, by humans, or by AIs and edited by humans. The AI used was ChatGPT powered by GPT-3.5, but after the launch of GPT-4, ToolTester surveyed another 1,394 people in late March with the same queries and topics, but with new AI-generated copy using the same prompts.
Over half the respondents thought ChatGPT-3.5’s copy was written by a human. That number rose to 63.5% using GPT-4. The results show that GPT-4 (used in the pay version of ChatGPT) is at least 16.5% more convincing than copy created with the older GPT-3.5.
Zoom in a little on the findings, and it turns out the Boomers among us are better than the younger generations at picking the synthetic writing from the organic: while 52% of Boomers could correctly pick out AI writing, only 40% of 18-20-year-old Gen Zers could. People were most easily fooled by AI-generated health writing, one assumes due to the technical nature of both the source material and the writing itself.
To that point, I’d like to see GPT-4 attempt an infinite scroll, or a witty PowerPoint narrative, or SEVEN BOOKS about a human being a human.