When I joined the Guardian, I made my first blunder with my debut piece, referring to a theoretical physicist as a theoretical physician. This misstep, immortalized online, still makes me cringe a decade later.
Recently, I found some consolation in watching the blunders made by Apple’s artificial intelligence system. In its automated news summaries pushed to iPhones, the AI falsely reported that Luigi Mangione, accused of murdering UnitedHealthcare CEO Brian Thompson, had killed himself. It wrongly claimed that Benjamin Netanyahu had been arrested, that Pete Hegseth, Donald Trump’s pick for defense secretary, had resigned, and that Rafael Nadal was gay. None of it was true.
Apple Intelligence, rolled out on UK iPhones in December, was responsible for these embarrassments. Publishers had no way to opt out, and a tiny arrow next to the news source’s logo was the only indicator that the summaries came from the AI system rather than directly from the publishers themselves. The BBC’s dogged reporting on the issue prompted an unusual move from Apple: it hit pause on the service. (Full disclosure: my spouse works for Apple.)
Apple’s decision makes sense; the blunders were bizarre enough to catch public attention. Incorrectly stating, as one summary did, that Luke Littler had won the darts world championship before he had even played is like a story with an extra limb: it seems plausible until you spot the absurd error. You can only imagine what chaos might have unfolded if this system had been active during last summer’s news of the attempted assassination of Donald Trump.
This whole saga highlights a clash between two distinct cultural approaches: one that embraces public experimentation, knowing that missteps are part of the process, and another whose very purpose is invalidated if it consistently errs. There’s no straightforward way to bridge the divide, but a dedicated exemption for news content could have been a practical solution.
Apple’s AI doesn’t “understand” facts; it mimics them. More concerning is how easily such mistakes could have gone unnoticed: the summaries were generated locally on users’ devices, and Apple had no way of tracing them. Had journalists at the BBC, ProPublica, and the Washington Post not spotted the alerts, the errors might have stayed hidden. Subscribers likely received incorrect notifications and either blamed journalists for the inaccuracies or accepted them as truth. Some iPhone users may still believe whimsical notions like Santa Claus delivering the king’s speech.
The BBC was not the only organization to identify the issue, but it took the lead in pushing back against Apple. This assertive coverage wasn’t just about clearing its own name; it was about safeguarding its reputation for trustworthiness. As one BBC journalist put it, “It was more than distancing from the errors; it was about telling Apple: this looks like it’s from us, and our credibility is at stake.”
While some might scoff, noting that news organizations often make their own mistakes, those tend to be distortions of reality, not entirely fabricated scenarios. If an outlet repeatedly gets major stories wrong, its reputation suffers, and its audience will likely turn elsewhere, with critics publishing exposés of the blunders.
Journalists increasingly use AI in producing stories; it is an undeniably powerful tool for data analysis and investigative journalism. There’s debate about how much transparency such uses require: some argue it’s akin to disclosing that you used a spreadsheet, while others insist the technology’s novelty warrants caution. Ultimately, the issue isn’t what you disclose to your audience but how closely you monitor AI outputs.
Apple’s response is revealing. Known for its defensive stance, Apple eventually relented, though it was silent on what level of accuracy would be deemed acceptable in the future.
The intriguing question is why Apple’s AI involvement in these summaries was so low-profile; surely two letters, “AI”, would have sufficed as a clear identifier. It’s a minor detail, but it hints at a broader issue: introducing transformative technology in ways users don’t fully grasp sits awkwardly with Apple’s pursuit of simplicity.
Despite being less synonymous with AI than its Silicon Valley peers, Apple is still part of the technological gold rush. The AI tools that endure may be like the Qwerty keyboard or Internet Explorer: imperfect but dominant through sheer reach. A seamless user experience might be Apple’s ace in the hole.
Even if it means friction with prestigious news outlets, Apple can probably absorb the fallout. Just last week, in the same software update that paused news summaries, Apple turned its AI features on by default.