Melissa Bell, CEO of the parent company of the Chicago Sun-Times, which on May 18, 2025, published a special section filled with errors and falsehoods created by artificial intelligence. “Part of the journalistic process is a commitment to acknowledging mistakes,” she wrote later. Bell appeared at a journalism conference about a month before the section was published. (International Journalism Festival/YouTube)

Chicago Sun-Times Learns Artificial Intelligence the Hard Way

A special feature section filled with made-up content offers lessons for all companies on how not to become a symbol of technology gone awry

When the chief executive of the parent company of the Chicago Sun-Times was alerted that a special section contained made-up content, Melissa Bell thought it was “an AI-generated joke.”

Turns out, the joke was on Bell, and it’s one that threatens the credibility of the 77-year-old newspaper.

Readers had discovered that a 64-page guide on summertime activities, published by the newspaper on Sunday, May 18, 2025, included a list of books filled with errors and fabrications.

Called “Heat Index,” the guide was written by a freelance writer working for a well-known content distributor, which in turn sold the section to the Sun-Times. The writer used an artificial intelligence app to generate the book list but didn’t check the results. The fiasco quickly became a national news story.

“Even though it wasn’t our actual work, the Sun-Times became the poster child of ‘What could go wrong with AI?’” Bell wrote in an apology published May 29.

Bell takes away several lessons from the episode. We’ll add three of our own so that other companies, no matter their industry, can reduce the chances of becoming another symbol of technology gone awry.

Bell has been CEO of Chicago Public Media, a nonprofit that also operates the city’s public radio station, since September 2024. Public Media acquired the money-losing newspaper in January 2022.

The Sun-Times has long been the scrappy tabloid underdog to the larger Chicago Tribune. It has a paid Sunday circulation of 51,000. In October 2022, it dropped the paywall for its website, citing its nonprofit mission.

Bell, in her mid-40s, is a former digital strategist with The Washington Post. She cofounded the news site Vox in 2014 and quickly became publisher of Vox’s parent company, where she played a key role in its growth. She resigned from Vox Media in 2023.

CEOs and their communications advisors would do well to study Bell’s response to the crisis. On the day the newspaper learned of the problems, it issued a holding statement on social media, published a story by the editorial staff, and released a statement that carried Bell’s byline.

Nine days later, the editorial staff reported on additional errors in the special section. Bell also published her apology, which was remarkable for its transparency as well as its length: about 2,430 words.

Our three lessons from the debacle touch on crisis communications as well as artificial intelligence.

1. Have an AI policy. The news service Associated Press announced its AI policy in August 2023. Yet it was not until earlier this year that Public Media circulated a first draft of its own AI policy to its staff, Bell admitted in her apology. The policy has not been finalized.

About one-third of employers have no AI policy, according to a survey of 2,000 C-suite execs worldwide by HR services provider Adecco Group released in May.

We’ve suggested 12 questions to answer in crafting a generative AI policy, but disclosure and verification of the results are key. Having a policy should reduce the chances that AI fabrications are published but won’t eliminate them.

The Heat Index was purchased from King Features, which has “a strict policy with our staff, cartoonists, columnists, and freelance writers against the use of AI to create content,” King Features said in a statement to the Sun-Times.

The freelance writer who wrote the section did not disclose his use of AI, the company said.

Reached by telephone, C.J. Kettler, CEO of King Features, declined to comment. A spokeswoman declined to answer written questions about the policy.

2. Apply the policy to freelancers. Even if the Sun-Times had an AI policy for staff, it wouldn’t have applied to the Heat Index, which was produced by a third party.

The special section was published by the newspaper’s circulation department, which struck a deal with King Features in a bid to save money. The arrangement began before Bell came on board. She says her quick approval of the deal is one of five human mistakes that contributed to the mess.

Was that the error? Companies of all sorts increasingly use independent contractors and outside firms such as public relations agencies to produce content.

The error was actually an assumption underlying the deal. The circulation department “trusted that work licensed from King Features would live up to a level of editorial rigor that matches the standards of Chicago Public Media,” Bell said in her apology.

It wasn’t an unreasonable expectation. King Features syndicates comics and columns as well as packaged content. It is a unit of magazine publisher Hearst, which had 2024 revenue of $13 billion.

But it’s crucial that the terms of AI usage be expressly stated in any contract.

Bell wasn’t available for an interview. A spokesman said in an email:

“Going forward, we’ve changed our editorial policy to ensure that any third-party licensed content 1) clearly states where it comes from, 2) is not presented as if it were created by our newsrooms, and 3) is reviewed by our new Standards team with editors from our newsrooms. Just as we expect freelancers and editorial partners to adhere to ethical journalism practices, the same will apply to all licensed content providers.”

3. Keep one eye on social media. The first hint of a crisis often surfaces on social media.

At dinnertime on Monday, May 19, Tina TBR, a book blogger, posted a photo of the Heat Index’s reading list to Instagram, noting that many of the titles were fake. The Sun-Times missed it.

On Tuesday at 6:04 a.m. Central Time, one of Tina TBR’s followers posted to Bluesky a photo of the reading list, crediting Tina TBR and asking, “What are we coming to?” That post quickly generated hundreds of comments.

At 8:44 a.m., a Public Media executive emailed Bell, writing: “Sounds like some of the content on the purchased summer guide from Hearst was potentially made up.”

Bell thought the photo of the reading list was an AI-generated joke, she said in her apology.

It wasn’t until 9:19 a.m. that the Sun-Times posted a holding statement on Bluesky, defending the newspaper’s editorial department and saying the paper was looking into the problem. The original tipster, the mild-mannered Tina TBR, called the response “lukewarm.”

A reporter from technology news site 404 Media contacted the Sun-Times, but the newspaper didn’t respond. He posted his story at 10:46 a.m.: “Chicago Sun-Times Prints AI-Generated Summer Reading List with Books that Don’t Exist.”

Later that day, the Sun-Times published its own story and a statement by Bell.

Not every negative social media post requires a response, but every post requires an evaluation, based on guidelines, of whether a response is needed. In the hours after that first post, the Sun-Times was outworked by the flock on social media and the 404 Media reporter.

What’s ahead?
In her apology, Bell is adamant about the future of artificial intelligence at the newspaper and its sibling radio station.

“Chicago Public Media will not back away from experimenting and learning how to properly use AI,” she wrote in her apology. “We will not be using AI agents to write our stories, but we will work to find ways to use AI technology to help our work and serve our audiences.”

A humbler tone would have been appropriate, especially since Bell’s apology contained an error that required a correction. (She got wrong the comics the Sun-Times buys from King Features.)

For a different perspective, look to Zach Seward, the editorial director of AI initiatives at The New York Times.

Artificial intelligence is “a powerful tool when combined with traditional reporting and coding expertise,” he wrote last month in the Columbia Journalism Review. “But that, I think, explains the disconnect. AI on its own is a parlor trick. Like all software, it’s useful when paired with properly structured data and someone who knows what they’re doing.”

Tom Corfman is a senior consultant with Ragan Consulting Group who says: “To err is human. Unless it was AI.”

RCG can help you organize your communications team to become effective news producers by reviewing your operation and helping create editorial guidelines, including the use of artificial intelligence.

Contact our client team to learn more about how we can help you with your communications. Follow RCG on LinkedIn and subscribe to our weekly newsletter here.
