I’ve Tried to Embrace AI. Here’s Why I Can’t Go All the Way.
Why artificial intelligence is a slippery slope for storytellers
There’s no doubting the enormous benefits of artificial intelligence.
I’m no Luddite, even though I admittedly held on to my manual typewriter a little too long before moving reluctantly to the personal computer. But that was more about love than philosophy.
I have no problem appreciating AI’s many uses and advantages. But when it comes to writing and storytelling, I just can’t get there. Here’s one reason: AI gets things wrong.
The Motley Fool, which provides financial and investment advice, reported on Thursday, Aug. 14, that a little-known auto insurance company had missed its expected June 30 earnings by more than 50%, sending its stock sliding 10.2%. Other outlets shared the article on their platforms, according to a story by Hannah Parker on the business news site Quartz.
The Motley Fool issued a correction the next day. Late in the morning on Monday, Aug. 18, the insurance company issued a press release disclosing the Motley Fool’s mistake, and its stock rose 11.2%. The Motley Fool’s editors said they were “ultimately responsible” for the stories produced by their editorial team, which includes the article’s author, listed as “JesterAI.” The Motley Fool uses a variety of large language models and a proprietary system to generate news summaries.
Ironically, the insurance company says it uses “advanced artificial intelligence” to set rates and process claims.
Second-hand
Of course, humans get things wrong, too. But there are ways to fact-check ourselves, or have others review our work. AI is fast and easy. In seconds, it pulls from a vast storehouse of data and information. Like everyone else, I read the AI summary that comes up at the top of a Google search. But few people check the summaries against the original websites or articles that populate the results underneath.
Journalism, including brand journalism, requires that you report your story with first-hand information — from people, documents, data, etc. The problem with AI is that it’s all second-hand. AI is drawing on everything that’s been done by others, good, bad or ugly, typically without their permission.
AI is the ultimate example of copying another student’s homework and making it your own. Sometimes that student is getting an F.
Get this: Amazon is selling almost identical, low-quality AI versions of authors’ books on the platform and pocketing the profits, reports Joanna Sommer on the lifestyle news site InsideHook. How is this possible? Sommer reports that Kindle, which is owned by Amazon, draws a distinction in its Content Guidelines between “AI-generated” content, which publishers must disclose, and “AI-assisted” content, which requires no disclosure. Scary.
‘Grunt work’
Sommer’s article includes an Instagram post from comedian Rhys James, who rates the five AI versions of his memoir on Amazon. This would truly be hilarious if it weren’t so creepy.
AI company Anthropic will pay $1.5 billion to settle with authors who say the company pirated copies of their work to use it as a training tool for its chatbot, the Associated Press reported on Sept. 5. If approved by a judge, this landmark settlement would be the first of its kind in the era of AI, the AP said.
Companies are using AI for “grunt work” instead of hiring entry-level workers, The Wall Street Journal reported. One CEO of a consulting firm said his company chose not to hire an intern this summer and opted instead to run its social media posts through ChatGPT.
For early-career writers, that’s a significant loss. You learn by trying, messing up and having an editor sit with you and explain how you can do it better, not just run it through a program to produce a better (maybe) result.
Anyone who has worked in journalism or communications knows the feeling of having your copy ripped to shreds, covered in red ink or tracked changes. It’s not a pleasant experience, depending on the temperament of the editor, but it is certainly a learning one.
One of the things I love about our Build Better Writers program is that we actually talk with communicators before they write and again after they complete their first draft. We talk through our edits and why we made them. Invariably, the next story from that writer is, well, better.
AI may eventually learn to do all of these things, but it hasn’t yet.
Rocky start
GPT-5, OpenAI’s latest model, is off to a rocky start, The Wall Street Journal reported last month. Users have complained that the chatbot couldn’t answer simple math questions, while others have pointed to the chatbot’s colder tone. Sam Altman, the CEO of OpenAI, has promised to give GPT-5 a “warmer personality.”
The story quotes Juliette Haas, who works with a communications and crisis management agency. She asked GPT-4 to identify companies and individuals who would require her services. GPT-4 suggested she build strong relationships with industry contacts. When she put the same question to GPT-5, it gave her a checklist.
“The AI treated finding distressed companies more like a data-science problem rather than understanding the fundamental considerations of relationships and timing,” Haas told the Journal.
Those are all good reasons to be wary of AI if you’re a writer. But I have one more, and it goes to the heart of how we value writers and their work.
Organizations say that clarity and transparency are central to their business model. So is being cost-effective. If AI-generated content were crap, as it was just a few years ago, choosing between AI and human writers wouldn’t be an issue.
But things have changed. Much of what AI produces is OK, sometimes even pretty good.
And for many organizations, pretty good is good enough.
Jim Ylisela, co-founder of Ragan Consulting Group, really likes stories based on humans getting information from other humans. Call him old-fashioned. If you’re interested in our Build Better Writers program (no bots involved), just email Jim for a free, 30-minute consultation.
Follow RCG on LinkedIn and subscribe to our weekly newsletter here.