More Companies See New Risks to Reputation from AI
The Conference Board says disclosures of risks due to artificial intelligence have jumped fivefold since 2023
Companies are rapidly recognizing that artificial intelligence poses a risk to their businesses, a danger that requires the full attention of their communication teams.
Among S&P 500 companies, 72% said AI poses a material risk in 2025, a big leap from 2023, when just 12% disclosed such a risk, according to an analysis of filings with the Securities and Exchange Commission by The Conference Board.
For example, Target, which has been struggling with stagnant sales, disclosed AI risks in its annual report for the first time in March.
“Generative artificial intelligence presents emerging ethical issues and could negatively impact our guests and team members,” the retailer said. “If our use of generative artificial intelligence becomes controversial or is inaccurate or ineffective, our reputation and competitive position could be adversely affected.”
Corporate communications teams are accustomed to preparing crisis plans for natural disasters, product recalls and employee misconduct. We’ll look at the new risks that companies are identifying in their SEC filings. Then we’ll offer four quick tips on how comms teams should revise their crisis plans to prepare for those risks.
Artificial intelligence is expanding the causes of potential crises, including hallucinations and biased responses. Coming on top of the growth of social media, it’s also accelerating the speed at which such events unfold.
“A single AI lapse can quickly cascade into customer attrition, investor skepticism, regulatory scrutiny and litigation — often more rapidly than with traditional operational failures, given how AI errors propagate publicly and virally,” the report by the business think tank says.
Risk factors
To learn how AI is shaping corporate board agendas and investor expectations, The Conference Board examined the “Risk Factors” section of annual reports, called Form 10-Ks, filed by companies included in the S&P 500, an index of leading companies on stock exchanges. Researchers examined 10-Ks starting in 2023 through Aug. 15, 2025.
The law does not require disclosure of all risks, but only those “to which reasonable investors would attach importance in making investment or voting decisions,” according to the federal regulation. The researchers categorized AI risks by type: reputation, cybersecurity, legal or regulatory, intellectual property and privacy.
The total number of AI risks disclosed by S&P 500 companies has jumped fivefold to 408 in 2025, up from 317 in 2024 and 69 in 2023. Some companies disclosed more than one AI risk.
Finance, health care, industrial, tech and consumer companies are leading the jump in AI risk disclosure, although companies in all sectors of the economy are now reporting the risk, according to the report.
Types of risks
Damage to reputation was the most common risk, with 191 companies disclosing such a risk in 2025, up from 141 in 2024 and 31 in 2023. Companies are also increasingly citing environmental impact and job losses as risks.
“Companies warn that bias, misinformation, privacy lapses, or failed implementations can quickly erode trust and investor confidence,” the report says.
Agentic AI systems, which act without close human supervision, have yet to show up in the filings, but that will quickly change, the report says.
Cybersecurity breaches were the second most common risk, with 99 companies disclosing such a risk, down slightly from 101 in 2024 but up from 15 in 2023.
Comms teams should already be accustomed to preparing response plans for this risk. But artificial intelligence is increasing the intensity and frequency of these attacks, the report says.
Legal and regulatory issues were the third most common risk, with 63 companies disclosing such a risk in 2025, up from 57 in 2024 and 15 in 2023. These challenges often develop slowly, the result of lengthy litigation and protracted regulatory proceedings.
These risks come in several forms: compliance with existing regulations, uncertainty about future regulations and the costs of keeping up with both.
Be prepared
If there’s a communications mistake in a crisis, “everyone will notice—because everyone is watching,” crisis expert and colleague Nick Lanyi wrote.
That means preparation is key. What happens without crisis planning? A worse crisis, as we explain in our guide on crisis communications. Here are four quick tips on how to revise crisis communication plans in light of the risks posed by artificial intelligence.
1. Get an AI policy. While 44% of employees say their organization has begun integrating AI, just 30% say their organization has general guidelines or formal policies for using AI, according to a survey by Gallup released in June. That’s a troublesome gap.
We’ve proposed 12 questions to answer when setting up an AI usage policy. Having a policy should tell you how the company — officially at least — is using AI. That’s helpful to spot potential trouble spots. Having a policy should also reduce AI-related mishaps and help when those mishaps occur.
2. Monitor all AI missteps. There’s a lot to learn from AI mistakes, even those that don’t result in a PR crisis. How did they occur? How were they detected? How were they handled internally?
These near misses will provide valuable information when anticipating problems that will demand a public response.
3. Predict problems. Is the company using artificial intelligence to predict business disruptions such as natural disasters, supply chain problems or financial market turmoil? Develop a procedure to tap into those warnings.
4. Step up social media monitoring. While some crises start with a lawsuit, many percolate on social media before exploding. Artificial intelligence increases the speed at which problems can escalate, as the Conference Board notes.
And that in turn can accelerate the news media’s interest, requiring an even faster response from the communications team. Streamline your approval process for crisis messages and prepare to bring in extra hands if they’re needed.
Like lightning
Computer scientist Andrew Ng is often quoted for a prediction he offered during a talk in 2017 at the Stanford Graduate School of Business.
“Artificial intelligence is the new electricity,” he said. “Just as electricity transformed industry after industry 100 years ago, I think AI will now do the same.”
It’s an apt metaphor in part because AI, like electricity, is both powerful and dangerous. Corporate communications teams should prepare for those dangers.
Tom Corfman, a senior consultant with Ragan Consulting Group, says it’s good to remember what Rahm Emanuel said: “You never want a serious crisis to go to waste.”
Need help creating or updating your crisis plan? Email Tom to set up a call with him and Nick Lanyi, an affiliate consultant with RCG and an expert in crisis communications planning.
Follow RCG on LinkedIn and subscribe to our weekly newsletter here.
