Using generative artificial intelligence without a policy in place is an accident waiting to happen, consultant Tom Corfman writes.

12 Questions to Help Set Your AI Policy

Your organization is already using AI. Do you have guidelines to govern its use — and avoid disaster?

Among public relations pros, 64% say they are already using generative artificial intelligence, but only 22% say their companies have an AI policy, according to a new survey.

Communication teams should be leading in this area. Instead, they are lagging behind.

For communication leaders playing catch-up, we’ve assembled a dozen questions to answer while crafting guidelines. We think the comms team should lead policy creation because it is ideally positioned to blend and balance advice from a company’s legal, information technology and human resources departments. Or at least it should be.

Framed by our questions, here are some guidelines from several governments and news outlets, which are more likely than corporations to publish their procedures:

1. Who’s responsible? Like their counterparts in journalism, professional communicators should be responsible for the content produced with AI.

The city of Tempe, Ariz.’s “Ethical Artificial Intelligence Policy” says: “We will be responsible for the actions and impacts of the AI technology we use, and we will implement strategies to identify, mitigate and rectify any potential harms or unintended consequences resulting from their use.”

You should also make clear that this responsibility is shared by the people writing the prompts and their managers.

2. What are they responsible for? The risk of error posed by generative AI requires that organizations make explicit certain practices that have long been accepted.

“Fact check and review all content generated by AI, especially if it will be used in public communication or decision making,” according to Boston’s “Interim Guidelines for Using Generative AI,” released in May 2023.

Staff should be trained, or retrained, in how to fact-check copy.

Does the content AI produces sound too good to be original? Mandate precautions against improper copying.

In Silicon Valley, the City of San Jose, Calif. says: “Users shall verify the content they use from any Generative AI systems does not infringe any copyright laws. For example, city employees could check the copyright of text-based content with plagiarism software and the copyright of image-based content with reverse Google searches, although neither of these approaches guarantees protection against copyright infringements.”

In news organizations, well before the AI wave, there was a saying among copy editors looking to catch mistakes: “If there’s a doubt, keep it out.”

3. How do we eliminate bias? We often have our hands full eliminating implicit bias in copy we write on our own. AI makes this worse.

When using AI, employees should use the organization’s existing tools and resources to avoid producing content that contains bias and stereotypes, as recommended by Seattle’s “Generative Artificial Intelligence Policy,” issued Nov. 3, 2023. Those resources will vary by organization.

The policy should also include a commitment by leaders to evaluate AI systems for “potential impacts such as discrimination and unintended harms arising from data, human, or algorithmic bias to the extent possible,” as Seattle proposes to do.

4. Who is using AI? Start by holding staff discussions about the use of AI. What works and what doesn’t? You may learn some things that seemingly conflict with existing policies. Other practices may seem questionable. Try not to raise your eyebrows or furrow your brow. You’re gathering information.

Ongoing internal disclosure is important. For small teams, this can be a short conversation before every assignment. Big teams may need a formal procedure.

5. What are they using? Even well-known AI tools may pose risks unacceptable to your Information Technology department, as demonstrated by the U.S. Dept. of Agriculture’s recent decision to ban OpenAI’s ChatGPT, according to a Dec. 20, 2023, report by tech news outlet FedScoop.

Employees are often the first ones to use new tools. Most organizations limit software choices, but the policy should be flexible enough to quickly review employees’ requests for a tool that hasn’t been approved, as Seattle’s policy allows. A quick evaluation without a lot of rigmarole may reduce the use of unauthorized tools.


6. What pieces should AI be used on? As a matter of policy, it’s advisable to define the categories of content where AI can be used and where it should not be. San Jose has created a simple risk matrix to gauge the risks of information breaches and negative impacts from using AI.

7. What’s in an image? In San Jose, image generators such as DALL-E can be used “only for illustrative purposes.”

“If you want a picture of the mayor at City Hall, find an actual picture,” the city’s guidelines insist.

Likewise, newspaper publisher Gannett says the company “does not use AI-generated photo-realistic images in our news coverage.”

The newspaper publishing giant also takes a skeptical view toward AI-generated illustrations.

“There is an extremely high bar for the potential use of AI-generated visual content, requiring full disclosure to the viewer,” according to the “USA Today Network Principles of Ethical Conduct for Newsrooms.”

San Jose adopts a more liberal approach to illustrations, with the condition that “any” use of AI in the creation of a video or image must be disclosed, “even if the images are substantially edited,” according to the city’s “Generative AI Guidelines,” updated Sept. 23, 2023.

8. How is AI being used? “Document how you used the model, the prompts you used, etc. It could be helpful to you and your colleagues to better understand how you can use these technologies better and more safely,” according to the Boston guidelines.

This information is expressly included in the reporting form used by San Jose.

9. What to keep out of AI? Key to generative AI is the prompt: the text, question or information that describes the task for the app or tells it what you’re looking for. Some employees may mistakenly believe that a prompt is not a public disclosure. All guidelines should include a strict prohibition on putting material into prompts that company policy already requires be kept confidential.

Here’s the dilemma: The more specific the prompt, the more helpful AI’s response will be. But more detail increases the risk of improper disclosure. Employees shouldn’t be left to operate in this gray area alone.

Create a procedure for quick conversations between managers and writers about what goes into a prompt.

10. How do we use the output? “Responses generated from generative AI shall not … be used verbatim,” the State of Kansas says in a memo issued July 25, 2023, by its chief technology officer.

This is in line with The Associated Press, which says, “While AP staff may experiment with ChatGPT with caution, they do not use it to create publishable content.”

11. When do we tell the public? What if AI content isn’t published verbatim?

Most readers want news outlets to label AI-generated stories, but they find those news outlets less trustworthy when they do, according to a study published last month by Benjamin Toff, a journalism professor at the University of Minnesota, and Felix Simon, a Ph.D. student at the Oxford Internet Institute.

Cayce Myers, a communications professor at Virginia Tech, hedges a bit. Writing this month, he says it “likely depends on content, but overall, it is best to disclose when in doubt.”

In its “Ethics Handbook,” National Public Radio tells its staff that if AI “played a significant role in your reporting you should share that fact with your audience.”

Disclosure is required when “a substantial portion of the content used in the final version comes from the generative AI,” according to San Jose. Similarly, Seattle requires disclosure when it “is used substantively in a final product.”

What’s “substantial”? San Jose is studying the question.

12. Who makes the rules? Tempe establishes roles for city departments, including IT, and a Technology and Innovation Steering Committee. Is your policy merely a guideline, which can be disregarded when appropriate, or is it part of the personnel manual? What are the consequences for missteps or violations?

The answers to these questions don’t involve just practical concerns, as artificial intelligence scholar and consultant Lance Eliot has pointed out.

He’s wary of snappy quotes that oversimplify complex issues about the new technology.

In September 2022, shortly before the AI craze began, he wrote somewhat awkwardly: “It takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI ethics precepts.”

Tom Corfman is an attorney and senior consultant with Ragan Consulting Group.

Contact our client team to learn more about how we can help you with your communications. Follow RCG on LinkedIn and subscribe to our weekly newsletter here.
