The Power of the Prompt: How to Plug AI Into Your Content Engines

Ninety-seven percent of business owners believe ChatGPT will benefit their business, including one-third who say they’ll use it to write website content, according to a recent Forbes Advisor survey.

But these business leaders aren’t recklessly feeding their valuable brand insights and assets into these large language models’ (LLMs) data sets. Thirty percent express concern about AI’s potential to misinform their customers or their business, and 31% are apprehensive about its data security and privacy implications.

Beyond those survey findings, several big brands (Amazon, Apple, Verizon, and Wells Fargo) have set strict parameters around generative AI’s use.

Assuming you get the OK to give AI a go, you need to determine the best role and implementation for these tools in your content ecosystem.

An AI operations plan will let you make (and share) sound decisions about its use and governance. It also will help mitigate the risks of this rapidly evolving technology.

Some broad considerations for the plan:

  • Strategic: Set governance standards for how AI will and won’t be used across the enterprise. Determine what goals it’s best suited to help achieve and how you will measure its impact against the tech costs and time investment.
  • Operational: AI can tackle tedious, uninspiring tasks, freeing your team to focus on more creative and impactful marketing efforts. Look for areas where it can streamline production, clear workflow bottlenecks, and fill gaps in your team’s skills and capabilities. But don’t overlook ripple effects that may require you to rebalance team roles, add training, or realign collaborative processes.
  • Editorial: AI-generated content can introduce factual inaccuracies and draw faulty conclusions. The limited data sets used to train the tools can result in biased, ethically questionable, or uninspiring content, which reflects poorly on your brand and reduces engagement. To mitigate the risks, make sure you retain (and possibly retrain) your editorial staff to optimize generative AI content for quality, relevance, and audience value.
  • Legal and security: AI-generated content raises several legal concerns, including whether it’s safe to input your brand’s proprietary insights into these tools and whether your brand can claim legal ownership of the resulting content assets. Regulatory bodies are still working through these issues, so work with your legal team to establish clear policies that keep your brand’s secret sauce inside the bottle.

With those preliminary considerations and pitfalls identified, industry AI experts offer further insights and advice on the issues.

Start with a generative AI strategy, then add guardrails

Your first question on AI’s role in your content operations should be, “Where does it make the most sense to use it?”

Focus on impact

Trust Insights CEO Katie Robbert says you should ensure that incorporating AI aligns with your company’s core principles. “I would start with your mission and values and see if using artificial intelligence contradicts them,” she says.

You also should consider how the AI tools’ capabilities work with your marketing and business priorities. “Think about the questions you’ve been unable to answer or problems you’ve struggled to solve,” Katie says.

“To determine the best place to plug AI tools into your content operations, think about questions you’ve been unable to answer or problems you’ve struggled to solve.”

Next, consider where these AI tools can help increase brand value or marketing impact. Will they help expand your audience reach or enable you to branch into new creative areas?

Measure for problems solved as well as marketing impact

Your strategy also should consider how to quantify the value of the tools to your brand.

Most companies measure AI’s impact in terms of time – how much they can save or how much more they can do. That approach measures efficiency but not effectiveness, says Meghan Keaney Anderson, head of marketing at Jasper, an AI content generation tool.

She recommends A/B testing, where you pit AI-assisted content against human-created content on comparable topics. “Figure out which one fared better in terms of engagement rates, search traffic, and conversions to see if [AI] can match the quality at a faster pace,” Meghan says.
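To make that comparison concrete, here’s a minimal sketch of how you might check whether the difference in conversion rates between two comparable pieces is statistically meaningful. The function, variable names, and traffic numbers are all illustrative, and this assumes you’ve already pulled per-variant visit and conversion counts from your analytics tool:

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Compare conversion rates of two content variants.

    Variant A might be the human-written piece and variant B its
    AI-assisted counterpart. Returns (z_score, two_sided_p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: 10,000 visits per variant, 180 vs. 150 conversions
z, p = two_proportion_z_test(conv_a=180, n_a=10_000, conv_b=150, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a p-value under 0.05 suggests a real difference
```

The same test works for any countable outcome – newsletter signups, demo requests – as long as both variants received comparable traffic over the same period.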

Set unified policies

Develop a unified set of generative AI governance policies for managing potential risks and simplifying cross-team content collaborations.

If each team uses different tools or sets its own guidelines, safeguarding company data becomes more difficult, says Cathy McPhillips, chief growth officer at the Marketing Artificial Intelligence Institute.

“If one team uses ChatGPT, while others work with Jasper or Writer, for instance, governance decisions can become very fragmented and challenging to manage,” she says. “You would need to keep track of who’s using which tools, what data they’re inputting, and what guidance they’ll need to follow to protect your brand’s intellectual property.”

Generative AI for content operations

Creating marketing content is one way to use generative AI, but it may not be the most beneficial. Consider using it to streamline production processes, amplify your creative resources, or augment internal skills.

Explore efficiency gains

For example, use AI to tackle smaller, time-consuming assignments like writing search-optimized headlines, compiling outlines and executive summaries, and repurposing articles for social posts and promotions.
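As a concrete illustration, here’s what automating one of those small tasks might look like using OpenAI’s Python SDK. The model name and prompt wording are assumptions for the sketch – swap in whatever tool and model your governance policy sanctions:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def suggest_headlines(article_text: str, keyword: str, count: int = 5) -> str:
    """Ask the model for search-optimized headline candidates."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whichever model your team has approved
        messages=[
            {"role": "system",
             "content": "You are an editor who writes concise, search-optimized headlines."},
            {"role": "user",
             "content": f"Suggest {count} headlines (under 60 characters) targeting "
                        f"the keyword '{keyword}' for this article:\n\n{article_text}"},
        ],
    )
    return response.choices[0].message.content

print(suggest_headlines(article_text="<paste draft here>", keyword="generative AI governance"))
```

A human editor still picks and polishes the winner – the tool just gets you from zero to candidates faster.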

Jasper’s Meghan Keaney Anderson says this approach frees your team to explore new creative avenues and focus on work they’re more passionate about.

MAII’s Cathy McPhillips says she uses AI tools to balance her workload and help produce the company’s weekly podcast, The Marketing AI Show.

AI tools transcribe the podcast, help with sound editing, and create snippet videos for social media, saving her up to 20 hours a week. “Working with AI tools reduced the time I had to spend on marketing tactics that are critically important to the business but not things I love doing,” Cathy says. “That allows me to do more strategic, critical thinking I want and need to focus on but didn’t previously have the bandwidth for.”

“AI tools reduced my time spent on things I don’t love doing, allowing me to do more things I enjoy but didn’t have the bandwidth for.”

Fill skill gaps and resource shortfalls

AI can also fill in where your team lacks technical skills.

For example, Cathy isn’t a professional video editor. She uses the AI tool Descript to produce this content without learning the complex skill or hiring the required talent. “There is still a need for the professionals in many instances. But in the case of our podcast and webinars, I’m able to do these light lifts,” Cathy says. “A speaker reel? A hype reel for our conference? I save those for the pros.”

Communicate requirements clearly

Introducing AI tools usually requires some adjustments to the way your team works, such as adding steps to a collaborative workflow or rebalancing team members’ roles and responsibilities.

But it could also require you to provide training and guidance. For example, a team member who writes clear, concise content briefs might help train others on the team about crafting effective AI prompts. That knowledge would cut down on time wasted creating multiple iterations of your prompts, Katie says.
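One lightweight way to institutionalize that knowledge is a shared prompt template that mirrors the structure of a good content brief, so every team member feeds the tool the same fields in the same order. The sketch below is hypothetical – the fields and wording are placeholders to adapt to your own brief format:

```python
from dataclasses import dataclass

@dataclass
class ContentBrief:
    """The fields a strong content brief already captures map neatly onto a prompt."""
    topic: str
    audience: str
    goal: str
    tone: str
    key_points: list[str]

def brief_to_prompt(brief: ContentBrief) -> str:
    """Turn a structured brief into a consistent, reusable AI prompt."""
    points = "\n".join(f"- {p}" for p in brief.key_points)
    return (
        f"Write a draft about {brief.topic} for {brief.audience}.\n"
        f"Goal: {brief.goal}\n"
        f"Tone: {brief.tone}\n"
        f"Cover these points:\n{points}"
    )

prompt = brief_to_prompt(ContentBrief(
    topic="AI governance for content teams",
    audience="marketing operations leads",
    goal="explain why unified AI policies reduce risk",
    tone="practical and plainspoken",
    key_points=["fragmented tooling", "data security", "editorial oversight"],
))
print(prompt)
```

Because the template enforces the brief’s structure, writers spend less time rephrasing prompts and more time editing the output.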

Editorial concerns

Even if you approach AI content creation in an ad hoc fashion, you still need guardrails to ensure the team delivers high-quality content.

Check your facts

Generative AI can produce content with misleading or inaccurate information. This can happen if it doesn’t have access to enough relevant data in its training model to respond accurately to your prompt or if the data it used included inaccuracies. So, publishing AI-generated content without careful editorial oversight isn’t wise.

“AI is good at stringing together words, but it doesn’t understand the meaning behind those words. Make sure that you’ve got humans watching out for inaccuracies before your content goes out the door,” says Jasper’s Meghan Keaney Anderson.

“Make sure you’ve got humans watching out for AI inaccuracies before your content goes out the door.”

To manage this risk, she recommends investing in the journalistic skills involved in content creation – editing, fact-checking, and verifying sources – and building those steps into your production workflow.

Be mindful of mediocrity

Even if your AI-created copy is editorially impeccable, it can still come off as generic, bland, and uninspiring. Involving your human team at every step of the creative process can mitigate these challenges.

Trust Insights’ Katie Robbert says humans should carefully review and rework AI content output so it communicates in your brand’s distinct voice. To attract your audience, edit to ensure the copy conveys warmth and human emotion.

“Today’s audiences can tell the difference between content created by a person and generic copy created by artificial intelligence,” Katie says.

Though Katie acknowledges AI will learn to write more like a human over time, your brand voice also will evolve. “Your brand doesn’t stay static. Creatives should keep returning to the model to ensure its reflections stay current and accurate,” Katie says.

Beware of biases and ethical issues

Other side effects of working with AI tools may not be as obvious, but they’re potentially damaging to your brand’s reputation and audience trust.

Content that includes biased views or outdated perspectives is chief among those issues. Make sure your team keeps an eye out for this in the editing process. “It’s about making sure that you can stand by what you’re putting out in the world and that it’s representative of your customers,” Meghan says.

Legal concerns/IP security

Generative AI tools introduce tricky legal challenges. Your content team may not be equipped to manage these concerns independently, but they can still be held accountable for them.

Copyright concerns

AI has the potential to violate creative copyrights – a risk that stems from the way data gets collected and used to train the models. Concerns in this area swing both ways: Brands risk becoming the bad actor that publishes copyrighted material without appropriate attribution. They also can have their own copyrights violated by others.

Several class-action lawsuits are challenging the way OpenAI acquired data from the internet to train its ChatGPT tool. Earlier this year, stock image provider Getty Images sued Stable Diffusion’s parent company, Stability AI, for copyright infringement. More recently, Sarah Silverman and two other authors alleged that OpenAI’s ChatGPT and Meta’s LLaMA were trained on copyrighted material from their books.

While the U.S. Copyright Office has issued guidance indicating that AI-generated material isn’t eligible for the same copyright protections as human-created work, this issue is complex – and far from settled.

The onus is on your team to do its due diligence in accurately sourcing text and images generated with the help of AI tools – just as you would do with the content you create.

Privacy violations

Other external issues include maintaining the privacy of audience data typed into your content prompts. “Inputting protected health information or personally identifiable information is a big concern, and it’s something that companies need to be educated on,” Katie says.

If your team relies on open-source tools they find online, those risks may be heightened. “Make sure your team members aren’t using ‘rogue’ tools – ones their business hasn’t sanctioned or that are built by unknown individuals. They may not have the same strict security practices as other AI systems,” Meghan says.
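One practical safeguard against the risk Katie describes is a pre-filter that scrubs obvious personally identifiable information before a prompt ever leaves your systems. Here’s a minimal sketch – the regex patterns and placeholder labels are illustrative, and a production filter would need far broader coverage (or a dedicated redaction service):

```python
import re

# Illustrative patterns only; real PII detection needs much broader coverage
# (names, addresses, health identifiers) than a few regular expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace obvious PII with labeled placeholders before the prompt is sent."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact_pii("Follow up with jane.doe@example.com at 555-867-5309."))
# -> Follow up with [EMAIL REDACTED] at [PHONE REDACTED].
```

Routing every prompt through a filter like this – ideally enforced in a shared internal tool rather than left to individual discipline – turns a written policy into a working control.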

Revealing your brand’s secrets

Here’s another security-related concern: When you type your brand’s proprietary insights into AI prompts and search fields, that information may become part of the tool’s data set – and it could surface in results returned to another user who prompts on a similar topic.

If your prompt details unannounced products and services, your organization may view it as a leak of trade secrets. It could put you in legal jeopardy and harm your team’s reputation.

Exercising caution and discretion with proprietary data is vital to the safe use of generative AI. “We must be the stewards of our company, data, and customers because legal precedents will lag far behind,” Cathy says.

“We must be the stewards of our company, data, and customers because legal precedents will lag far behind.”

Consider implementing formal guidance on what teams can and can’t include in generative AI prompts. The City of Boston and media brand Wired have published interim guidelines covering internal uses like writing memos and public disclosures, as well as proofreading and fact-checking AI-generated content.

The Marketing AI Institute published its Responsible AI Manifesto for Marketing and Business. Cathy also thinks an internal AI council is a good idea. “It’s a way to gather with change agents from every department regularly to discuss the positive and negative business impacts of onboarding AI technology,” she says.

Put your operational expertise into all your generative AI efforts

The potential for generative AI tools to deliver process efficiency and creative flexibility appeals to overstretched marketing teams. But turning that potential into positive marketing outcomes is a job best managed through your human operational intelligence.   

Jodi Harris

Jodi Harris is director of content strategy at CMI. She describes her role as a combination of strategic alchemist, process architect, and creative explorer. Prior to this role, Jodi spent over a decade developing and managing content initiatives for brand clients in the entertainment, CPG, health care, technology, and biotech industries, as well as for agencies and media brands. Follow her on Twitter at @Joderama.