Unprompted

A position statement on AI for communicators

I’m not an AI expert, and this is not a lecture. What follows is shaped by my experience inside the communications industry, personal experimentation, peer conversations, and a steady diet of reporting on AI from sources I trust. I’m not citing studies or footnoting statistics. Take it for what it is: one informed professional’s perspective, offered in the hope that it’s useful.

Two extremes, one story.

So many of us are somewhere in the middle, figuring things out a little at a time. We experiment through trial and error. Embrace what works. Discard what fails. We’re anxious and confused and captivated by AI. But the two most vocal camps are polar opposites: the feverishly converted and the indignantly unmoved. Both are reacting to the same oversimplified narrative from opposite directions.

The feverishly converted treat every new model release as a civilizational leap. The indignantly unmoved dismiss the whole thing as hype. And the melodrama from both sides is doing real damage—distorting expectations, short-circuiting smart decisions, and making a genuinely complex moment harder to navigate than it needs to be.

What both camps share, ironically, is a preference for certainty over nuance. The technology deserves serious engagement. The story around it deserves serious skepticism. Holding both at once is harder than picking a side—but it’s the position that will age well.

What the technology can actually do.

First, it’s important to acknowledge that AI platforms (including LLMs) are genuinely useful tools. They can accelerate work, reveal patterns in large volumes of material, and ramp up content production in ways that weren’t possible a few years ago. If you’ve experimented with them, you already know this. If you haven’t, you’re probably feeling the pressure to start.

In my own work, AI earns its keep with research and synthesis—summarizing transcripts, organizing large volumes of material, exploring alternate angles when I’m stuck, generating first-draft outlines on complex projects. These are all things that help me get more done without taking my judgment out of the equation.

Real advances are also happening at the strategic level: tools that help model cause-and-effect relationships in campaign data, adapt creative in real time, or pressure-test strategy before it goes to market. These are legitimate and worth your attention.

I’ve come to believe AI literacy is something we’re all going to need. And it seems like the professionals and organizations getting the most from these tools are those with strong foundations already in place—clear brand guidelines, an instinctive sense of what works, experienced human judgment integrated at every stage, and a culture that treats scrutiny as indispensable.

Where it falls short (and where the hype machine takes over).

What deserves more scrutiny is the grand narrative taking shape around AI. Adoption has accelerated faster than understanding, and the gap between what the technology can do and what the industry claims it can do is wide and growing.

Much of what organizations are calling AI-powered work is workflow automation, dynamic content templating, or rules-based personalization that has existed for years under different names. I love the term that’s emerged for it: AI washing. It describes the practice of attaching the AI label to existing capabilities for competitive positioning. It’s a lazy way to ride a hype machine that’s generating record investment, and most of the industry knows it, even if few will say so out loud.

Then there’s the acceleration problem. The time you save on production gets wiped out by hours of due diligence—fact-checking, error correction, verification. Because yes, AI is that good at making mistakes. And when you scale up content volume, your errors scale up too, even if individual outputs look clean. Sure, one solution is to use AI to clean up after itself. People are creating custom GPTs to do just that. But is that a virtuous cycle or a vicious one? And at what point do we simply need a human as part of the process to see it through?

Another risk: speed and volume becoming primary goals over quality. You get copy that’s just… ish. It reads cleanly and means nothing, generated by a probability engine rather than a point of view. The language is denatured, and your audience will sense it before they can name it. At the same time, people are growing suspicious of organic content that leans on devices, like em dashes, now considered AI tells. The instinct to look for authenticity is already shaping how writers write (are you afraid to use em dashes now?).

AI platforms are also built to feel like conversations with a knowledgeable, assured entity. That fluency is deliberate, and it’s useful right up until you start conflating articulation with accuracy. The tool sounds like it knows what it’s talking about whether it does or not. That’s not working out great for everyone’s mental health.

And the professional failures that result from this misperception are well-documented and still accumulating: fabricated quotes, invented citations, compliance violations, brand embarrassments—almost always from workflows where human review had been reduced, rushed, or skipped entirely.

Where I’ve landed.

After a few years of experimenting, talking with colleagues, and watching the field develop, my take is this: genuine curiosity about what these tools can do, combined with clear-eyed discipline about where they fall short.

In my work, AI handles the mechanical parts—research, synthesis, rough material. The writing itself starts before any words get typed: the thinking, the daydreaming, the Post-It-noting, the collaborating, the conversations that eventually become a point of view. That’s not something you can prompt on command.

For copywriters specifically, what stays human is core positioning, tone calls, and the judgment about what’s worth saying and what’s better left unsaid. These require someone willing to own the outcome. Hand those decisions to a tool and you don’t save time; you nullify the value the work was meant to create. That’s what I mean by denatured language: copy with no point of view, no ownership, no consequence. It sounds professional. It reads fine. It means nothing.

Before you ask what you should hand off to AI, ask what you shouldn’t. Engaging with AI can sometimes create more problems than it solves.

What this means in practice.

Due diligence—checking an LLM’s work—is a must. Every time. After every AI output, ask yourself: how do I confirm this independently? That single habit makes the difference between useful AI integration and horrendous mistakes.

Be skeptical of the category-defining language designed to make you feel like you’re already behind. That’s not coming from anybody who’s looking out for you.

From the looks of things, the roles of editors, strategists, and brand stewards are going to become more important in the months and years ahead, not less, precisely because someone needs to intervene when the machines don’t understand what actually stirs people. The insight that reframes the brief. The line that surprises even you. The instinct to say something everyone else is too cautious to claim.

In the end, the most powerful moments in communication are often unprompted.

* * *

So. What’s your take? I’d love to hear from you.


Q&A: AI in marketing communications

If I can use an AI chat for free, why would I pay for a copywriter?

Ouch but fair. If you need a lot of copy fast, AI might be enough. Where it struggles is anything that takes real judgment: what to say versus what to leave out, which claim will actually land, what tone fits when there’s no safe choice. AI predicts what sounds right. That’s not the same as knowing what’s true, what’s risky, or what your audience needs to hear. When the words matter, you need someone willing to stand behind them.

I’ve gotten some decent results from AI. What am I missing?

Probably not as much as some copywriters would like you to think. AI handles research, rough material, and variations pretty well. Where it gets you into trouble is when the output looks clean enough that you stop checking it. AI gets things wrong with complete confidence—bad citations, made-up sources, copy that reads fine but means nothing. The danger isn’t obvious failure. It’s the quiet mistakes that go live before anyone notices.

So should I be using AI in my marketing or not?

Yes, but go in with clear eyes. The people getting the most from these tools have solid brand foundations and real judgment built into the process, and they check everything. AI speeds things up, but humans make the calls that matter. That’s what actually works.

What does a copywriter do that AI can’t?

The real work starts before anyone types a word. It’s in the thinking, the conversations, and the gut sense of what this brand should and shouldn’t say. You can’t prompt your way to that. Specifically: positioning, tone calls when the stakes are high, knowing what’s worth saying versus what should stay unsaid. That takes someone willing to own the outcome. Give those decisions to a tool and you surrender the thing the work was supposed to do.

What risk am I probably underestimating?

Copy that seems fine but does nothing. When speed becomes the main goal, you end up with writing that’s correct, reads smoothly, and says nothing in particular. Your audience picks up on it even if they can’t explain why. As AI content fills every channel, the thing getting harder to find is someone who actually stands behind what they wrote.

How do I know when to use AI and when to call you?

Use AI where being wrong doesn’t cost much: research, rough drafts, outlines, options to test. Call a copywriter when the words define the brand, when a mistake has real consequences, or when you need someone to push back on the brief instead of just running with it. I use AI in my own work for exactly those lower-stakes tasks. The judgment calls stay mine.

* * *

Have a different question? I’d love to hear it.