There’s no getting around the fact that LLMs and AI and ChatGPT have revolutionized my profession. It’s now possible for anyone, with a few carefully crafted little “prompts” or questions, to generate a wall of text for their cover letter, blog or “research” paper. I typed this out like a Luddite, but you don’t have to stare at a blank screen for too long. The words will keep coming until you tell Bing or Rosemary or Gemini or whatever these things are named nowadays to stop.
HOWEVER, my writer friends, I would strongly advise you to take a breath before hitting Publish or Print, and look at the words. Read what they say. Edit that robot copy with human eyes and intelligence. It may seem like magic, but it is unlikely to be perfect. Here are some things I’ve learned to look for when dealing with AI copy.
Spelling and grammar
These are the errors that will stop readers in their tracks. I'm mainly speaking about English here, which is not a monolith but comes in many varieties – US, British, Canadian, Australian and more. There are regional differences in both spelling (-or vs. -our, -ize vs. -ise endings, for example) and usage of certain words (elevators and lifts, trunks and boots, oh my!). An AI-generated text is likely pulling from all kinds of sources, so decide which style you're using and be consistent. Running a spell check can help, but be sure to actually read the words, too, as it won't catch everything.
Sense
I've often seen AI-generated answers that attempt to address a user's very specific question. They sound convincing, until you check the source and realize the original copy was talking about something completely different. Be curious about everything the AI says, and ask yourself if it makes sense. Do a quick search and see if it lines up with a generally accepted answer that's already out there. Wikipedia, IMDb, and the like are your friends. Be skeptical of those Quora and Reddit threads that get pushed to the top of search results. (I am a Reddit addict, but it's a spicy stew of people and bots, so not exactly reliable…)
Be especially curious about anything to do with health or money or legal questions, or recipes and how-to processes. Are they complete, with all possible steps represented? The answer is likely no. You have to look at what AI spits out as a starting point, at best. I’ve seen plenty of car repair and home DIY processes that start somewhere in the middle, leaving out a bunch of preparatory steps and often the finishing ones too. People can get seriously hurt or screw up their homes with these kinds of reckless instructions.
Repetition
Another thing AI does is repeat itself. Watch out for sentences that begin and end with the same sort of wording, such as an introductory phrase followed by a list that ends with a summation echoing the introduction. If a list is longer than 4 or 5 items, look for entries with similar meanings. Again, AI may be pulling information from all different sources and combining them, so it repeats the same ideas in different ways. Look for ways to cut and combine them. Your reader will thank you for not making them wade through Bing's babble to get to the point.
Fictitious sources
Formatting citations and references in academic papers is nobody's favourite thing to do. It's great that automatic tools exist to line up that precious order of author, date, title, journal, issue number, page numbers, etc. But they won't check whether the source ACTUALLY EXISTS. AI makes up not only references to academic journals, but also legal cases and health studies. You still have to cross-reference by searching for the title (yay, Google Scholar). If you can't find it there, try to locate the journal online and look at the table of contents for that issue to see if a paper of that title by those authors is in it.
Luckily, almost all legit journal articles have something called a DOI (digital object identifier) attached to them. If your reference has one, and it links to a matching source, you're in the clear. If not, get curious, and if in doubt, delete. Sorry.
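If you're comfortable with a little code, you can automate that first pass. Here's a rough Python sketch, assuming the requests library and the public Crossref API (which covers most, though not all, registered journal DOIs). It only tells you whether a DOI is registered somewhere, so you still have to confirm the title and authors actually match your reference.

```python
# Rough sketch: ask the public Crossref API whether a DOI is registered.
# Assumes the third-party "requests" library is installed (pip install requests).
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record registered under this DOI."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return response.status_code == 200

# Usage: paste in the DOI from the reference you're checking.
# "10.xxxx/example-doi" below is a hypothetical placeholder, not a real DOI.
if doi_exists("10.xxxx/example-doi"):
    print("A registered record exists - now confirm the title and authors match.")
else:
    print("No record found - treat this reference with suspicion.")
```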
Your voice matters!
AI is a handy tool, but one of the reasons not to love it is that it writes beige, bland, and boring copy. It's programmed to be objective and factual, like a textbook. As a person who evaluates AI text as part of my job, I'm supposed to be on the lookout for any traces of swearing, slang, stereotypes and harmful ideas. It's supposed to be safe and inoffensive to as wide an audience as possible. Which is all well and good, but it doesn't promote novel ideas or individual voices either.
I think the question you have to ask, when considering how to use your AI-generated bounty of words, is, "Does this sound like something I (or my organization) would say?" Take some time. Mix it up. Edit out phrases that don't ring true for you, or that simply aren't true. Add your own flavour. Let the robots do some of the heavy lifting of drafting, so you can have fun with revising and customizing it. Or have an editor do it. I'm here for you!
