Acceptably Bad
Why I'm not worried about AI writing, and not that impressed either
I’ve been playing a lot with AI. I’ve used it for work projects. It makes impressive-looking things quickly, but they don’t feel more useful than things I make by hand. They’re more verbose; AI is very wordy. That makes AI perfect for most business documents: documents that aren’t meant to be read, whose value comes more from their existence than from what they say.
I’ve used AI to experiment with revitalizing my dormant novel. It was fun, but likely only fun for me. It’d be dull reading. There was a passage about the nature of shoes — “What is shoe?” like “What is justice?” — that went on way too long.

The real deal killer is that when I write with AI, there’s no satisfaction in having written something. I suspect that absence of satisfaction is meaningless to the reader, but it’s meaningful to me. I’ve written things I felt good about that no one else enjoyed, and I’ve been disappointed in some of my writing that other people admired. The words matter, not the writer, or the writer’s feelings about the words.
That’s why I’m puzzled by the sense of betrayal people sometimes feel on discovering something was written by AI. I remember when word processors were thought to be the devil’s instruments. No one would know how to spell. I couldn’t spell before word processors and can’t today. I carried a little red book to look up words all through middle school. The book didn’t improve my spelling or my writing a whit. I didn’t care then that I couldn’t spell, and still don’t. What I hate is that I can’t write anything with a pen anymore. I have to type everything, and I can’t type, either.
I had to teach diagramming and grammar to 9th graders because of the moral panic of experienced English teachers. They believed diagramming was an essential skill that was dying out. Diagramming and grammar don’t help anyone write well. We care about grammar and usage because using them poorly makes us sound stupid, unless we’re really good at writing. Great grammar never guaranteed great writing. Great AI doesn’t either, though I bet it will one day.
Some of us feel compelled to say, “AI helped me write this.” Soon we may have to apologize for not using AI. It will be like writing with a quill pen or not running spell check. It’s not the tools, it’s the words that matter. Tools just raise the level of what’s acceptably bad. Years ago I could write something illegible with my bad penmanship. Now it’s like, “Why didn’t you type this?”
AI is getting better, but it’s missing something. That doesn’t mean it won’t write better than humans someday, whatever better means. More enjoyable? Funnier? More profound or beautiful or truthful? It probably will. I imagine the most popular writer in the world a decade from now will be an AI. Popularity is always a sign of something being bad. Most people have terrible taste.
Instead of training on what’s already written, a future AI could be trained on what we like and reproduce that. It already does that by checking our responses, just like a dog or an employee. Once AI finds the writing people respond to, it will write better than 99.9% of all writers because it will be popular by design. If “better writing” equals “sells more books,” then AI is destined to win. Nothing can adapt to the tastes of the market faster than AI. That’s why publishers are fools to ban it, but they were fools long before AI.
What’s popular is what’s good, yes? Of course not, but that’s how the publishing business defines good. What AI is missing isn’t genius or greatness. It’s missing “the boredom, and the horror, and the glory,” as my favorite poet, Donald Justice, says when he quotes T.S. Eliot. It’s missing humanity because it’s not human. It’s flashing lights, ones and zeros, predictions and training. It’s an imitation of a human voice, and it has no understanding.
A developer friend told me that AI should probably write software code in a language humans can’t read. AI is great at mundane tasks like grammar correction, spell checking, and writing software code. We should work at a different level. AI does the drudgery. We aren’t supposed to be living at the code level when we work with AI. That’s what the AI is for, to write code. We need to know, “What do I want this to be? What do I want it to do?” An artist asks the same questions of their art - or doesn’t. Art benefits from the gifts of happy accidents as much as any other endeavor.
Another idea going around is that AI has no responsibility. Neither do cars or cameras. The people who use AI have responsibility. I have reputational risk every time I submit any work to a client. That risk only grows when I use AI. If it’s done with AI, it better be good. It’s easier to forgive simple errors than it is negligence. If I get hot coffee spilled on me in a McDonald’s drive-through, the jury won’t punish the McDonald’s employee — they’ll punish McDonald’s. McDonald’s is responsible for the work of their employees, and AI is just an employee. It’s no better than its management.
I’m going to conclude with a quote from Montaigne, not because it has anything to do with this essay, but because I love Montaigne, and it’s what I’m reading now.
“The atheists establish, says Plato, by the reason of their judgment, that what is said about hell and future punishments is fiction. But when the chance to test this is offered as old age or illnesses bring them near their death, the terror of it fills them with a new belief through horror at their coming condition.” — Montaigne

