By Steve Donoghue
A long time ago, I knew a hippy, dippy, trippy writer who also fancied himself a poet, if you can fancy yourself a poet without knowing anything about rhythm, rhyme, scansion, meter, literature, or history and without having any voice, discipline, or even vocabulary of your own. At parties he’d often introduce himself as a poet, and when I immediately afterwards introduced myself as the rightful heir to the Aztec Empire, people usually got the point I was trying to make.
This hippy, dippy, trippy writer was wonderful company at those parties and at smaller-scale gatherings at my apartment, since he had a gentle, funny personality and a way of turning ordinary conversations into something special just by how amusingly odd he could be. Once, for instance, he decided to compose a new poem on his word processor, and while he was taking a break to tell us all about some dream he’d had the night before, one of my dogs happily slapped his paws across the keyboard, making a few lines of jumbled nonsense on the screen – and the hippy, dippy, trippy writer decided to leave them there, lodged in the middle of his poem. When he saw how puzzled we all were, he shrugged and said, “It’ll make sense in context.”
All these years later, I was reminded of that quote by this whole noisy ongoing conversation about Artificial Intelligence (AI). You’ve probably heard some of that conversation, particularly about so-called “generative” AI. Normal, garden-variety AI is very good at collating data, solving problems, and even making predictions. It’s normal, garden-variety AI that your airplanes use in order to fly their routes, your video streaming services use in order to recommend you shows, your doctor uses in order to turn your incoherent complaints into a possible diagnosis.
It’s a marvel of algorithmic extrapolation, but there’s another kind of AI: so-called “generative” AI. It doesn’t just suck up data and make calculations. It sucks up data and makes new data.
Generative AI has been in the news lately because if you feed it a bunch of images, it’ll produce its own images in the same visual style. If you feed it some video footage, it’ll turn out matching video footage of its own. And more importantly, for book lovers, if you feed it a gigantic block of text – say, the complete novels of Lawrence Block or Nora Roberts – it’ll turn out gigantic blocks of new prose along the same lines. And after a few tries, after a few misses and tweaks, the AI-generated prose will be all but indistinguishable from the original.
You can see why that would set off a few preliminary shockwaves in the writing world, yes? Living, breathing authors have writer’s block from time to time. They make the occasional demand, through their agents, for better wages or the occasional junket. They sometimes need to be bailed out. And even if they’re well-behaved little angels, they typically can’t write more than a book or two a year. Generative AI doesn’t do any of those things – and it doesn’t need to be paid. No wonder so many writers out there are feeling a bit weak in the knees.
That ripple of alarm hasn’t stopped some eager-beaver companies from jumping at the starting pistol. I read about one of them just recently, a thing called “Sudowrite,” which bills itself as “the non-judgmental, always-there-to-read-one-more-draft, never-runs-out-of-ideas-even-at-3am AI writing partner you always wanted.” Sudowrite has been aggressively advertising itself on various social media, underscoring how friendly it is by highlighting how many published authors it has suborned into using its product. Writers keep giving testimonials about how Sudowrite’s generative AI helped them break down scenes, clarify character storylines, or go ahead and write their next six chapters in three hours instead of seven months and four nervous breakdowns.
Since this is the 21st century and we’re all living in the Age of the Bald-Faced Lie, it’s difficult to know how much if any of this marketing campaign is actually true. At first glance, it seems bizarre that any writer would sign on to advertise their cooperation with a piece of technology that could so easily replace them.
But could it really replace them? That’s the key question in this whole discussion, of course. Right now, even the best of those generative AI programs often produce things like that hippy-dippy poet’s dog-paw poetry: if there isn’t a human writer overseeing the process in order to smooth out the nonsense and provide context, there are often glitches, sometimes whole pages of them.
The obvious point here is that unlike human writers, these generative AI programs will just keep improving. This time next year, those glitches will be long gone. And the year after that? Will there be any flesh-and-blood writers still working? Time will tell.