Are Authors Really Using AI to Write Books?


Last week, a survey titled "How Authors Are Thinking About AI" sent shock waves through the writing community, claiming that nearly half of authors are using generative AI to assist with their work.

Given that the use of generative AI has been a huge topic of debate in the book world, this survey unleashed a lot of opinions — most of them extremely negative — with some saying the results were startling and horrifying. Let’s walk through the results of the survey (including its limitations), set the record straight on its key findings about AI use among authors, and highlight what you need to know regarding where the publishing industry stands on generative AI.

First, let's talk about who was included in this survey, since that's going to tell us a lot about the angle of the results and how representative they are of the broader writing community. If you don't know, BookBub is a book discovery service that features free and discounted ebooks, used by both traditional publishers and self-publishing authors for marketing purposes. The survey was sent to BookBub's partners—meaning authors who have used the service to promote their books.

In this particular survey, of the 1,229 authors who responded, the vast majority were self-published; 6% were traditionally published, and 25% had both traditionally published and self-published books. BookBub also provided a breakdown of when respondents published their first book, as well as what genre they write in, indicating that the majority of respondents are writing novels rather than nonfiction. So, based on this, it's safe to say that the majority of respondents to the survey were self-published fiction authors.

On Threads, someone mentioned that the survey was voluntary and optional, sent to all BookBub partners, meaning that it might have been self-selecting to a certain degree. Perhaps more authors who are open to using AI responded to the survey, while more authors who are staunchly against it didn't respond at all.

So now that we have all of that context, let's get to the key findings. First and foremost is that scary statistic that caught fire online – the question was, "Do you use generative AI to assist with writing, marketing, or other aspects of your work as an author?" 45% of respondents said they are currently using generative AI; 48% do not currently use generative AI and are not planning to in the future; and 7% are not currently using generative AI but might in the future.

Most of the debate surrounding generative AI involves using it in the actual writing of a book — prompting a chatbot to write an entire scene or even an entire novel, then using what it spits out (or editing it somewhat) and publishing it under your name. Those who are against AI are especially concerned with how the LLM actually generated that writing, because many — if not all — models have been trained on books that were taken without the author's permission. On top of all this, there are also obvious quality concerns, as well as concerns about copyright, because copyright requires human authorship. But let's take a step back, because this statistic alone doesn't indicate that almost half of authors are using generative AI specifically to write their books—so let's dive a bit deeper into the survey.

The survey also asked, "How often do you use generative AI to assist with the following?" and listed several use cases, with the options to select "Never," "Rarely," "Occasionally," and "Frequently." The top use case was research, followed by copy or art for marketing purposes, then outlining or plotting, editing or proofreading, jacket copy, writing, cover art, other, audiobook narration, and finally, translation.

Let's zoom in specifically on writing, since I think this is the area that involves the biggest debate and concern. It looks like just over 25% of respondents who said they're using generative AI report using it for writing "frequently," and just over another 25% say they use it for writing "occasionally." Then, there's a decent percentage who say they "rarely" or "never" use it for writing. So, of the 45% of respondents using generative AI in some capacity, not all of them are using it specifically to write some or all of their books. As a rough back-of-envelope figure, "frequent" AI writers come out to only about 11% of all respondents (just over a quarter of the 45% who use AI at all).

Personally, I figured that the top use cases here would be related to research or marketing functions. I was surprised to see the use cases related to the actual generation of the story rank relatively high on the list—not only writing, but also outlining or plotting, and editing or proofreading. These results force us to think critically about the extent to which generative AI can and should be used in the book-writing and publishing process, and whether we condemn AI across all of these use cases or just some of them.

If we're open to using generative AI in some capacity, where do we draw the line? Is it ethical to use AI for research purposes, as long as it doesn't actually write any of your story? Can it help brainstorm plot ideas so long as you're the one actually putting the words on the page, or does that amount to the AI taking over a creative function that should be reserved for the human author?

You all know I personally don't think AI can or should replace a human editor, and the same goes for a human author—though I do see a difference between using a program like Grammarly to check for sentence-level issues and plugging your entire novel into a chatbot to ask it for developmental and structural feedback. This study could have been a bit clearer in defining exactly how, and to what extent, authors are using generative AI for these creative functions—something I would be curious to see.

There are some interesting quotes in the article from respondents who say they're conflicted about generative AI use:

  • "It's unethical to write an entire book, but I would be interested in using it to brainstorm copy for ads and blurbs."

  • "My feelings are conflicted. I can see the value in using these agents, but I know it means fewer jobs for all artists."

  • "I've used various AI in the past and found it very helpful, but I hesitate because I don't like what it means for our future. But is it inevitable?"

  • "I think using AI for creating is unethical, but if you're using it to read your own work so you can edit it, I think that's okay."

  • "I'm still thinking about the line between AI output and creativity."

  • "Using AI is not substantially different from asking someone (editor, friend, writing partner) for help and ideas."

  • "It lets me focus on the exciting parts of writing and avoid a lot of the drudgery. It's a great accelerator—I have two books in flight and can probably get them both out in the time it previously took to write one. But is there a line where the effort is no longer a creative endeavor—more machine than heart?"

The vast majority of respondents who say they do not use generative AI cite ethics as the primary reason for avoiding it. Many mentioned the issue of LLMs being trained on copyrighted material that's been used without authors' permission. While this makes sense — and I agree — I was surprised not to see quality concerns as another top reason for avoiding generative AI in the book-writing process.

Finally, the last interesting question from the survey was whether authors who use generative AI disclose that use to readers. Seventy-four percent of authors who use generative AI do not disclose their use. They seem to argue that readers don’t need to know "how the sausage gets made," and that delivering a satisfying story is really what matters most. We have no way of knowing what portion of these authors are actually using AI to write some or all of their books, but it does raise questions about whether and how the use of AI should be disclosed to readers, and how we're going to regulate this going forward.

The quote from the article that resonated with me the most was this:

"I just want to say that I'm not at all threatened by AI. I believe that my readers are interested in reading the human experience, and that is what I give them. AI will never be able to connect with readers the way I do. There's something so incredibly powerful and magical about knowing that there's a person on the other end of the words you're reading—which is completely hollowed out when AI takes over."

Prioritizing that connection to the human experience is exactly what I try to do in my work as an editor as well.

After reading the BookBub study, I wanted to brush up on how AI is being discussed in both the self-publishing and traditional publishing spaces, since so much is changing rapidly. What I found is that there's hardly a consensus or a standard policy. I know anecdotally that since ChatGPT was released, Amazon has been flooded with AI-generated novels, but I wanted to find Kindle Direct Publishing's official policy on AI use. Here’s what I discovered: they don't ban AI-generated content, but they require it to be disclosed. However, what they define as "AI-assisted content" does not need disclosure. Given the results of the BookBub survey, I wonder how much AI-generated content is actually being disclosed.

In the traditional publishing space, generative AI is being treated with extreme caution and skepticism. I couldn't find any official statements from the U.S. divisions of the Big Five houses, but Penguin UK does have an article on their website about the use of generative AI, which states that Penguin "champions human creativity" and "we do not see any technology as a substitute for human imagination," although they do state that they will "use generative AI tools selectively and responsibly." Hachette UK's website states that they are "opposed to machine creativity" in order to protect original creative content produced by humans.

Most publishing contracts require that the work an author submits is completely original to them, so using generative AI could violate those terms — especially considering that, for a work to be copyrighted, it must have human authorship.

Anecdotally, publishing professionals — especially those who work in trade fiction — have been extremely vocal against using generative AI for any creative function. In fact, more and more literary agents are screening for AI-generated material at the querying stage. For example, Andrea Brown Literary Agency's query instructions page states that no AI-generated work will be considered. Increasingly, agents are also including the question, "Was any part of this book or query package created by AI?" on their query forms.

So, if you're hoping for traditional publication—especially if you're writing fiction—your best bet is to steer clear of using generative AI to avoid any problems with your contracts or awkward conversations down the line. But even if you're self-publishing, I'd advise that you be extremely cautious and thoughtful about how you experiment with AI. The Authors Guild has a great guide to best practices for using generative AI responsibly and ethically, including how to disclose your use of AI.

It's clear that the AI conversation is far from over, and I'll be curious to see if more publishers put out strict policies surrounding the use of generative AI, particularly in fiction.

