Any writer who follows blogs has seen advice that certain words “stop” your reader: adverbs, “weak” words, “filter words.” Dialogue tags other than “said.” The word “that.” The word “was.” Writers carry out search-and-delete missions in their documents, hunting down these toxic words. No one wants to risk alienating a reader.
As some of you know, I am a self-declared rule quibbler. Not that I’m a fan of bad writing, but when I read these blog sermons, I wonder what evidence supports their declarations. Has anyone carried out a scientific study of how readers actually process written fiction?
Book sales may be taken as an indicator of effective writing, but as most of us know, buying a book does not necessarily equal reading it or enjoying it. Maybe sales are more an indicator of effective marketing than of brilliant writing.
There are peer-reviewed academic journals on the subject: Reading Research Quarterly, for example, and the Journal of Research in Reading. From my admittedly cursory look at the sorts of articles that appear in them, the main focus of the research they publish is how people learn to read and comprehend written language, and not so much what constitutes compelling fiction.
Is there a way to quantify good writing? Do certain words bore or otherwise alienate readers? How might such a study look?
Here’s my idea and thought process: test subjects are given two different versions of a piece of writing long enough to hold a reader’s attention for more than a few minutes, a couple of thousand words, perhaps Chapter 1 of a novel. One version follows all the rules about words not to use; the other breaks them. Both texts have the same storyline but different vocabularies.
After reading, test subjects would be asked which version would incline them to read further. But wait: should a single test subject see both samples or only one? By the time they read the second, they would already know the plot, creating a spoiler effect. So maybe we need two stories, i.e., four different texts. Each subject gets one text from each story: one that follows the rules and one that does not. Because the stories are different, the “I’ve seen this already” effect is avoided.
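Just as a thought experiment, the two-story design above can be sketched as a simple counterbalancing scheme. This is only an illustrative sketch; the story names, version labels, and the `assign_texts` helper are all invented for the example, not part of any real study:

```python
from itertools import cycle

# Hypothetical setup: two stories, each prepared in a rule-following
# version and a rule-breaking version. Names here are placeholders.
PAIRINGS = [
    [("Story A", "follows rules"), ("Story B", "breaks rules")],
    [("Story A", "breaks rules"), ("Story B", "follows rules")],
]

def assign_texts(subject_ids):
    """Give each subject one version of each story, alternating which
    story gets the rule-following treatment across subjects, so both
    pairings are tested and no subject sees the same plot twice."""
    return {sid: pairing for sid, pairing in zip(subject_ids, cycle(PAIRINGS))}

assignments = assign_texts([1, 2, 3, 4])
# Subject 1 reads the rule-following Story A and the rule-breaking Story B;
# subject 2 gets the opposite pairing, and so on alternately.
```

With an even number of subjects, each (story, version) combination is read equally often, which is what lets the researcher compare reactions to the wording rather than to the plot.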
But surely it would also be necessary to minimize differences in reader preference. The test subjects would have to be matched with their preferred types of fiction; if a subject reacts unfavourably to the genre of the text rather than to the words used, the test wouldn’t be valid. So the researcher would have to interview potential subjects to assemble a pool whose members are similar to one another in how much time they devote to reading, the types of fiction they prefer, and so on.
Carefully devised follow-up questions would be needed to elicit and quantify the effect of specific words on individual reading experiences. Formulating such questions is a field of study in itself.
My conclusion: devising, carrying out, and writing up credible experiments is not a simple matter.
The closest I got to an actual study of the kind I’ve envisioned is a paper published in 1988, entitled “The Psychology of Reading for Pleasure: Needs and Gratifications.” It describes five different studies on different aspects of the reading experience. The two that seemed most relevant to my question examined reading speed and readers’ rankings of texts for preference, merit, and difficulty. There was even a study of readers’ physiological reactions to different texts. A cursory look at this paper shows how complex and elaborate a scientific study of reading can be.
The works from which the test texts were drawn are varied, spanning fiction and nonfiction, literary classics, and genre fiction. Authors include Jane Austen, Saul Bellow, Louis L’Amour, Ayn Rand, Graham Greene, Hunter S. Thompson, James Michener, Ian Fleming, Essie Summers, Arthur Hailey, Joseph Conrad, Agatha Christie, and W. Somerset Maugham. The most recent publication date is 1975.
One thing I found interesting was that some of the books are labelled “trash” by the study’s author. The test subjects showed a preference for this “trash” as pleasure reading material, but at the same time they assigned higher ranks for merit to “elite” works that were harder to read. The final page of the paper shows extracts from three of the works, along with the ratings they were assigned.
Despite the “trash” label, the study does not analyze the writing itself, only the test subjects’ responses to it. While the studies documented in this paper don’t answer my question, they are examples of the kind of effort needed to obtain solid data on reading, and by extension, on writing.
The paper does contain some great academic terms. One that jumped out at me is ludic reading, which means “reading for pleasure.” Books can be called “ludic vehicles.” So, fellow writers, that’s what we’re trying to do: turn our books into ludic vehicles to transport readers into realms of the imagination.
My final thought (for now): Read this blog post, which contains a short piece of fiction that deliberately breaks all kinds of writing rules. I couldn’t stop reading, which suggests the words an author uses aren’t as important as the way she or he arranges them (and a few other factors too, of course).
Is anyone aware of any scientific studies on the effectiveness of specific words on recreational reading? Is there any objective science to back up the “rules” for writers? Or is it just a matter of, “Well, everyone knows…”?