
Language & Technology:
Is Predictive Text Leading to Predictive Thought?

     What makes something human? Beyond Aristotle’s description of humans as rational animals, Melissa Hogenboom, author of the BBC article “The Traits That Make Human Beings Unique,” states that “complex reasoning abilities…language-learning abilities…superior social skills…a unique ability to understand the beliefs of another person…[and] a fundamental urge to link our minds together” are human-specific characteristics. Even more uniquely, though, “Ever since we learned to write, we have documented how special we are…we are the only ones who peer into their world and write books about it” (Hogenboom). We have a record of the past because of writing, art, and language. Without them, we wouldn’t have advanced so differently from other animals. Something else that’s inherently human is laziness. No matter how ambitious, creative, insightful, and different we are, we still want things to be as quick and easy as possible. Sometimes we sacrifice quite a bit just to save time and energy, and what is often sacrificed is the ability to actually be human and do the very things that make us human. Spellcheck, Autocorrect, grammar check, Grammarly, Predictive Text, Autocomplete, Smart Compose, and Smart Reply (which I will generally refer to as “Language-Aiding Technology”) all affect our ability to use and process language, and all the while these technologies are attempting to replace very human skills: reasoning, language, sociability, empathy, and, of course, writing.

     Language-aiding technology works by analyzing large pools of data (word combinations that are frequently used together, dictionaries, words that are frequently misspelled), context (what you’ve already typed), and personal style (what you typically write) to attempt to create accurate and cohesive sentences. The algorithms make predictions to form words and sentences that would “make sense” according to what they already know (what other users tend to type; what’s most popular). Gideon Lewis-Kraus, author of “The FASINATNG... Fascinating History of Autocorrect,” explains that “petabytes of public words are examined to decide when a usage is popular enough to become a probabilistically savvy replacement. …keyboard proximity, phonetic similarity, linguistic context” all factor into the computer’s suggestions, but it comes down to popularity and probability. All of these language-aiding technologies learn more about language the more we use them, so any new and creative human-generated writing only contributes to the pool of data used to broaden the range of computer-generated writing. The more we try to be creative, the more the computers understand about humans. If everyone starts writing like F. Scott Fitzgerald, Walt Whitman, Charles Dickens, the Brontë sisters, Sylvia Plath, and Virginia Woolf, computers will learn to write like that, too. How unique and creative are humans really, then, if we can teach a computer to imitate us so well that it’s almost indistinguishable? Would you be able to tell if someone sent you an email or text message with Predictive Text or Smart Reply? Ever since Alan Turing proposed his “imitation game” some seventy years ago, computers have been learning how to be human, and the more they learn, the closer they get to passing his test. The more we interact with machines, the more similar we become.
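The popularity-and-probability idea can be sketched in a few lines of code. This is a hypothetical toy, not how any real keyboard or Smart Compose is actually built: it simply counts, in a tiny sample of text, which word most often follows the word you just typed, and suggests the most popular follower.

```python
from collections import Counter, defaultdict

# A toy "pool of data"; real systems examine petabytes of public text.
corpus = "have a good weekend have a good day have a nice day have fun".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    """Suggest the most popular next word, or nothing if the word is unknown."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("a"))     # "good" wins: it follows "a" twice, "nice" only once
print(predict("have"))  # "a" is the most common follower of "have"
```

Notice that the suggestion is nothing more than a vote: whatever most writers in the data said before becomes what the next writer is nudged to say.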

     If you’ve ever used a cell phone to type anything, you’ve noticed the string of suggested words that pops up above your keyboard and changes based on what you type. It makes for quick and easy responses. Someone texts you “Thanks so much!” and you don’t even have to type – the phone will have “You’re welcome!” prompted at the top of your keyboard. This Predictive Text technology is also integrated into email servers. In John Seabrook’s “The Next Word” – an article featured in The New Yorker about AI and Predictive Text – Paul Lambert, who oversees Smart Compose for Google, explains:

“At any point in what you’re writing, we have a guess about what the next number of words will be,” Lambert explained. To do that, the A.I. factors a number of different probability calculations into the “state” of the e-mail you’re in the middle of writing. “The state is informed by a number of things…including everything you have written in that e-mail up until now, so every time you insert a new word the system updates the state and reprocesses the whole thing.” The day of the week you’re writing the e-mail is one of the things that inform the state. “So,” he said, “if you write ‘Have a’ on a Friday, it’s much more likely to predict ‘good weekend’ than if it’s on a Tuesday” (Seabrook).

While composing an email in Outlook or Gmail, text predictions pop up to complete your word, phrase, thought, or sentence as you type. All you have to do is press the Tab key and your sentence will be filled in. This certainly saves time – “Smart Compose saves users altogether two billion keystrokes a week” (Seabrook) – if it accurately predicts what you were going to say, but it also has the potential to sway or change what you actually do say. Seabrook recalls:

Typing an e-mail to my son, I began “I am p—” and was about to write “pleased” when predictive text suggested “proud of you.” I am proud of you. Wow, I don’t say that enough. And clearly Smart Compose thinks that’s what most fathers in my state say to their sons in e-mails. I hit Tab. No biggie.

And yet, sitting there at the keyboard, I could feel the uncanny valley prickling my neck. It wasn’t that Smart Compose had guessed correctly where my thoughts were headed—in fact, it hadn’t. The creepy thing was that the machine was more thoughtful than I was (Seabrook).

Does this mean that computers are becoming more thoughtful than we are? Of course it doesn’t. But every day they get closer. Soon enough, we won’t have to type any emails or text messages at all, as Predictive Text and Smart Reply will have a response ready for us. Our own personal secretary. It sounds nice not to be bogged down with thinking out and typing a thoughtful email, and instead to just click a few buttons and send it away. But what does that do to our language skills and our ability to think creatively? “We don't speak through language…language speaks through us. We are merely its clumsy-thumbed vessels” (Lewis-Kraus). Our face-to-face social skills have already been affected by the ubiquity of text messaging; in real life, we don’t have the luxury of leaving someone on read for an hour while we think of how to respond, and we certainly don’t have text bubbles prompting us with a mostly accurate response – we have to come up with responses on the fly. If we become reliant on Smart Reply, Smart Compose, and Predictive Text, won’t that affect our ability to actually come up with our own responses? To actually use language, reasoning, empathy, and sociability?

     Have we given Artificial Intelligence the right to our own thoughts and creative processes? “We write something and immediately take responsibility for it; we see something in the world and, as charitable interpreters, want to believe that it contains meaning” (Lewis-Kraus). The way we write is our personality on a page. Ourselves in text. Here I am, typing this essay and trying to come up with a slew of words that both make sense and sound like me. I take pride in what I write and will often spend an hour or more composing the perfect email, just to make sure that it’s clear and can’t be misconstrued in any way. Upon hearing the frequency with which my peers use Smart Reply, I was shocked. I write my own emails! Anything else is fraud! It’s plagiarism! How could you let the machine speak on your behalf?! When using Smart Reply, Smart Compose, and Predictive Text, you’re ultimately letting the computer think for you. How utterly disingenuous. Don’t know what to say? That’s ok, leave the human-to-human contact part up to the computer. It might come up with something more empathetic and genuine than you would’ve.

      Autocorrect and spellcheck affect language in a different way than Predictive Text, Smart Reply, and Smart Compose do, but like those tools, they were created to streamline the writing process. “The whole notion of touchscreen typing, where our podgy physical fingers are expected to land with precision on tiny virtual keys, is viable only when we have some serious software to tidy up after us” (Lewis-Kraus). Writing is creative and artistic, but it is also technical and methodical – “a little bit of creativity and a whole lot of scutwork” – according to Dean Hachamovitch (qtd. in Lewis-Kraus). No one wants to slow down their typing to the point where they have to make sure every single word is spelled correctly, which is why the autocorrect/spellcheck feature was created in the first place. Leave the technical stuff to the computer and the creative stuff to the humans. But it’s gotten to the point where spelling hardly matters because we know the computer will fix it for us; Gideon Lewis-Kraus emphasizes this point by saying “because we know autocorrect is there as brace and cushion, we're free to write with increased abandon, at times and in places where writing would otherwise be impossible.” I don’t even have to worry about words like “becuase,” “reccomend,” or “languare” – my computer will correct them automatically (keeping those words spelled incorrectly was a much more daunting task than letting them be automatically corrected). This is a fairly common tactic, as John Seabrook of The New Yorker also says, “Now that spell-checkers are ubiquitous in word-processing software, I’ve stopped even trying to spell anymore—I just get close enough to let the machine guess the word I’m struggling to form.” But how often are these suggestions and corrections incorrect?
How often does spellcheck suggest “defiantly” for a misspelling of “definitely”? While typing “youre,” the first suggestion to come up is “your” instead of the more obvious (and correct) “you’re.” Yes, spellcheck suggestions are just that – suggestions – but they’ve become staples for writing. As a result, we are losing our ability to spell altogether (and the ability to spell “altogether.” Or is it “all together”?). Without the aid of spellcheck, we’d be making errors all over the place and probably wouldn’t even notice them. While writing by hand, relatively commonplace words suddenly become impossible to spell because nothing is there to fix them for us (is it “dissapear” or “disappear”? “appearance” or “appearence”?). Spellcheck is a crutch, and we all rely on it.
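The core of that crutch can be sketched very simply. The following is a simplified illustration, assuming a toy five-word dictionary (real spellcheckers, per Lewis-Kraus, also weigh keyboard proximity, phonetic similarity, and popularity): it replaces a typo with whichever known word requires the fewest single-letter edits to reach.

```python
def edit_distance(a, b):
    """Levenshtein distance: the fewest single-character insertions,
    deletions, or substitutions needed to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        row = [i]
        for j, cb in enumerate(b, 1):
            row.append(min(prev[j] + 1,                 # delete from a
                           row[j - 1] + 1,              # insert into a
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = row
    return prev[-1]

DICTIONARY = ["because", "recommend", "language", "definitely", "defiantly"]

def correct(typo):
    """Replace a typo with the nearest dictionary word by edit distance."""
    return min(DICTIONARY, key=lambda word: edit_distance(typo, word))

print(correct("becuase"))    # "because"
print(correct("reccomend"))  # "recommend"
print(correct("languare"))   # "language"
```

Because the fix is a blind nearest-neighbor vote, a misspelling that happens to sit close to the wrong word gets “corrected” to it with total confidence – which is exactly how “definitely” can come out “defiantly.”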

     Then there’s Grammarly and grammar check, which insist that artistry be taken out of writing. These programs absolutely hate double words and the passive voice, and they always want to cut down on words and say everything as concisely as possible. Write a fragment? End a sentence with a preposition? Use a few too many commas? Say hello to that friendly little blue line telling you to change your words around to be less unique and more “correct.” Yes, these tools are useful if you’re just learning, but at what point do they make writers plateau into writing only what grammar check and Grammarly deem correct? Writing in the style of typicalness instead of uniqueness or creativity, much like how Smart Reply and Smart Compose want us to write. Oftentimes, grammar check and Grammarly will highlight something they deem “incorrect” when it’s very much a style choice or personal preference (“have to” always draws the suggestion “must,” and “hand-in-hand” should apparently be “together”). I typically use a lot of extra words and probably (definitely) too many commas in my writing, but I do that to make the sentence flow the way I want it to. To make it sound like myself. It’s tempting, though, to change all of my “incorrect” phrases to appease grammar check (see, computer? No errors here!), but I would be giving up my style and voice, and I would essentially be telling the computer that its writing style is better than mine. If we all abide by the suggestions from the computer, we are choosing to write like computers. We have deemed the computer’s composition to be the best option, and because computers learn from everyone, becoming more like computers also means we are becoming more like each other.

     Kurt Vonnegut’s 1961 story “Harrison Bergeron” takes place in a world where everyone is made to be equal, and individuals receive specific handicaps to ensure that they are equal to everyone else; someone with superior beauty, intelligence, or strength is burdened with more handicaps than someone who is naturally average. Today, Smart Reply, Smart Compose, Predictive Text, Spellcheck, grammar check, and Grammarly all act as those handicaps for writers. These technologies suggest taking your writing down a “traditional” route – the route that is more popular, more average, more common, and more computer-like. Writing is becoming less personal and human and more generic and mechanical as language-aiding technologies standardize writing to the point where everyone writes in the same style. The new writing standard isn’t creativity or uniqueness, but typicalness and predictability. These technologies have all “become an index of the most popular way to spell and order certain words” (Lewis-Kraus). Whatever is most popular becomes the standard. Yes, this is how language has always worked, but is learning to write and speak in the most “popular” or “common” way really what we want? To lose all sense of identity in writing and sound like everyone else? To be indistinguishable from a computer? Is this just the next piece in the evolution of language? Or are we handing the human act of writing over to technology?

     If we are handing writing over to technology, what does that mean for language? Word predictions and replacements are almost always polite, sensible, agreeable, and concise. As Natt Garun of The Verge explains when using Gmail’s Smart Reply, “the responses tend to veer toward affirmative answers, so they may not work best if you’re less prone to agreeing to everything.” John Seabrook would have to agree, as sending “I am proud of you” is much nicer than “I am pleased.” Language-aiding technologies not only make what we say more common and typical, but also more tame and censored. To add another dystopian story to the mix, George Orwell’s Nineteen Eighty-Four sheds light on the problem with censoring and altering language:

We’re getting the language into its final shape—the shape it’s going to have when nobody speaks anything else. When we’ve finished with it, people like you will have to learn it all over again. You think, I dare say, that our chief job is inventing new words. But not a bit of it! We’re destroying words—scores of them, hundreds of them, every day. We’re cutting the language down to the bone (Orwell, 66).

Cutting language down to the bone. Simplifying language. Have you ever seen language-aiding technologies suggest a word like “superfluous” when you were trying to say something was excessive? No; they suggest something much more ordinary, making writing more mundane, characterless, and drab – diluting everything into pablum.

Don’t you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it. Every concept that can ever be needed, will be expressed by exactly one word, with its meaning rigidly defined and all its subsidiary meanings rubbed out and forgotten (Orwell, 67).

If we continue to rely on computers for aiding our writing and language skills, our language skills will continue to deteriorate, and the gap between humans and computers will continue to shrink.

     Humans do amazing things: build cities, cure diseases, write books, give speeches, compete in sports, explore the world, give lectures…all things possible because of our use of language, reasoning, empathy, sociability, writing, and our desire to teach and learn. But with all of that ambition comes laziness, and we have created technology that can be taught to do the dirty work for us. Spellcheck, Autocorrect, grammar check, Grammarly, Predictive Text, Autocomplete, Smart Compose, and Smart Reply all affect how we use language in our daily lives. While computers are learning language from us, we are learning language from them. If the computer tells us we’ve made a spelling mistake or should probably phrase something this way and not that, we’re inclined to listen. If the computer prompted a better response than I was going to give, why shouldn’t I let it? We have used our uniquely human traits to teach computers how to imitate and learn those very same traits. It seems harmless, but we are giving computers permission to learn how to be us. Does that mean we are losing our uniqueness and, thus, a bit of ourselves in the process? Giving the computers the ability to tap into the things that make us human only makes them more human. Letting our words and language be dictated by computers only makes us more like everyone else. Once computers understand and comprehend language – or once we’ve all learned and begun to speak like computers – there will be one fewer thing separating computers from humans and humans from each other. In the course of seventy years, computers have gone from decrypting secret codes to understanding how words work together. Imagine what will happen in the next seventy years.

Works Cited

Garun, Natt. “How to Enable and Use Gmail's AI-Powered Smart Reply and Smart Compose Tools.” The Verge, 6 July 2020, https://www.theverge.com/21315189/gmail-ai-smart-reply-compose-tools-enable-turn-on-how-to.

Hogenboom, Melissa. “The Traits That Make Human Beings Unique.” BBC Future, BBC, 6 July 2015, https://www.bbc.com/future/article/20150706-the-small-list-of-things-that-make-humans-unique.

Lapowsky, Issie. “Google Autocomplete Still Has a Hitler Problem.” Wired, Condé Nast, 10 Feb. 2018, https://www.wired.com/story/google-autocomplete-vile-suggestions/.

Lewis-Kraus, Gideon. “The FASINATNG... Fascinating History of Autocorrect.” Wired, Condé Nast, 22 July 2014, https://www.wired.com/2014/07/history-of-autocorrect/.

Orwell, George. Nineteen Eighty-Four. 1949. PlanetEbook.com.

Ronan, Brigid. “Of Thumbs: A Predictive Text Essay.” Pleiades: Literature in Context, vol. 41, no. 1, 2020, pp. 133–139, https://doi.org/10.1353/plc.2020.0223.

Seabrook, John. “The Next Word.” The New Yorker, 14 Oct. 2019, https://www.newyorker.com/magazine/2019/10/14/can-a-machine-learn-to-write-for-the-new-yorker.

Vonnegut, Kurt. “Harrison Bergeron.” Mercury Press, 1961.
