When she writes an email, our journalist Apolline Guillot uses an editorial assistant that analyses her text and corrects its overall tone. Since she not only relies on this tool but has even picked up certain reflexes from it, has she become a cyborg of sorts? Is our hybridisation with the machine inevitable, and should it worry us? Drawing on behaviourist and cognitive philosophy, she comes out as a transhumanist.
When was the last time you had someone proofread an email before sending it? Not so long ago, it was my parents, flatmates, or colleagues who played the external observer, responsible for telling me whether the message was getting across. In recent years, I’ve stopped asking my loved ones and turned to the trendy “writing assistant.” Several are on the market, the best known being Grammarly, ProWritingAid, and Jasper. These aren’t just simple correction algorithms: they point out overused words, suggest more concise phrasing, and track repetitions, plagiarism, and circumlocutions. Grammarly even offers to analyse the “intention” of texts using ten different “tones.” Personally, I discovered that my emails had a passive-aggressive, carefree, or arrogant tone when I just wanted to sound conciliatory, enthusiastic, or confident. As fascinated as I am by the algorithm’s accuracy, I also feel it intruding on my privacy.
From behaviourism to algorithms
Grammarly and its counterparts represent the triumph of an intellectual movement that emerged in the early twentieth century: behaviourism. Coined by the American psychologist John Watson, the term refers to a whole school of psychology that studies the mind through how it manifests itself (in judgments, actions, emotions) rather than through subjective mental states. From this standpoint, introspection – previously considered the privileged tool for self-knowledge – is as useless to psychology as it is to chemistry or physics. For the proponents of behaviourism, our thoughts serve functions that allow us to interpret external stimuli and respond to them by modifying our behaviour and environment.
‘Recommendation algorithms transform our non-conscious behaviours into information’
The strength of recommendation algorithms is that they transform non-conscious behaviours into information. No exclamation mark is meaningless, no adjective is chosen at random. We give the algorithm a string of verbal cues, and it analyses them to tell us “this is what you really look like”.
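To make the idea concrete, here is a deliberately crude sketch in Python of how verbal cues might be counted and turned into a “tone” profile. It is a toy illustration only: real assistants such as Grammarly rely on far more sophisticated language models, and the cue lists and function names below are invented for the example.

```python
# Toy illustration (not how any real writing assistant works):
# turning small, unreflective verbal choices into a "tone" score.
import re
from collections import Counter

# Hypothetical cue lists, chosen purely for illustration.
TONE_CUES = {
    "passive-aggressive": ["as i already said", "per my last email", "obviously"],
    "conciliatory": ["happy to", "no worries", "whatever works"],
    "confident": ["i will", "let's", "i recommend"],
}

def tone_scores(text: str) -> Counter:
    """Count how many cues of each tone appear in the text."""
    lowered = text.lower()
    scores = Counter()
    for tone, cues in TONE_CUES.items():
        scores[tone] = sum(lowered.count(cue) for cue in cues)
    # Exclamation marks counted as an (arbitrary) enthusiasm signal.
    scores["enthusiastic"] = len(re.findall(r"!", text))
    return scores

if __name__ == "__main__":
    email = "As I already said, I'm happy to help! Obviously, whatever works."
    print(tone_scores(email))
```

Even this crude counting shows the principle at work: the exclamation mark you typed without thinking, the hedge you didn’t notice, become data points in a profile of “what you really look like.”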
There is something liberating about regaining control over our “verbal behaviour”, as behaviourists call language. It doesn’t matter that you’re internally petrified at the idea of writing to your boss to ask for a raise! The worth of your language can only be measured by its effects on others – in this case, your algorithmically boosted email will come across as humble and confident, and will reveal nothing about the state you were in when you wrote it. And sometimes that’s all it takes! The certainty of having followed the protocol and sent a confident email to your superior will undoubtedly be a source of real confidence when you negotiate the pay rise.
Tomorrow, we will no doubt be addicted to intelligent magic wands and tools to check our biases. Before we start feeling outraged, let’s ask ourselves: is it really wrong to delegate some of our daily tasks to machines?
What if our mind were outside of us?
Not so long ago, there wer…