The hypothesis of linguistic relativity, popularly known as the Sapir-Whorf hypothesis, proposes that the language one speaks affects the way one thinks. The exact degree of influence language has over thought has been debated among linguists over the last century, but the general trend in recent years has run against the hypothesis. Today we examine programming languages and ask whether the idea of linguistic relativity extends into the realm of computing.
The strong form of the Sapir-Whorf hypothesis argues that an individual’s experiences and perception of the world are shaped by the language they speak. An example comes from French, which has two different second-person pronouns: the informal tu and the formal vous. In English, we never have to worry about causing offence simply by saying “you” incorrectly, whereas in French, and many other European languages, speakers must take note of social context before addressing one another. Of course, most of the time this isn’t an active process. Because choosing the correct pronoun happens so naturally among speakers of these languages, one might imagine that they simply view social interaction differently from a speaker of English, who can say “you” regardless of whether they are addressing the Queen or their best friend.
There are languages whose models of the world are much stranger, at least to the average English speaker. Proponents of linguistic determinism often point to the Hopi language of Arizona, whose verbs have no tense, and to the Pirahã language of the Amazon rainforest, whose counting system has only the numerals “one” and “two” (larger amounts are simply described as “many”). The Hopi’s non-linear notion of time and the Pirahã’s simplified notion of quantity are so different from ours that surely, the argument goes, they must interpret the world in some radically different way.
But in fact, the majority of linguists disagree, finding this view far too absolute. Consider how languages deal with colour. There is nothing inherent about the properties of light that should divide the continuous visible spectrum into discrete colours, yet all languages group certain arbitrary parts of the spectrum and give them names. The key here is that all languages do so slightly differently. If your language grouped the colours blue and green together under one word, how would you perceive a green field under a blue sky differently from a person whose language possessed two different colour words? Would the sky and the field simply meld into a single grue blob? The answer is a definite no. People are certainly able to distinguish differences in colour not mapped by their languages.
In the case of the Hopi and Pirahã, although their languages might have fascinatingly unique ways of describing time and quantity respectively, the reality is that their interactions with the world around them don’t differ a whole lot from yours or mine.
So linguists are hesitant to declare that thought processes can be drastically altered by learning a new language. In software engineering and computer science, the view regarding programming languages could not be more different. Programmers debate endlessly about which language is best for a given purpose. Universities carefully select the languages of their courses to suit the concepts they wish to introduce. Suffice it to say, the idea that the programming language one uses affects the way one solves a problem is widely held.
“A language that doesn’t affect the way you think about programming is not worth knowing”
— Alan Perlis
Programming languages can be grouped into several paradigms, the most common of which are imperative (C, FORTRAN), object-oriented (Java, Ruby), and functional (Lisp, Haskell). Programmers who employ these languages and styles often write programs in markedly different ways. An imperative programmer uses loops and state changes to build a step-by-step procedure. An object-oriented programmer models the problem as interacting classes and objects. A functional programmer composes pure functions that transform data without mutating it.
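To make the contrast concrete, here is one small task — summing the squares of the even numbers in a list — sketched in each of the three styles. The example uses Python, which supports all three, and the class name `EvenSquareSummer` is invented purely for illustration:

```python
from functools import reduce

data = [1, 2, 3, 4, 5, 6]

# Imperative: a loop and a mutable accumulator describe *how* to compute.
total = 0
for n in data:
    if n % 2 == 0:
        total += n * n
print(total)  # 56

# Object-oriented: the problem is modelled as an object with behaviour.
class EvenSquareSummer:
    def __init__(self, numbers):
        self.numbers = numbers

    def sum(self):
        return sum(n * n for n in self.numbers if n % 2 == 0)

print(EvenSquareSummer(data).sum())  # 56

# Functional: pure functions composed together, no mutation anywhere.
result = reduce(lambda acc, n: acc + n * n,
                filter(lambda n: n % 2 == 0, data),
                0)
print(result)  # 56
```

All three produce the same answer, but each version foregrounds a different mental model: a sequence of steps, a collaboration of objects, or a pipeline of transformations.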
This contrast between natural and programming languages seems rather surprising. A Japanese person, once she has learned 10,000 Swedish words and the rules that govern how they are put together, can then proceed to describe goings-on in Swedish much as she always has. She can read books in Swedish and enjoy them as if they were written in Japanese. But a C programmer, even after learning all the syntax rules of a language like Lisp, might still have significant difficulty following the logic of a Lisp program.
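The gap is less about syntax than about the shape of the reasoning. As an illustrative sketch (written in Python rather than C or Lisp, so both versions share one language), here is the same function expressed the way each programmer might naturally think of it:

```python
def factorial_iterative(n):
    # C style: a loop mutating an accumulator, step by step.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_recursive(n):
    # Lisp style: recursion with no mutation; the definition
    # mirrors the mathematical structure of the problem itself.
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

print(factorial_iterative(5), factorial_recursive(5))  # 120 120
```

Both are short and both are correct, yet a programmer fluent in only one style must genuinely re-learn how to read the other, in a way that has no parallel for a Japanese speaker reading Swedish.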
Natural vs. Artificial
The deciding factor may be the special place language occupies in the brain. People simply do not learn to speak the same way they learn other skills: language is a tool so innate to our species that it is as hard-wired into the human brain as walking upright. Children, after learning to crawl and then walk, acquire their native language at a remarkable rate.
On the other hand, programming languages are learned later in life, usually for a specific purpose. A good analogy might be music. While all cultures have a musical heritage of some sort, music does not occupy quite the same level in the brain as natural language, so the path people take to learning music may profoundly shape the way they view the art. A child raised learning Beethoven’s concertos on the piano will perceive music differently from a rock musician raised on the electric guitar. While natural languages are different ways of expressing the same human condition, different musical instruments are different ways of creating different kinds of music.
“It is not only the violin that shapes the violinist; we are all shaped by the tools we train ourselves to use, and in this respect programming languages have a devious influence: they shape our thinking habits.”
— Edsger W. Dijkstra
The conclusion that follows, then, is that computer languages are not governed by the same laws as natural language. The fact that both have a similar form (a lexicon of allowed words and a set of syntactic rules that determine meaning) distracts from the fact that one is innately human and the other is constructed to represent computational ideas. It is conceivable that natural language is as universal as it is innate; after all, humankind has had language for something on the order of a hundred thousand years. Until our species evolves to speak in terms of procedures and algorithms, programming languages will continue to be actively learned rather than passively acquired, and they will continue to define their users as much as the torch defines the welder and the brush defines the painter.
References
The World Color Survey Database, Paul Kay and Terry Regier
Is There a Linguistic Relativity Principle?, Helmut Gipper
Cultural Constraints on Grammar and Cognition in Pirahã, Daniel Everett
To the members of the Budget Council, Edsger W. Dijkstra