First-Hand and Inbuilt Knowledge vs. Epistemic Hand-Me-Downs
“[T]o us mind must remain forever a realm of its own, which we can know only through directly experiencing it, but which we shall never be able to fully explain or to ‘reduce’ to something else.”
Hayek, F. A. The Sensory Order. Chicago: The University of Chicago Press, 1976, p. 194
How do we know reality? How do we know that we aren’t, say, in ‘The Matrix’? The French intellectual Jean Baudrillard claimed to have had difficulty differentiating between cable news programming, video game simulations, and the military-media narrative presented as the Persian Gulf War (Baudrillard, Jean. The Gulf War Did Not Take Place. Bloomington: Indiana University Press, 1995). Baudrillard’s claim may seem a little over-the-top, and we chuckle, but in this era of “fake news” it’s not totally off the mark. This is one of the basic epistemological questions that philosophers down the ages have attempted to answer, and because we don’t literally have the ‘red pill’ that will take us out of the Matrix, we can only say with Descartes, “I think, therefore I am.”
Fortunately, the very act of ‘doubting existence’ presupposes that there is an existence to doubt, and given that existence exists, and us in it, we must do our very best to make sense of it. Our senses provide us with multiple access points to reality, but not a complete picture.
Assuming that we are not all just ‘brains-in-vats’ stuck somewhere in the basement laboratory of Wilder Penfield, it seems reasonable to suggest that our sense organs and nervous system gather and organize data so that we may flourish. This evolutionary feed-forward and feedback system shapes the world we know via first-hand sensory experience. For example, you know (as well as you know anything) that you are indeed reading these words right now, you know where you are right now, you know what your clothes feel like against your skin, and so on. Such first-hand experience is as certain as things can get.
However, first-hand experience is only one part of the story of how we know. Modern research has shown that we are, in fact, born with a priori knowledge. Our minds are not, as was earlier believed, tabula rasa or ‘a blank slate’. Developmental psychology has shown that infants are born with the ability to recognize continuity, causality, and form. In my own experience, I saw how my infant daughter instinctively ‘knew’ how to cry when she needed something; there was no learning curve involved. It was automatic, or “built-in”.
A more famous example of this ‘baked-in’ knowledge comes from the longitudinal identical twin study done at the University of Minnesota, which suggests that many of our ‘freely’ chosen biases are, in fact, already ‘built-in.’ In the study, the researchers followed identical twins separated at birth over several years. Because identical twins are, in effect, genetic clones (that is to say, they share precisely the same genetic code), twins separated at birth present an outstanding opportunity to gauge the impact of nature vs. nurture, since the nature side is controlled for.
The findings were truly amazing. In Entwined Lives, Nancy Segal describes how, without foreknowledge of each other, these twins named their pets and children the same names, married the same number of times to people with the same name, drove the same car, vacationed at the same location, had the same preferences in food, beverages, and cigarettes, had the same occupations, and so on (Segal, Nancy. Entwined Lives: Twins and What They Tell Us About Human Behavior. New York: Plume Books, 2000). These are truly uncanny circumstances and, given the nature of joint probabilities, far too improbable to be fairly called ‘coincidences.’ The apparent choices of these identical twins seemed driven more by nature than nurture, calling the bounds of free will into question. While this doesn’t bode well for the advocates of free will, the Minnesota Twin Study does add another side to our ‘Archimedes’ polygon’[1] of knowledge, bringing us even closer to understanding the realities of our own thought and decision processes.
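To see why so many simultaneous matches defy the label ‘coincidence,’ consider a quick sketch in Python. The per-match probabilities below are invented purely for illustration (they are not figures from the Minnesota study); the point is only that independent probabilities multiply, so even modest individual odds compound into something astronomically unlikely.

```python
# Illustrative only: assumed probabilities for each individual match,
# not data from the Minnesota study.
match_probs = {
    "same pet name": 0.01,
    "same spouse name": 0.02,
    "same car model": 0.05,
    "same vacation spot": 0.02,
    "same cigarette brand": 0.10,
}

joint = 1.0
for trait, p in match_probs.items():
    joint *= p  # independent events: joint probability is the product

print(f"Joint probability of all five matches: {joint:.0e}")  # about 2e-08
```

At these assumed odds, all five matches would co-occur by chance about twice in a hundred million pairings, which is why ‘coincidence’ strains credulity.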
To take our understanding of inbuilt knowledge further, neuroscientist and consciousness researcher Benjamin Libet designed clever experiments that timed participants’ conscious decisions to act against the brain activity associated with the physical initiation of the behavior, and then compared the two. The results suggest that just before a participant made a conscious decision to act, the brain activity initiating the action was already under way. The implication is that while we may not consciously be the prime movers of our own actions, there may be some ability to stop or steer these unconsciously initiated actions once they are in motion (Libet, Benjamin. Mind Time: The Temporal Factor in Consciousness. Cambridge, MA: Harvard University Press, 2004).
The results of both the Minnesota Twin Study and Benjamin Libet’s research are thorny anomalies because they imply that we may not have as much “free will” as we thought we had (it is important to note that many of the Minnesota study’s participants parted their hair differently and made many other notably different life choices). It seems that somewhere between our genetics and their interaction with our environment lies some sort of Bounded Choice Set that we call “free will”. It is like being in a restaurant with a predetermined menu to choose from. The steering mechanism suggested by Libet’s study, the one that can stop or redirect these unconsciously initiated actions, may be what we call thought, or the ability to reason, which aims to help us make the best choices given what we know a priori and a posteriori first-hand.
So then, how do we use reason to steer us toward those signals that exhibit the highest fidelity to reality?
Humans are no longer dependent for information upon direct experience alone. Instead of exploring the false trails others explored and repeating their errors, they can go on from where others left off. Language makes progress possible.
Hayakawa cited in Morville, Peter. Ambient Findability: What We Find Changes Who We Become. O’Reilly Media, Inc., 2005
Myths, legends, folklore, and various tropes have brought us knowledge acquired by our very remote ancestors. Add to that the immense amount of scientific material humankind has produced, and we have a massive repertoire of knowledge available to us: the first-hand experience of others—the ancients, as well as contemporaries—made available across time through words.
Tacit and Explicit Knowledge
When we visit a new restaurant, we do not have first-hand knowledge of the food there. We rely on the menu—the menu-maker’s first-hand knowledge made explicit. Chemist/philosopher Michael Polanyi calls first-hand knowledge ‘tacit knowledge,’ and epistemological hand-me-downs (the menu) he calls ‘explicit knowledge’ (Polanyi, Michael. Personal Knowledge. Routledge & Kegan Paul Ltd, 1962).
Explicit knowledge is twice removed from reality. It’s reality as understood by the percipient and put into words. Going back to our menu metaphor, we do not consume the menu or the descriptions on the menu. The menu is the menu-maker’s attempt to describe in words the food available at the restaurant. Our satisfaction will depend to a great extent on how our first-hand experience of the food matches the expectations set by the descriptions on the menu. Our expectations are mini-Kuhnian paradigms of sorts (Kuhn 1977). When our expectations are met, we experience satisfaction and proceed unaffected. However, when there is an anomaly (that is, a deviation from our expectation), we experience either disappointment or pleasure and (hopefully) revise our expectations. Finance attempts to quantify this phenomenon of deviation around expectation using the statistical concept of standard deviation (a measure that is most meaningful when data are normally distributed, forming a bell curve). Unfortunately, as Benoit Mandelbrot has shown, the bell curve is seldom an appropriate metaphor for how chance really works (Taleb 2007). The tails are much fatter. Randomness and unpredictable outliers are out there, waiting to sucker punch you.
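A small simulation makes the contrast concrete. The sketch below is my own illustration, not a model from Mandelbrot or Taleb: it compares how often extreme ‘five-sigma’ events occur under a bell-curve model versus a fat-tailed Student’s t distribution rescaled to the same variance; the choice of distribution, its three degrees of freedom, and the threshold are all assumptions made for demonstration.

```python
# Compare the frequency of extreme events under a normal (bell-curve)
# model and a fat-tailed Student's t model with the same variance.
import math
import random

random.seed(42)
N = 200_000
THRESHOLD = 5.0  # a 'five-sigma' move

def student_t_unit_variance(df):
    """Draw from Student's t with df > 2, rescaled to unit variance."""
    z = random.gauss(0, 1)
    v = sum(random.gauss(0, 1) ** 2 for _ in range(df))  # chi-squared(df)
    t = z / math.sqrt(v / df)
    return t * math.sqrt((df - 2) / df)  # Var(t_df) = df / (df - 2)

normal_hits = sum(abs(random.gauss(0, 1)) > THRESHOLD for _ in range(N))
fat_hits = sum(abs(student_t_unit_variance(3)) > THRESHOLD for _ in range(N))

# Under the bell curve, a five-sigma event is expected roughly once per
# 1.7 million draws; under the fat-tailed model it is commonplace.
print(f"Normal model:     {normal_hits} extreme events in {N:,} draws")
print(f"Fat-tailed model: {fat_hits} extreme events in {N:,} draws")
```

In a typical run the bell-curve model records zero extreme events while the fat-tailed model records several hundred; that gap is the sucker punch.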
Both tacit and explicit knowledge can vary from person to person. The same curry could taste hot to one person and mild to another, and its description, and the reader’s understanding of that description, will vary accordingly. However, even the best and most honest attempt to describe and explain reality in ordinary language (or even in the terms of science and mathematics) is still only an attempt. It’s like adding sides to Archimedes’ polygon (see Footnote 1) to approximate a circle. Language, in short, is a rather less than perfect tool for understanding and communicating about reality, but it is one of the best ones we have.
That said, humans make mistakes, can fabricate, and are sometimes prone to exaggeration.
If we would speak of things as they are, we must allow that all the art of rhetoric, besides order and clearness; all the artificial and figurative application of words eloquence hath invented, are for nothing else but to insinuate wrong ideas, move the passions, and thereby mislead the judgment; and so indeed are perfect cheats; and therefore, however laudable or allowable oratory may render them in harangues and popular addresses, they are certainly, in all discourses that pretend to inform or instruct, wholly to be avoided; and where truth and knowledge are concerned, cannot but be thought a great fault, either of the language or person that makes use of them. (Emphasis added)
Locke, John. An Essay Concerning Human Understanding. Edited by Roger Woolhouse. New York: Penguin Books, 1997, Bk. III, p. 307
As the sum of explicit knowledge is the result of the epistemological efforts of many human beings, there is a very real risk that we may be seeing the world through lenses colored by many mistakes, fabrications, and exaggerations; errors that may have been institutionalized and accepted as ‘tradition’ without critical review. That is, we may be living by poorly calibrated analogies, and only anomalistics can anchor us closer to reality. When offered competing models of reality, it is often anomalies that determine the winner.
Language and Thought
What is the impact of language on thought and, therefore, knowledge? For Polanyi, it’s everything: the scientist’s language (‘vocabulary and structure’) constrains the questions that can be asked, and the questions asked determine the answers that can be discovered (Polanyi 1962). Linguists Sapir and Whorf are perhaps the most famous promoters of the idea that your language limits the extent to which you can think (or not think) about a particular subject, although Orwell’s Newspeak in 1984, where the control of language is equivalent to the control of thought, has arguably been more popularly influential.
How could you have a slogan like “freedom is slavery” when the concept of freedom has been abolished? The whole climate of thought will be different. In fact there will be no thought, as we understand it now. Orthodoxy means not thinking—not needing to think. Orthodoxy is unconsciousness.
Orwell, George. 1984. New York: Signet Classics, 1950
There are many critics of the Sapir-Whorf hypothesis, most notable among them Steven Pinker and Noam Chomsky. Although they emphasize that thought precedes language, they do not deny that thought and language are entwined. Cognitive linguists (like Lakoff), philosophers (like Daniel Dennett), and epistemic rhetoricians (like Deirdre McCloskey and Alan G. Gross) have raised tough arguments that demonstrate the metaphoric nature of language and emphasize the central role of discourse in the creation and validation of knowledge. Historian of science I. Bernard Cohen shares this reminder:
This kind of use of analogy illustrates what Alfred North Whitehead has called the ‘logic of discovery,’ the path to increasing knowledge, which Maxwell called the process of ‘exciting appropriate mathematical ideas.’ We may take note that in the advancement of all the sciences, in the way that one scientist makes use of the creation of an important new concept in the form of what seems to be a partial or imperfect replication of the original or even a dramatic recasting of the original…
In introducing the concept of ‘logic of discovery,’ Whitehead contrasted it with what he called the ‘logic of the discovered.’ In this case, an analogy is used once a discovery has been made. There are three primary uses of analogy under such circumstances. One is to help explain a difficult concept or principle… Thus, analogy can be used to make something understandable or understood.
A somewhat similar use of analogies is to validate a novel concept, to make it seem plausible… Thus, analogy can be used to support something, to make it seem reasonable, to make it acceptable and accepted.
Closely related to this use of analogy is one in which the likeness is presented as carrying with it a system of values. Because the utilization of analogy moves it further into the realm of rhetoric and because in this function the comparison may involve a single word or image, that is, a brief figure of speech, as well as an extended representation of similarities, analogy employed in this third way may perhaps be designated as a metaphor, even though a clear and strict distinction between analogy and metaphor is neither necessary nor useful. What is especially significant is the transfer of value systems…
Examples from within the fields of the strictly mathematical and physical and natural sciences may come to mind less readily because during the Scientific Revolution of the seventeenth century rhetoric fell into disfavor among thinkers and writers whose prose style was influenced by the ‘new science’ or the ‘new philosophy,’ as it was sometimes called. The advocates and practitioners of the ‘new philosophy’ held that rhetoric had no place in scientific discourse. Science was to be presented in simple language, with clear descriptions of the evidence of experiment and observation, followed by strict induction or deduction. Each step was to be set forth in language that was unadorned and clearly understood—without rhetorical flourishes to distract the reader from the evidence and the logic. This was one of the reasons for the great esteem given to mathematics, which seemed to be the most rhetoric-free form of discourse. Today, however, a number of historians and philosophers of science have come to recognize that the sciences have a rhetoric of their own, a set of conventions and assumptions and even rules of discourse which are not provable by experiment or observation or guaranteed by mathematics. (Emphasis added)
Cohen, I. Bernard. Science and the Founding Fathers. New York: W.W. Norton and Company, Inc., 1997, pp. 27-30
Knowing reality is an ongoing process, and central to the acquisition of new knowledge is anomalistics, or the study of the gaps between any particular model and reality. However, to better calibrate our models to fit reality, we must explore the tools of rhetoric and reasoning.
A Note on the Mechanics of Communication
Communication is the expression of our thoughts, and it is successful when our thoughts have reached the listener or reader and been interpreted as we intended. Claude Shannon (no relation to your author), the Father of Information Theory, along with his colleague Warren Weaver, put forth a model of communication that, by removing meaning from the transmission, treated communication as the movement of pure units of information, known as binary digits or ‘bits’, from one place to another.[2] Simply stated, Shannon’s model presents information as a discrete unit transmitted from a source, through a channel, to a receiver. By focusing on the encoding of transmitted information and the decoding of received information, Shannon’s mathematical model of communication made it possible to measure the effects of noise on the information being communicated. Shannon’s contributions changed the world irrevocably, making information (good, bad, or inaccurate) ubiquitous.
The original model had six basic components: the source, the transmitter, the channel, the receiver, the destination, and noise. For example, take our communication via this book. I am the source (with my message about anomaly), this book is the transmitter (it encodes my message into printed words), the wholesale market for books is the channel, the retail store is the receiver, and you, my noble reader, are the destination. Noise, in this example, could be any ripped, missing, or marked-up pages that interfere with your receiving the full message I’ve sent. In Metaphors We Live By, cognitive linguists George Lakoff and Mark Johnson reiterate Michael Reddy’s observation that this ‘conduit metaphor’ of communication, implicit in models like Shannon’s, is ubiquitous in everyday language. They describe it thus: “The speaker puts ideas (objects) into words (containers) and sends them (along a conduit) to a hearer who takes the idea/objects out of the word/containers” (Lakoff, George, and Mark Johnson. Metaphors We Live By. Chicago: University of Chicago Press, 1980, p. 10).
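For the programmatically inclined, the pipeline can be sketched in a few lines of Python. This is a minimal illustration, not anything from Shannon or Weaver: the message, the 8-bit character encoding, and the one-percent noise rate are all my own assumptions.

```python
# A toy version of Shannon's pipeline: source -> transmitter (encode) ->
# channel (with noise as random bit flips) -> receiver (decode) -> destination.
import random

random.seed(7)  # fixed seed so the 'noise' is reproducible

def encode(text):
    """Transmitter: encode the source's message as a list of bits."""
    return [int(b) for ch in text for b in format(ord(ch), "08b")]

def noisy_channel(bits, noise_rate):
    """Channel: each bit is flipped with probability noise_rate."""
    return [bit ^ 1 if random.random() < noise_rate else bit for bit in bits]

def decode(bits):
    """Receiver: reassemble 8-bit groups into characters for the destination."""
    return "".join(
        chr(int("".join(str(b) for b in bits[i:i + 8]), 2))
        for i in range(0, len(bits), 8)
    )

message = "anomaly"  # the source's message
received = decode(noisy_channel(encode(message), noise_rate=0.01))
print(received)  # usually intact; occasionally a character arrives corrupted
```

Run it enough times and the noise will eventually garble a character, which is exactly the phenomenon Shannon’s model lets us quantify.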
[1] As a way to calculate π, Archimedes used a polygon. As the number of sides of an n-sided polygon increases toward infinity, the polygon more and more closely approximates a perfect circle. However, no matter how many sides are added, the polygon is still only an approximation of a perfect circle, though one good enough for practical purposes.
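The footnote’s point is easy to verify with a short Python sketch (an anachronism, of course: Archimedes worked with geometric bounds, not trigonometric functions, and the side counts here are chosen for illustration):

```python
# The perimeter of a regular n-gon inscribed in a circle of diameter 1
# approaches pi as n grows, but never reaches it exactly.
import math

for n in (6, 12, 24, 48, 96, 6144):
    side = math.sin(math.pi / n)  # side length of the inscribed n-gon
    print(f"{n:>5}-gon perimeter: {n * side:.7f}")
# Archimedes stopped at 96 sides, bounding pi between 3 10/71 and 3 1/7.
```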
[2] Roman Jakobson, a brilliant Russian émigré, refined Shannon’s ‘conduit’ model by reintroducing the importance of human meaning. To bring meaning, reason, and motive back into the model, he added three more elements: context, message, and code. In the context of this book, the English language we’re using is the code, the message is the word/phrase being transmitted, and the context is the referent in reality.