On Cultural Evolution: Building Competence Without Comprehension
Reasons for actions need not be represented in the mind of an agent in order for his or her actions to be warranted.
But how can that be? Shouldn't all our behavior, our every act, be the result of following reasons, arguments, and evidence in order to be justified?
Even if the reasons are not (consciously) present in the agent’s mind during the act itself, it could be argued that the agent must at least be able to subsequently reconstruct and articulate them in order for his or her actions to be rational.
But if we think about it a bit further, we know that’s not the case.
There are numerous examples that show that it is possible to obtain “competence without comprehension”, to borrow a phrase from philosopher and cognitive scientist Daniel Dennett. This may sound unintuitive, but intuition is not always a reliable guide to truth.
Dennett often talks about two “strange inversions of reasoning”: Alan Turing showed us that it is possible to design machines that perform arithmetic without understanding what arithmetic is, and Charles Darwin showed us that it is possible to build intelligent machines without knowing how to build them. Evolution by natural selection is a mindless and "blind" process that has been building them on Earth for millions of years, by trial and error: through the gradual accumulation of traits that contribute to survival and reproduction and the rejection of traits that do not.
If you look at household appliances around you, for how many of them can you state the causal mechanisms that enable their use? Do you act irrationally if you use these devices even though you are not familiar with the principles behind their operation?
To take a few more examples, suppose something like acupuncture works. This alone would not prove that the theory advertised as its explanation is correct—the theory of "life energy" (the Chinese concept of qi). It is possible that people have stumbled on certain solutions that work without knowing why they work, and without having the correct causal model. In a similar vein, the way a polygraph can be useful is not necessarily by detecting deception but rather by taking advantage of the fact that people believe that’s what polygraphs do.
We have “competence without comprehension” on at least two levels here. First, the process that builds cultural competence can itself be mindless and blind—there does not have to be a “mind” that manages the construction of either biological or cultural competences. Cultural evolution theorists such as Robert Boyd, Peter Richerson, and Joseph Henrich argue that culture evolves through a process similar to biological evolution. As Henrich says:
…cultural evolution is often much smarter than we are. Operating over generations as individuals unconsciously attend to and learn from more successful, prestigious, and healthier members of their communities, this evolutionary process generates cultural adaptations. Though these complex repertoires appear well designed to meet local challenges, they are not primarily the products of individuals applying causal models, rational thinking, or cost-benefit analyses. Often, most or all of the people skilled in deploying such adaptive practices do not understand how or why they work, or even that they “do” anything at all. Such complex adaptations can emerge precisely because natural selection has favored individuals who often place their faith in cultural inheritance—in the accumulated wisdom implicit in the practices and beliefs derived from their forbearers—over their own intuitions and personal experiences.
"Competence without comprehension" is also present in the sense that users of products of this process—whether it’s a food preparation technique, hunting tool, or decision heuristic —do not have to understand the real purpose or rationale of these “products” for them to perform the function they do.
At this point, it is hard not to think of conservatism, which holds that human traditions embody (“conserve”) the knowledge of previous generations and that it is therefore reasonable to follow them even if one does not understand their meaning. The conservative attitude toward social change can be summarized by the so-called Chesterton's fence: the fact that a reformer does not see the function or meaning of a custom, rule, or institution is not a sufficient reason to abolish that custom, rule, or institution. It may have a rationale that no one now knows, and if it has stood the test of time, the presumption should be that it exists for a (good) reason.
Epistemic and instrumental rationality
The dispute between the reformer and the defender of Chesterton’s fence can perhaps be framed as a dispute between epistemic and instrumental rationality, two fundamental ways of evaluating beliefs and actions as "rational" or "irrational."
Epistemic rationality aims to attain truth and knowledge and, therefore, evaluates things in relation to norms that are conducive to true beliefs: evidential strength, logical consistency, coherence, parsimony, and the like. A paradigmatic example of epistemic rationality would be science.
On the other hand, instrumental rationality is about choosing the right means to achieve goals, whatever the goals may be. What is epistemically rational does not have to be instrumentally rational and vice versa. Moreover, people show reduced epistemic rationality when they have no incentive to exhibit it. This insight is articulated in a well-known quote by Upton Sinclair: "It is difficult to make a man understand something when his salary depends upon his not understanding it."1
What is a virtue within one type of rationality may be a flaw within the other. Being skeptical is a virtue from an epistemic perspective because it leads us to question arguments and reasons and reject those that do not lead to true beliefs.
However, there are situations in which the lack of skepticism "pays off": for example, people show selective gullibility at certain stages of life, and this gullibility is adaptive. Children have less reason to be skeptical, as they are typically surrounded by agents who care about their well-being—parents and relatives. Just imagine what the language learning process would look like for a radical skeptic.
Growing up and joining the world of adults is accompanied by a reduction in gullibility, since in that world fewer of the agents around you care about your well-being and interests.
As evolutionary psychologist Robert Kurzban points out referencing anthropologist Robert Boyd, the uniqueness of the human species is manifested in the gradual accumulation of information through social transmission, but that transmission crucially depends on gullibility:
Knowledge about the complex task of making kayaks, for example, is stored in people’s heads and subsequently transmitted to other people’s, enabling them to acquire the skills cheaply. This simultaneously eliminates the need for independent discovery and allows the accretion of new information. A key aspect of this trick… is that social learners need to be gullible, willing to adopt the ideas and practices of others credulously. Requiring justification for representations inferred from social transmission can undermine the benefits conferred by social learning.2
In other words, the lack of skepticism, while not an attitude that typically leads to true beliefs, can ensure a high-fidelity transmission of useful cultural norms that would be more difficult to replicate with agents (transmitters) who constantly doubt.
In his book The Secret of Our Success, Harvard anthropologist Joseph Henrich cites many examples that show just that. For instance, traditional cultures develop rituals and procedures, sometimes highly elaborate and complex ones, that detoxify otherwise toxic plants in order to make them edible. If cassava is eaten untreated, it causes both acute and chronic cyanide poisoning. The Tucanoans of the Colombian Amazon use a multi-step detoxification technique that lasts for several days. If asked why they do what they do, they will not be able to give the correct answer (cyanide removal); instead, they will give answers such as “this is how we do it” or “this is how our ancestors did it.” From the perspective of epistemic rationality, such an answer may sound unacceptable: a “fallacy” in reasoning, an “argument from authority” or an “argument from tradition.” But Henrich urges us to imagine what would happen if a member of the tribe decided to drop some seemingly unnecessary step from cassava processing—in other words, what would happen if the principle expressed in Chesterton's fence were violated:
She might critically examine the procedure handed down to her from earlier generations and conclude that the goal of the procedure is to remove the bitter taste. She might then experiment with alternative procedures by dropping some of the more labor-intensive or time-consuming steps. She’d find that with a shorter and much less labor-intensive process, she could remove the bitter taste. Adopting this easier protocol, she would have more time for other activities, like caring for her children. Of course, years or decades later her family would begin to develop the symptoms of chronic cyanide poisoning. Thus, the unwillingness of this mother to take on faith the practices handed down to her from earlier generations would result in sickness and early death for members of her family. Individual learning does not pay here, and intuitions are misleading. The problem is that the steps in this procedure are causally opaque—an individual cannot readily infer their functions, interrelationships, or importance.
Alongside skepticism and criticism, consistency is also considered a virtue from the perspective of epistemic rationality. However, there is no guarantee that consistency of beliefs or attitudes will always—or even most of the time—be conducive to non-epistemic goals. Being consistently motivated by certain political or moral principles does not seem particularly adaptive3: we should rather expect values and preferences to be selective and “dynamically inconsistent,” because the values and preferences that allow an individual (or a group) to obtain resources or power are not the same ones that allow them to retain them. An example is people shifting their stance on free speech depending on whether their group is in power: if you’re out of power, endorse free speech; if you're in power, endorse restraints on it. In that sense, evidence suggests that people strategically adapt and change principles when it is in their interest to do so: instead of guiding our actions, principles often serve to justify actions that exist independently of those principles.
What are beliefs—for?
Intellectuals seem prone to assuming that representation of reality is the only function of beliefs. After all, this is how they intend to use their beliefs—in order to build a “worldview”.
But representational intuitions are also revealed in the metaphors all of us use: we often talk of beliefs according to the model of visual perception. And so our general beliefs about the world constitute a “worldview” whereas a particular belief is called a “view” or a “perspective”.
But consider the analogy with language: we do not use language only to represent reality—we make promises, we congratulate, give orders, make apologies, issue warnings and so on. Instead of using language merely as a mirror to reflect reality, we use it as a tool to modify reality in a certain direction. The same goes for beliefs.
The epistemic criterion for evaluating beliefs asks whether beliefs are true, evidence-based, coherent, or consistent. The instrumental criterion asks what these beliefs do, i.e. what are the consequences of adopting a particular belief in practice?
People often hold epistemically irrational beliefs because doing so is instrumentally rational. A belief may be epistemically irrational, but the agent may form it because it is, for example, a way of signaling something about himself, his characteristics or values—many of our beliefs about “hot” social issues could be construed as signals of group commitment.
Take the example used by psychologist Dan Kahan in the context of climate change:
What an ordinary member of the public thinks about climate change, for example, has no impact on the climate. Nor does anything that she does as a consumer or a voter; her individual impact is too small to make a difference. Accordingly, when she is acting in one of these capacities, any mistake she makes about the best available scientific evidence will have zero impact on her or anyone she cares about.
But given what positions on climate change have now come to signify about one’s group allegiances, adopting the “wrong” position in interactions with her peers could rupture bonds on which she depends heavily for emotional and material well-being. Under these pathological conditions, she will predictably use her reasoning not to discern the truth but to form and persist in beliefs characteristic of her group, a tendency known as “identity-protective cognition.”
By showing that a belief is false, or that it lacks evidence, we have not eliminated the signaling power of that belief, which explains why people are resistant to evidence that contradicts their beliefs. This “resistance” is a flaw from the epistemic perspective, but it can be instrumentally justified. Changing your mind can be hard as it can lead to a change of tribe. It's not just about the "mind".
Suppose you want to convince someone that you have unlimited trust in that person. One way to do that could be by accepting whatever that person tells you—even the obvious lies and nonsense. Suspension of disbelief in this sense could be a way of signaling loyalty and devotion. This could explain why nonsensical and bizarre beliefs often arise in sects, various religious or other groups. These beliefs “do” a certain job; namely, providing evidence of one’s commitment to the group and thus discriminating between different candidates for group membership.
Beliefs can be true or false, weakly or strongly supported by evidence, consistent or inconsistent with other beliefs, but they can also indicate the conditions under which they typically arise. If a belief typically occurs in people with certain characteristics (personality traits, values, preferences, group memberships), adopting and retaining that belief could be a way of signaling those characteristics. The way we form beliefs is not independent of the consequences of acquiring and holding them.
There is a tendency among rationalists to view “beliefs” as mental objects whose exclusive function is to represent reality, from which it follows that beliefs should be evaluated solely according to their veracity and evidential basis. But it could also be argued that “beliefs” are for doing, and that it therefore often makes more sense to ask what those beliefs do in practice.
To people who assume that epistemic rationality is the only kind of rationality, or the only kind worth having, large parts of social life will be impenetrable, and they may therefore demand that those parts of social life conform to the norms of epistemic rationality. Symptomatic of this tendency is an essay in the online magazine Aeon claiming that “belief without evidence is always morally wrong.” However, even beliefs that are not supported by evidence allow people to signal to each other their preferences, values, group affiliations, personal characteristics, and so on, thus performing the function of identifying optimal social partners. How large the collective benefit (or cost) of this process is remains difficult to estimate or quantify.
Signaling aspects of beliefs
Consider the public outrage that often arises when expressed beliefs are not accompanied by the cost required for those beliefs to count as credible signals.
For example, environmentalists like Greta Thunberg are often accused of hypocrisy because they travel around the world in vehicles that are major polluters, such as airplanes. The reply is that such accusations commit a reasoning fallacy, since the identity of the person making a given assertion (say, “traveling by airplane is morally wrong”), as well as what he or she does, is irrelevant to the question of the assertion’s veracity. An assertion is true if and only if what it asserts is true.
But, on the other hand, it is part of our daily routine to assess the reliability of information by assessing the reliability of its source. When someone conveys information of questionable veracity, we ask, “How do you know that?” Depending on how the source obtained the information, our confidence in its accuracy will vary.
In this sense, it is not surprising that people are prone to committing the fallacy known as “argumentum ad hominem,” i.e. that the perception of the characteristics of the assertion’s author affects the perception of the assertion’s veracity. What mediates between our acceptance of a particular claim and the person making that claim is the trust we place in the latter.
Why is it, then, that there is a lack of trust in cases where (alleged) hypocrisy causes so much outrage? When we want to convey to others something about ourselves—and that something is usually positive (e.g. caring for the environment, animals, or the poor), something that marks us as desirable social partners—this signaling must carry a certain cost in order to be credible. Anyone can say good things about themselves; as the saying goes, talk is cheap.
What distinguishes a trustworthy person from one who is not is that the former provides evidence of her “goodness,” and this evidence cannot come “cheap.” Otherwise, it would be available to anyone, which would bring us back to the initial problem of distinguishing the trustworthy from the untrustworthy, cooperative agents from social cheaters.
The ability to distinguish between agents on the basis of their quality as social partners is a huge adaptive problem for an ultra-social species such as humans. The negative reactions to environmentalists who travel by airplane stem from the fact that observers do not see the cost required to construe their beliefs as credible signals.
In this sense, many alleged biases or errors in reasoning show practical rationality. “After this, therefore because of this” can be a useful heuristic if the conclusion “my stomach hurts because of the berries” is derived from “my stomach started to hurt after I ate berries” even though, strictly speaking, it is an informal fallacy.
In the psychology of decision-making, this conflict is evident in the debate between behavioral economists such as Daniel Kahneman and supporters of psychologist Gerd Gigerenzer. A behavioral economist of the first sort may find the business world “irrational” given the overconfidence of the agents involved: an analyst who sells his stock market forecasts will generally err—if he could reliably predict the market, he would not be working as a forecaster. But his behavior is ecologically rational in the sense that displaying overconfidence is the only way to convince others to use his forecasting services—in short, good PR.
Cultural evolution and the scientific method
Does it follow from all of the above that we should “surrender” to the process of cultural evolution? No, for at least two reasons. First, cultural evolution, just like biological evolution, is not a perfect mechanism that results in optimal solutions. The spine is a poor solution to the problem of supporting load in upright, bipedal beings such as humans, and for many people the cost is agonizing back pain. The retina of the eye is set up to leave us with blind spots, and there are many other cases of what may be called “unintelligent design”.4
Similarly, there are cultural institutions which, although passed down through generations, are not useful to the group. Examples include the cargo cults found among some Pacific island tribes, whose members believe that copying runways and other practices of technologically advanced cultures will bring material wealth (“cargo”).
And second, in a rapidly changing environment, traditions can reflect “knowledge” of a world that no longer exists. Just as our preference for fatty and sweet food, well adapted to the environment in which it evolved, is mismatched to a modern environment rich in such food, or as the color and density of a bear's fur may become mismatched to new climate conditions, so inherited cultural norms that once responded well to local challenges can leave us equipped for a world that no longer exists.
In such cases, epistemic rationality emerges as a corrective.5 The development of the scientific method partially frees people from the hazards involved in learning by trial and error and imitation. That is an expensive way of learning, because “error” can mean that someone paid with their head. It is also expensive because it requires a long period of time for useful norms and institutions to be selected and accumulated.
The takeaway message, then, is not some kind of blind deference to cultural evolution and the collective wisdom implicit in rules and customs that have stood the test of time, but rather the recognition of that process as a significant force largely responsible for where we are today.
(First published in Croatian in March 2020.)
1. Epistemic rationality can be seen as a subset of instrumental rationality, so the opposition between epistemic and instrumental rationality could actually be construed as the opposition between epistemic and non-epistemic goals, i.e. goals concerning truth and knowledge and goals not related to truth and knowledge.
2. Cf. Boyd, R., “Cultural adaptation and maladaptation: of kayaks and commissars,” and Kurzban, R., “Representational epidemiology: skepticism and gullibility,” in Gangestad, S. and Simpson, J. (eds.), The Evolution of Mind. The Guilford Press.
3. One exception is the context of partner choice, where people have an incentive to be consistent, or at least to appear so, because other people prefer reliable social partners.
4. Cf. Marcus, G. (2009). Kluge: The Haphazard Evolution of the Human Mind. Mariner Books.
5. It should be noted, though, that using this corrective presupposes that we have enough scientific knowledge to intervene in very complex systems, such as social ones, in a way that does not create more harm than good.