to be published in Evolution and Cognition, 1997
PO, Free University of Brussels, Pleinlaan 2, B-1050 Brussels, Belgium
ABSTRACT. It is argued that the acceptance of knowledge in a community depends on several, approximately independent selection "criteria". The objective criteria are distinctiveness, invariance and controllability, the subjective ones are individual utility, coherence, simplicity and novelty, and the intersubjective ones are publicity, expressivity, formality, collective utility, conformity and authority. Science demarcates itself from other forms of knowledge by explicitly controlling for the objective criteria.
KEYWORDS: evolutionary epistemology, selection criteria, knowledge, science.
It is with great pleasure that I use this opportunity to comment on Donald T. Campbell's last paper (1997). I came into contact with Don's work at the beginning of my research career, in 1984, during a conference on evolutionary epistemology at the University of Ghent. Since then, his writings have been a constant source of inspiration. After we met, in 1990, we started to regularly exchange publications. Each time I received a bunch of his papers, I began reading them with much pleasure, because I knew that I would find every paragraph teeming with deep insights and surprising observations. We finally decided to collaborate, producing an ambitious review paper about the evolution of social systems (Heylighen & Campbell, 1995). We were planning to write more papers together, but that has been made impossible by his untimely death in 1996. I see the present paper as an opportunity to somehow continue my collaboration with Don, adding my insights to his in a collective publication.
As may have become obvious, there is virtually no disagreement between my philosophical position and that of Donald Campbell. The differences in approach have more to do with theoretical background and methods than with aims or convictions. While Campbell (1974) called his philosophy of knowledge "evolutionary epistemology", I would characterize mine as "evolutionary-cybernetic epistemology" (Heylighen, 1993). "Cybernetic" refers here to the broad domain of cybernetics and general systems theory (Ashby, 1956; von Bertalanffy, 1968), and its transdisciplinary study of organization, communication, control and modelling. This epistemology is part of the larger evolutionary-cybernetic philosophy which, together with others, I am trying to develop in the Principia Cybernetica Project (Joslyn, Heylighen and Turchin, 1993; http://cleamc11.vub.ac.be/). Compared with a purely evolutionary approach, a cybernetic epistemology puts more emphasis on the structure of cognitive systems, on the processes by which they are constructed, on the control they provide over the environment, and on the communication of knowledge. Such a cybernetic analysis, for example, allows the reinterpretation of Campbell's (1974) "nested hierarchy of vicarious selectors" as the result of a series of metasystem transitions, producing successive control levels (Heylighen, 1995).
The ideas of cybernetics inspired much of Campbell's work, as illustrated by his recurring references to the work of Ashby, his long-time support for the perceptual control approach of Powers (1973), and his enthusiastic endorsement of the Principia Cybernetica Project. However, I suspect it was a lack of expertise in the mathematical and computational models of cybernetics that kept him from using the "cybernetics" label more explicitly.
A large part of Campbell's (1997) paper, which provides the focus of this memorial issue, is devoted to a discussion of the different selectors which together determine the evolution of knowledge. The main thrust is that the referent, i.e. the external object which the knowledge is supposed to represent, only plays a relatively small part in the selection of a particular idea or belief. In spite of its ideology, scientific knowledge too is the product of multifarious selective forces, most of which have little to do with objective representation of the referent. Of the other co-selectors, Campbell pays special attention to the vehicle through which the knowledge is expressed and to the need to maintain the community which carries the knowledge. In addition to these primarily social selectors, he discusses the selectors of interests and historicity, which function on the individual level.
Whereas Campbell analyses these selectors structurally, that is, by the specific object or component responsible for the selection, I will here try to classify them functionally, that is, by the role they play in the evolution of knowledge. The class of selectors that promote the same type of characteristics can be said to determine a selection criterion. Implicit in Campbell's examples, we can find three superclasses: objective criteria (selection for fit to the outside object), subjective criteria (selection for assimilation by the individual subject) and intersubjective criteria (selection for sharing between subjects). These superclasses can be divided into more fine-grained subclasses. The resulting classification will allow us to highlight the differences between scientifically derived knowledge, which supposedly privileges the objective criteria, and other types of knowledge and belief, where subjective and intersubjective factors play a larger role.
However, as Campbell emphasizes, it is impossible to really separate the different selectors. All the different types of selectors will affect the evolution of knowledge, scientific or other. Therefore, the overall probability for a belief to be selected will be a kind of weighted sum of the degrees to which it fulfils each of the criteria. For example, an idea that scores high on the objective criteria and low on the subjective ones is less likely to survive selection than an idea that scores high on both counts. In this view, no single criterion can guarantee selection, or provide justification for a belief. We can only use the simple heuristic that the more criteria an idea satisfies, and the higher the degree of satisfaction, the "fitter" it is, and the more likely to win the competition with rival beliefs (Heylighen, 1993).
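This weighted-sum heuristic can be made concrete in a small sketch. The weights and scores below are purely illustrative assumptions, not values given in the text:

```python
# Toy sketch of the weighted-sum selection heuristic: an idea's overall
# "fitness" is a weighted sum of its scores on each class of criteria.
# The weights and scores are invented for illustration.

CRITERIA_WEIGHTS = {
    "objective": 0.5,        # distinctiveness, invariance, controllability
    "subjective": 0.3,       # utility, coherence, simplicity, novelty
    "intersubjective": 0.2,  # publicity, expressivity, formality, etc.
}

def fitness(scores):
    """Weighted sum of criterion scores, each score in [0, 1]."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

# An idea that scores high on both objective and subjective criteria is
# "fitter" than one that scores high on the objective criteria alone:
high_on_both = fitness({"objective": 0.9, "subjective": 0.8, "intersubjective": 0.7})
objective_only = fitness({"objective": 0.9, "subjective": 0.1, "intersubjective": 0.1})
assert high_on_both > objective_only
```

Note that no single term of the sum decides the outcome: a low score on one class of criteria can be compensated by high scores on the others.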
In such a view, there is in general no single "best" idea. An idea may score high on certain criteria, while another idea scores high on other criteria. Such ideas are in general incomparable. The one is likely to win the competition in certain contexts, but to lose in others. This is similar to the natural selection of organisms: sharks are neither more nor less fit than seaweed or shrimps. Each species is adapted to its particular niche within the larger shared environment. However, within the shark niche, some shark designs will be fitter than others. In both organisms and beliefs, "being fitter than" is a partial order, not an absolute one (Heylighen, 1997). Such a philosophy synthesizes the relativism of philosophers who emphasize the "incommensurability" of theories with the more traditional belief in the objectivity of scientific progress.
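The partial-order claim can be illustrated with a dominance check over criterion-score profiles; the profiles themselves are invented for the example:

```python
# "Fitter than" as a partial order: profile a dominates profile b only if a
# scores at least as high on every criterion and strictly higher on at least
# one. Many pairs of profiles are then simply incomparable.

def fitter_than(a, b):
    """Pareto dominance over tuples of criterion scores."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# (objective, subjective, intersubjective) scores for two hypothetical ideas:
idea_a = (0.9, 0.2, 0.3)  # strong on the objective criteria
idea_b = (0.3, 0.8, 0.6)  # strong on the subjective and intersubjective ones

# Neither is fitter than the other: like sharks and shrimps, they occupy
# different niches and are incomparable.
assert not fitter_than(idea_a, idea_b)
assert not fitter_than(idea_b, idea_a)

# Within one "niche", though, comparisons remain possible:
assert fitter_than((0.9, 0.3, 0.3), idea_a)
```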
Since, as Campbell (1997) reminds us, we have no direct access to the "Ding an Sich", we can only use indirect means to determine whether a belief corresponds to an objective reality. As the constructivist cyberneticians von Foerster (1981) and Maturana & Varela (1987) note, in the nervous system there is no fundamental distinction between a perception and a hallucination: both are merely patterns of neural activation. However, subjectively most people have no difficulty distinguishing dreams or fantasies from perceptions.
To find out whether a perception is real, you should determine whether it is caused by an external referent, or by an internal mechanism (e.g. imagination, or malfunctioning of the perceptual apparatus). According to attribution theory (Kelley, 1967), people attribute causes of perceived effects to those phenomena that covary with the effects. External phenomena will covary with their external causes, but not with changes that only affect internal, subjective variables. This leads to the following criteria for judging objectivity or "reality":
1. Invariance: a phenomenon should not disappear when the way of perceiving it is changed. The larger the domain over which it remains invariant, the more "real" it is (cf. Bonsack, 1977). Kelley (1967) proposes the following more specific types of invariance:
a. invariance over modalities: if the same phenomenon is perceived through different senses (e.g. sight and touch), points of view, or means of observation, it is more likely to objectively exist.
b. invariance over time: a perception that appears or disappears suddenly is unlikely to be caused by a stable referent.
c. invariance over persons: a perception on which different observers agree is more likely to be real than one that is only perceived by a single individual.
2. Distinctiveness: different referents produce different perceptions (Kelley, 1967; cf. Campbell, 1992). A perception that remains the same when the attention is directed elsewhere is likely to be produced by the perceptual system itself (e.g. a particle of dust in the eye). Moreover, "real" perceptions tend to be characterized by richness in contrast and detail (imagined or dreamt perceptions typically are coarse-grained and fuzzy) and to exhibit "Gestalt qualities", such as regularity, closure and simplicity, thus presenting a distinct, coherent pattern rather than an unstructured collection of impressions (Stadler & Kruse, 1990). Campbell (1966, 1997) too notes that detailed pattern increases the plausibility of percepts.
Extending this logic of covariation, I would like to add the criterion of controllability: a phenomenon that reacts differentially to the different actions performed on it is more likely to be real than one that changes randomly or not at all. This criterion underlies the method of preparation-detection, which characterizes scientific experimentation. Controllability, however, is to some degree dependent on the observing subject: although I am not able to influence the trajectory of a far-away plane, its pilot is. This leads us to the subjective criteria.
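The covariation logic behind these objectivity criteria can be caricatured in a few lines of code; the percepts and observation conditions below are invented for illustration:

```python
# A percept is attributed to an external referent when it stays invariant
# under changes of the internal conditions of observation (modality, time,
# observer) but reacts differentially to actions performed on it
# (controllability). The example records are invented for illustration.

def looks_real(observations, interventions):
    """observations: percepts under varied internal conditions of perception;
    interventions: {action: resulting percept} for varied external actions."""
    invariant = len(set(observations)) == 1              # invariance (1a-1c)
    controllable = len(set(interventions.values())) > 1  # distinct reactions
    return invariant and controllable

# A lamp: seen identically by eye, camera and a second observer, and it
# reacts to its switch.
lamp = looks_real(["lit", "lit", "lit"],
                  {"switch_off": "dark", "switch_on": "lit"})

# A floater in the eye: invisible to other observers, unmoved by actions.
floater = looks_real(["speck", "nothing", "nothing"],
                     {"wave_hand": "speck", "blink": "speck"})

assert lamp and not floater
```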
For beliefs to be accepted and retained by an individual, it is not sufficient that they correspond to distinct, invariant and controllable phenomena. A relativistic quantum field model of the beryllium atom may fulfil all objective criteria to be valid knowledge, yet very few people would ever assimilate, remember or pass on such knowledge. Therefore, from a selectionist point of view, the model is rather unsuccessful.
Most obvious among the subjective selection criteria is individual utility. People will only make the effort to learn and retain an idea that can help them to reach their goals. From a long-term evolutionary perspective, such goals and values derive from inclusive fitness. Organisms that assimilate knowledge which increases their fitness are more likely to survive and pass on that knowledge to their offspring. Assimilating useless knowledge, on the other hand, only burdens the subject.
Indeed, the capacity of a cognitive system is limited. Therefore, knowledge should be easy to learn. The most straightforward determinant of learning ease is simplicity: the more complex an idea (i.e. the more components and more connections between components it has, see Heylighen, 1997), the higher the burden on the cognitive system. Simplicity is listed as a subjective criterion, because it is relative to the concepts and associations which the subject already knows. For example, the model of a beryllium atom may seem simple for a physicist well-versed in atomic models, but hopelessly complex for a layman.
More generally, the ease with which a cognitive system assimilates new ideas depends on the support they get from ideas assimilated earlier (this is an example of Campbell's (1997) "historicity"). This requirement for ideas to "fit in" the existing cognitive system may be called coherence (Thagard, 1989). Coherence encompasses connection and consistency. Since learning is based on strengthening associations, ideas that do not connect to existing knowledge simply cannot be assimilated. The preference for consistency follows from the fact that a fit individual must be able to make clear-cut decisions. Mutually contradictory rules ("cognitive dissonance") will create a situation of confusion or hesitation, which is likely to diminish the chances for survival.
Complementary to the conservatism promoted by the coherence criterion is the criterion of novelty. New, unusual or unexpected ideas or perceptions tend to attract attention, and thus arouse the cognitive energy that facilitates their assimilation. This is another adaptation, which helps organisms to cope with unusual situations. It shows itself in the exploratory behavior of animals. The corresponding human emotion is curiosity.
Most of the beliefs a subject has were not individually constructed, but taken over from others. This process of diffusion plays an essential part in the selection of ideas. Only ideas that are transmitted frequently are likely to be assimilated frequently. Each time an idea is communicated, it replicates, i.e. is copied into another cognitive system. Thus, ideas can be modelled as replicators similar to genes: memes (cf. Heylighen, 1992). The conversion of an individual to a new belief is in a way similar to an infection, i.e. the passing on of a "cognitive virus".
The first criterion which will determine how often an idea is transmitted is the amount of propaganda or publicity, that is, the effort the subject carrying the idea invests in making it known to others. That motivation largely depends on the other criteria: you will be more inclined to spread an idea if it is simple, useful, novel, etc. However, some beliefs include their own motivation. This is most visible in religions, cults, fashions and ideologies, which often include explicit rules that believers should go and spread the word. This may be explained by "selfish meme" selection (Heylighen, 1992): selection at the level of the meme, which benefits the spread of the idea, but which is useless or even dangerous for the individual carrying the idea. Such ideas can be compared to cognitive parasites, which "hitch a ride" on a cognitive system without caring for the well-being of that system.
All memes, "selfish" or not, need a communication medium in order to be transmitted. Ideas that are easy to express in a particular language or medium will be propagated more easily. This is the criterion of expressivity. It depends on the medium: some ideas are easier to formulate in one language than in another. Thus, as Campbell (1997) notes, the medium will co-select the idea. For example, it is difficult to imagine the evolution of physical theories without the mathematical language in which they are formulated.
The expression of an idea in an intersubjective code or language does not yet guarantee its accurate transmission. All expressions are to some degree indexical: their meaning depends on the context. Different people are likely to interpret them differently, thus assimilating an idea different from the one that was expressed. However, some expressions are formulated in a less context-dependent way. The resulting lack of equivocation may be called formality. The more formally an idea is expressed, the better it will survive repeated transmissions. For example, ideas are more likely to be communicated accurately through logic and mathematics than through poetry or painting.
The group equivalent of usefulness may be called collective utility. Some forms of knowledge benefit the collective, while being useless for an isolated individual. Languages, traffic regulations, technical standards and moral codes are examples of cognitive entities that have value only for intersubjective purposes. Such collective ideas will be selected at the group level: groups having such beliefs will be more fit than groups lacking them. This is how the supernatural cosmologies characterizing archaic civilisations discussed by Campbell (1997) have been selected.
However, as Campbell emphasizes, such group selection often runs counter to the more powerful and direct force of individual selection. Therefore, he proposes a mechanism that suppresses individually selfish deviations from these collective beliefs: conformist transmission. As illustrated by the mathematical model of Boyd and Richerson (1985), all other things being equal, it seems evolutionarily optimal for subjects to adopt the majority or plurality belief rather than a minority idea. Thus, already popular ideas tend to become even more popular, leading to an eventual homogeneity of belief within a closely interacting group. This selective pressure may be called conformity.
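A minimal numerical sketch of conformity can use a standard frequency-dependent transmission recursion of the kind analysed by Boyd and Richerson (1985); the strength parameter and starting frequencies below are illustrative assumptions:

```python
# Conformist transmission: the frequency p of a belief in the group grows
# when the belief already holds the majority (p > 1/2) and shrinks when it
# does not. The recursion p' = p + d*p*(1-p)*(2p-1) is a common textbook
# form of this bias; d = 0.3 and the starting frequencies are illustrative.

def step(p, d=0.3):
    """One generation of conformist-biased transmission (0 < d <= 1)."""
    return p + d * p * (1 - p) * (2 * p - 1)

p = 0.6  # belief starting with a modest majority
for _ in range(500):
    p = step(p)
assert p > 0.99  # the majority belief approaches homogeneity

q = 0.4  # the same belief starting as a minority
for _ in range(500):
    q = step(q)
assert q < 0.01  # ...and is eliminated when it starts below one half
```

The two fixed points at p = 0 and p = 1 (with p = 1/2 unstable) express the homogeneity of belief within a closely interacting group.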
Complementary to this homogenizing influence, we find the diversifying effect of the division of labor. Because of their limited cognitive capacity, individuals within a complex society tend to specialize in a particular domain. As illustrated by Gaines's (1994) computer simulation, this process of cognitive differentiation is driven by a positive feedback mechanism: individuals who were successful in solving a particular type of problem will get more of these problems delegated to them, and thus develop a growing expertise or authority in that domain. The backing of a recognized expert will contribute to the acceptance of a particular idea. This is the criterion of authority.
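The positive-feedback mechanism behind expertise can be sketched as a generic "rich get richer" process; this is not Gaines's actual model, just a toy setup invented for illustration:

```python
# Toy sketch of cognitive differentiation by positive feedback: problems
# are delegated to individuals in proportion to their current expertise,
# and solving a problem increases that individual's expertise. Small random
# fluctuations in early delegations are thereby amplified into a lasting
# asymmetry: an "authority" emerges. All parameters are illustrative.
import random

random.seed(1)
expertise = [1.0, 1.0, 1.0, 1.0]  # four individuals start out equal

for _ in range(1000):
    # delegate the next problem in proportion to current expertise
    chosen = random.choices(range(len(expertise)), weights=expertise)[0]
    expertise[chosen] += 1.0  # solving it increases the solver's expertise

# The initially identical individuals have diverged: one has accumulated
# more expertise than an even split would give.
assert len(set(expertise)) > 1
assert max(expertise) > sum(expertise) / len(expertise)
```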
The integration on the level of norms and codes fostered by conformity and the differentiation on the level of expertise fostered by authority together produce a complexification (cf. Heylighen, 1997) of the social system. The process is similar to the metasystem transition which produced the differentiated organs and tissues in a multicellular organism (Heylighen & Campbell, 1995; Heylighen, 1995).
Table 1: summary of the proposed selection criteria

objective criteria:        distinctiveness, invariance, controllability
subjective criteria:       individual utility, coherence, simplicity, novelty
intersubjective criteria:  publicity, expressivity, formality, collective utility, conformity, authority
The selection criteria we discussed are summarized in table 1. When we look at the evolution of scientific knowledge, it is clear that all these criteria play a role in the selection of ideas. The objective criteria obviously underlie the experimental method: new concepts are operationalized by specifying the observations that will distinguish their referents, by subjecting them to controlled experiments, and by trying to find results which are maximally independent of place, time, observer or means of observation (as Campbell (1997) notes, the latter is often difficult in the social sciences). Moreover, subjective interpretation is minimized by formalization of the theories and concepts. However, among otherwise equivalent ideas, scientists will still tend to prefer those that may bring fame and fortune, that are simple, coherent with what they already know, and novel. In addition, the social system of science will prefer ideas that have vocal advocates, are strikingly expressed, benefit the community, and are supported by the majority, or by authoritative experts.
In what way, then, can science be demarcated from other knowledge producing systems, such as religion, fashion or tradition? The difference is that science explicitly promotes the objective criteria. (To a smaller degree, as Campbell (1997) notes, science also tries to neutralize the criteria that are likely to detract from objectivity, such as authority which is not backed by expertise, conformity for conformity's sake, and--at least in the pure sciences--utility.) The objective criteria have been built into the scientific method. They have become part of knowledge itself, rather than an outside force to which knowledge is subjected. The scientific method, in Campbell's (1974) terminology, is a vicarious selector, an interiorization of external selectors.
This vicarious selector functions at a higher hierarchical level than the knowledge it produces. Because other forms of knowledge are not selected at this higher level, they will evolve in a less efficient way, and are therefore likely to be of lower quality. The difference between scientific and other knowledge is not an absolute one, between objective and subjective, or between justified and unjustified, but one of degree, between the products of a systematic process of improvement, and those of a slow, haphazard process of trial-and-error, where neither trial nor error are consciously controlled.
During this research the author was supported as a Senior Research Associate by the FWO (Fund for Scientific Research, Flanders).
Ashby W. R. (1956) Introduction to Cybernetics. Methuen, London.
Bonsack F. (1977) Invariance as a Criterion of Reality. Dialectica 31 (3-4): 313-331.
Boyd R. & Richerson P.J. (1985) Culture and the Evolutionary Process. University of Chicago Press, Chicago.
Campbell D.T. (1966) Pattern Matching as an Essential in Distal Knowing. In: Hammond K.R. (ed) The Psychology of Egon Brunswik. Holt, Rinehart and Winston, New York, pp 81-106.
Campbell D.T. (1974) Evolutionary Epistemology. In: Schilpp P.A. (ed) The Philosophy of Karl Popper. Open Court Publish., La Salle, Ill., pp 413-463.
Campbell D.T. (1992) Distinguishing between Pattern in Perception due to the Knowing Mechanism and Pattern Plausibly Attributable to the Referent. (unpublished manuscript).
Campbell D.T. (1997) From Evolutionary Epistemology via Selection Theory to a Sociology of Scientific Validity. Evolution and Cognition (this issue).
Gaines B.R. (1994) The Collective Stance in Modeling Expertise in Individuals and Organizations. International Journal of Expert Systems 7 (1): 22-51.
Heylighen F. & Campbell D.T. (1995) Selection of Organization at the Social Level. World Futures 45: 181-212.
Heylighen F. (1992) `Selfish' Memes and the Evolution of Cooperation. Journal of Ideas 2 (4): 77-84.
Heylighen F. (1993) Selection Criteria for the Evolution of Knowledge. In: Proc. 13th Int. Congress on Cybernetics (Association Internat. de Cybernétique, Namur), pp 524-528.
Heylighen F. (1995) (Meta)systems as Constraints on Variation, World Futures 45: 59-85.
Heylighen F. (1997) The Growth of Structural and Functional Complexity during Evolution. In: Heylighen F. (ed) The Evolution of Complexity. Kluwer, Dordrecht [in press].
Joslyn C., Heylighen F. & Turchin V. (1993) Synopsis of the Principia Cybernetica Project. In: Proc. 13th Int. Congress on Cybernetics. Association Internationale de Cybernétique, Namur, pp 509-513.
Kelley H.H. (1967) Attribution Theory in Social Psychology, Nebraska Symposium on Motivation 15: 192-238.
Maturana H.R. & Varela F. (1987) The Tree of Knowledge. Shambhala, Boston.
Powers W.T. (1973) Behavior: The Control of Perception. Aldine, Chicago.
Stadler M. & Kruse P. (1990) Theory of Gestalt and Self-organization. In: Heylighen F., Rosseel E. & Demeyere F. (eds) Self-Steering and Cognition in Complex Systems. Gordon and Breach, New York, pp 142-169.
Thagard P. (1989) Explanatory Coherence. Behavioral and Brain Sciences 12: 435-467.
von Bertalanffy L. (1968) General System Theory. Braziller, New York.
von Foerster H. (1981) Observing Systems. Intersystems, Seaside CA.