From World-Wide Web to Super-Brain

The present World-Wide Web, the distributed hypermedia interface to the information available on the Internet, is in a number of ways similar to a human brain, and is likely to become more so as it develops. The core analogy is the one between hypertext and associative memory. Links between hyperdocuments or nodes are similar to associations between concepts as they are stored in the brain. However, the analogy goes much further, including the processes of thought and learning. For more technical details, see PCP research on intelligent webs.

Spreading activation

Retrieval of information can in both cases be seen as a process of "spreading activation": nodes or concepts that are semantically "close" to the information one is looking for are "activated". The activation spreads from those nodes through their links to neighbouring nodes, and the nodes that have received the highest activation are brought forward as candidate answers to the query. If none of the proposals is acceptable, those that seem closest to the answer are activated again and used as sources for a new round of spreading. This process is repeated, with the activation moving from node to node via associations, until a satisfactory solution is found. Such a process is the basis for thinking.

In the present Web, spreading activation is only partially implemented, since a user normally selects nodes and links sequentially, one at a time, and not in parallel as in the brain. Thus, "activation" does not really spread to all neighbouring nodes, but follows a linear path. A first implementation of such a "parallel" activation of nodes might be found in WAIS-style search engines (e.g. Lycos), where one can type in several keywords and the engine selects those documents that contain the largest number of those keywords. For example, entering the words "pet" and "disease" might bring up documents that have to do with veterinary science. This only works if the document one is looking for actually contains the words used as input. However, there might be other documents on the same subject that use different words (e.g. "animal" and "illness") to discuss the issue. Here, again, spreading activation may help: documents about pets are normally linked to documents about animals, and so a spread of the activation received by "pet" to "animal" may be sufficient to select the searched-for documents. However, this assumes that the Web is linked in an intelligent way, with semantically related documents (about "pets" and "animals") also being close in hyperspace. To achieve this we need a learning web.
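
To make the mechanism concrete, here is a minimal sketch in Python of spreading activation over a weighted link graph. The graph, the node names and the parameters (decay, threshold, number of rounds) are illustrative assumptions, not a description of any existing Web mechanism.

def spread_activation(graph, sources, decay=0.5, threshold=0.01, rounds=3):
    """Propagate activation from source nodes through weighted links.

    graph: dict mapping each node to {neighbour: link_weight}, with
           weights in [0, 1]; sources: initially activated nodes.
    Returns {node: accumulated activation}, strongest first.
    """
    activation = {node: 1.0 for node in sources}
    frontier = dict(activation)
    for _ in range(rounds):
        next_frontier = {}
        for node, energy in frontier.items():
            for neighbour, weight in graph.get(node, {}).items():
                passed = energy * weight * decay
                if passed >= threshold:
                    next_frontier[neighbour] = next_frontier.get(neighbour, 0.0) + passed
        for node, energy in next_frontier.items():
            activation[node] = activation.get(node, 0.0) + energy
        frontier = next_frontier
    return dict(sorted(activation.items(), key=lambda kv: -kv[1]))

# Activating "pet" and "disease" also reaches the semantically close
# nodes "animal" and "illness", as in the example above.
graph = {
    "pet":     {"animal": 0.9, "dog": 0.8},
    "disease": {"illness": 0.9, "veterinary science": 0.7},
    "animal":  {"veterinary science": 0.5},
}
print(spread_activation(graph, ["pet", "disease"]))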

Learning webs

In the human brain, knowledge and meaning develop through a process of associative learning: concepts that are regularly encountered together become more strongly connected (Hebb's rule for neural networks). At present, such learning in the Web takes place only through the intermediary of the user: when the maintainer of a web site about a particular subject finds other web documents related to that subject, he or she will normally add links to those documents on the site. When many site maintainers are continuously scanning the Web for related material, and creating new links when they discover something interesting, the net effect is that the Web as a whole undergoes a kind of associative learning.

However, this process would be much more efficient if it could work automatically, without anybody needing to create links by hand. It is possible to implement simple algorithms that make the Web learn, in real time, from the paths of linked documents followed by its users. The principle is simply that links followed by many users become "stronger", while links that are rarely used become "weaker". Some simple heuristics can then propose likely candidates for new links, and retain the ones that gather the most "strength". The process is illustrated by our "adaptive hypertext experiment", where a web of randomly connected words self-organizes into a semantic network by learning from the link selections made by its users. If such learning algorithms could be generalized to the Web as a whole, the knowledge existing in the Web could become structured into a giant associative network which continuously adapts to the pattern of its usage.
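
The rule itself fits in a few lines of Python. The constants below and the transitivity heuristic for proposing new links are illustrative assumptions; the actual adaptive hypertext experiment may use different rules.

REWARD = 0.1         # strength added to a link each time a user follows it
DECAY = 0.001        # strength every link loses per learning step
MIN_STRENGTH = 0.05  # links weaker than this are forgotten

def learn_from_path(links, path):
    """Reinforce the links along one user's browsing path.

    links: dict of {(from_node, to_node): strength};
    path: ordered list of nodes visited by a user.
    """
    for a, b in zip(path, path[1:]):
        links[(a, b)] = links.get((a, b), 0.0) + REWARD

def decay_links(links):
    """Weaken every link a little; drop those that fall below threshold."""
    for pair in list(links):
        links[pair] -= DECAY
        if links[pair] < MIN_STRENGTH:
            del links[pair]

def propose_new_links(links):
    """Simple heuristic: if a->b and b->c are both learned, a->c is a
    candidate new link, to be retained only if it gathers strength."""
    return {(a, c)
            for (a, b) in links for (b2, c) in links
            if b == b2 and a != c and (a, c) not in links}

links = {}
learn_from_path(links, ["pet", "animal", "veterinary science"])
learn_from_path(links, ["pet", "animal", "illness"])
decay_links(links)
print(propose_new_links(links))  # e.g. ("pet", "veterinary science")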

Answering Ill-Posed Questions

We can safely assume that in the coming years virtually the whole of human knowledge will be made available electronically over the networks. If that knowledge is then semantically organized as sketched above, processes similar to spreading activation should be able to retrieve the answer to any question for which an answer exists somewhere. The spreading activation principle moreover allows for ill-posed questions: you may have a problem without being able to formulate clearly what you are looking for, having only some ideas about the things it relates to.

Imagine the following situation: your dog is continuously licking mirrors. You don't know whether you should worry about that, or whether it is just normal behavior, or perhaps a symptom of some kind of disease. So you try to find more information by entering the keywords "dog", "licking" and "mirror" into a Web search. If a "mirror-licking" syndrome were described in the literature on dog diseases, such a search would immediately find the relevant documents. However, the phenomenon may just be an instance of the more general observation that certain animals like to touch glass surfaces. A plain search on the above keywords would never find a description of that phenomenon, but the spread of activation in a semantically structured web would reach "animal" from "dog", "glass" from "mirror" and "touching" from "licking", thus activating documents that contain all three concepts. This example is easily generalized to the most diverse and bizarre problems. Whether it concerns how you decorate your house, how you reach a particular place, how you remove stains of a particular chemical, or what the natural history of the Yellowstone region is: whatever your problem, if some knowledge about the issue exists somewhere, spreading activation should be able to find it.
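
Under the same assumptions as the earlier spreading-activation sketch, and reusing its spread_activation function, the dog example might look as follows; the semantic graph and the document index are invented for illustration.

semantic_graph = {
    "dog":     {"animal": 0.9},
    "mirror":  {"glass": 0.9},
    "licking": {"touching": 0.8},
}
documents = {
    "why-animals-touch-glass": {"animal", "glass", "touching"},
}

activation = spread_activation(semantic_graph, ["dog", "licking", "mirror"])
active = set(activation)  # also contains "animal", "glass", "touching"
hits = [doc for doc, terms in documents.items() if terms <= active]
print(hits)  # the relevant document is found without any shared keyword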

For the more ill-structured problems, the answer may not come immediately, but be reached after a number of steps. Just as in normal thinking, formulating part of the problem brings up certain associations, which may then call up others that make you reformulate the problem in a better way; this leads to a clearer view of the problem, again a more precise description, and so on, until you get a satisfactory answer. The Web will provide not only straight answers but also general feedback that directs your efforts to get closer to the answer.

From thought to web agent

The mechanisms we have sketched allow the Web to act as a kind of external brain, storing a huge amount of knowledge while being able to learn and to make smart inferences, thus helping you to solve problems for which your own brain's knowledge is too limited.

The search process should not require you to select a number of search engines in different places on the Web. The new technology of net "agents" is based on the idea that you would formulate your problem or question, and that this request would itself travel over the Web, collecting information in different places, and send you back the result once it has explored all promising avenues. The software agent, a small message or script embodying a description of the things you want to know, a list of provisional results, and an address where it can reach you to send back the final solution, would play the role of an "external thought". Your thought would initially form in your own brain, then be translated automatically via a direct interface to an agent or thought in the external brain, continue its development by spreading activation, and come back to your own brain in a much enriched form. With a good enough interface, there should not really be a clear boundary between "internal" and "external" thought processes: the one would flow naturally and immediately into the other.
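
As a data structure, such an agent could be as simple as the following Python sketch. The field names, the matching rule and the hop budget are all illustrative assumptions, not a description of any existing agent system.

from dataclasses import dataclass, field

@dataclass
class ThoughtAgent:
    query: list              # concepts describing what you want to know
    return_address: str      # where the final result should be sent back
    results: list = field(default_factory=list)  # provisional findings
    hops_left: int = 10      # budget of sites still to be visited

    def visit(self, documents):
        """Collect documents at the current site that match the query."""
        self.results.extend(doc for doc, terms in documents.items()
                            if set(self.query) & terms)
        self.hops_left -= 1

    def finished(self):
        """Stop once the budget is spent or enough has been gathered."""
        return self.hops_left <= 0 or len(self.results) >= 5

agent = ThoughtAgent(query=["dog", "licking", "mirror"],
                     return_address="user@example.org")
# Each host would call agent.visit(...) on its local document index,
# forward the agent to the most promising neighbouring site until
# agent.finished() is True, and then mail the results to return_address.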

Integrating individuals into the Super-Brain

Interaction between the internal and the external brain does not always need to go in the same direction. Just as the external brain can learn from your pattern of browsing, it could also learn from you by asking you questions directly. A smart web would continuously check the coherence and completeness of the knowledge it contains. If it found contradictions or gaps, it would try to locate the persons most likely to understand the issue (most likely the authors or active users of a document), and direct their attention to the problem. In many cases, an explicit formulation of the problem will be sufficient for an expert to quickly fill in the gap, using implicit (associative) knowledge that had not yet been entered clearly into the Web. Many "knowledge acquisition" and "knowledge elicitation" techniques exist for stimulating experts to formulate their intuitive knowledge in such a way that it can be implemented on a computer. In that way, the Web would learn implicitly and explicitly from its users, while the users would learn from the Web. Similarly, the web would mediate between users exchanging information and answering each other's questions. In a way, the brains of the users themselves would become nodes in the Web: stores of knowledge directly linked to the rest of the Web, which can be consulted by other users or by the web itself.
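
One can imagine the gap-detection step as something like the following sketch, where strongly associated concepts that no document yet connects trigger a question to the relevant authors. Every name, threshold and rule here is an illustrative assumption.

def find_knowledge_gaps(associations, documents, min_strength=0.5):
    """Concept pairs that users strongly associate, but that no single
    document yet connects, are treated as gaps in the Web's knowledge."""
    gaps = []
    for (a, b), strength in associations.items():
        connected = any({a, b} <= terms for terms in documents.values())
        if strength >= min_strength and not connected:
            gaps.append((a, b))
    return gaps

def route_to_experts(gap, documents, authors):
    """Direct the question to the authors of documents on either concept."""
    a, b = gap
    experts = {authors[doc] for doc, terms in documents.items()
               if a in terms or b in terms}
    for expert in experts:
        print(f"Asking {expert}: how are '{a}' and '{b}' related?")

associations = {("mirror licking", "touching glass"): 0.8}
documents = {"doc1": {"mirror licking"}, "doc2": {"touching glass"}}
authors = {"doc1": "alice@example.org", "doc2": "bob@example.org"}
for gap in find_knowledge_gaps(associations, documents):
    route_to_experts(gap, documents, authors)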

Though individual people might refuse to answer requests received through the super-brain, no one would want to miss the opportunity to use its unlimited knowledge and intelligence for answering their own questions. However, you cannot normally keep receiving a service without giving anything in return: people will stop answering your requests if you never answer theirs. Similarly, one could imagine that the intelligent Web would be based on the simple condition that you can use it only if you provide some knowledge in return.

In the end the different brains of users may become so strongly integrated with the Web that the Web would literally become a "brain of brains": a super-brain. Thoughts would run from one user via the Web to another user, from there back to the Web, and so on. Thus, billions of thoughts would run in parallel over the super-brain, creating ever more knowledge in the process.

The Brain Metasystem

The creation of a super-brain is not sufficient for a metasystem transition: what we need is a higher level of control which somehow steers and coordinates the actions of the level below (i.e. thinking within the individual brains). To become a metasystem, thinking in the super-brain must be not just quantitatively but qualitatively different from human thinking. The continuous reorganization and improvement of the super-brain's knowledge, by analysing and synthesising knowledge from individuals and by eliciting further knowledge from them in order to fill gaps or resolve inconsistencies, is a metalevel process: it not only uses existing, individual knowledge but actively creates new knowledge, which is more fit for tackling different problems. This controlled development of knowledge requires a metamodel: a model of how new models are created and evolve. Such a metamodel can be based on an analysis of the building blocks of knowledge, of the mechanisms that combine and recombine building blocks to generate new knowledge systems, and of a list of values or selection criteria which distinguish "good" or "fit" knowledge from "unfit" knowledge (see my research project on knowledge development).

Copyright © 1995 Principia Cybernetica

Author: F. Heylighen

Date: Jan 5, 1995
