Vrije Universiteit Brussel
The Internet and the WWW protocol.
The Internet is steadily becoming a highly popular medium for the distribution and communication of ideas and knowledge. In recent years the Internet has attracted more than 20 million users on a total of 1.5 million host machines, achieving growth rates of at least 1 million new users per month. At this moment, the Internet is already the largest global communication network. The Internet's structure is, however, most peculiar and very much unlike that of any other mass medium of communication. Its typical characteristics, such as a decentralised principle of operation (and control) and the possibility of fast and close interaction between 'information consumer' and 'information producer', can be contrasted with those of other media such as the publishing and film industries.
The World Wide Web (WWW) and its associated HyperText Transfer Protocol (HTTP) were designed to retain the positive aspects of network anarchy while at the same time providing consumers and producers of information with a friendlier and more efficient interface to knowledge. This was achieved by introducing a protocol that controls both knowledge representation and the user interface.
The WWW adheres to a distributed principle of knowledge representation, which means that knowledge is stored as a network of nodes and links. Nodes can contain any combination of plain text, images, sounds and movies. Links connect individual items in one node to any other node, according to the author's or web master's preference and taste, hence forming a network of nodes connected via links. As for the user interface, the user is expected to retrieve information by traversing meaningful links from node items to other nodes, thereby making associative judgements that will, from a certain start position, lead to the node containing the desired information.
The WWW is structured by human designers who use their intuitive ideas of knowledge structuring and semantics to construct their nodes and sub-networks. These individual contributions are gradually connected to the larger body of knowledge already contained within the WWW, thereby expanding the content of the overall network. In our view, the WWW is structured and behaves to a large extent as a learning (McClelland & Rumelhart, 1986) and self-organising network, in which the network's special principle of knowledge organisation and retrieval interacts with the constant influx of new contributions. The WWW is expected to become a huge future 'encyclopaedia' of the whole of human knowledge, representing the shared knowledge and semantics of all its users and contributors (Mayer-Kress & Barczys, 1994).
Knowledge evolution on the Internet.
We believe the laws of evolution, in which natural selection guarantees the survival of the fit and the extinction of the unfit, apply in all cases, whether living beings, dead matter or knowledge are concerned (Hofstadter, 1991; Dawkins, 1976; Heylighen, 1995; Skinner, 1974). Ideas, or chunks of knowledge, can be considered specific entities that rely on human or other carriers to multiply, mutate, adapt and survive. The human population and the technology devoted to communication can likewise be regarded as a huge ecology populated by ideas, theories and knowledge in general. In recent years the Internet has become an integral part of this ecology of knowledge, but as a medium it is very much unlike any other. Its features are most distinct from all other known forms of knowledge storage and communication in terms of speed, reach (global) and degree of interaction between knowledge and its carriers.
At a more foundational level, it can also be shown that the WWW and its principle of distributed knowledge representation are highly compatible with the preconditions for the evolutionary development of knowledge. Theories of memetics or knowledge evolution presuppose distributed knowledge coding, because this coding format alone allows information to partially mutate and adapt (Klimesch, 1994). Another important assumption within most theories of knowledge evolution is that knowledge cannot be considered merely the passive imprint of reality on a certain carrier. Knowledge is constructed and constructs itself in a top-down-bottom-up fashion by controlling its carrier's perception of reality (Lindsay & Norman, 1977). The WWW does indeed allow a strong interaction between the knowledge the network contains and its constructors.
The WWW's construction enables all three preconditions for the evolutionary development of knowledge to an extent not present in other media of communication, but these conditions are only enabled, not actively supported. They are side effects within a system designed solely for efficient communication. Another urgent problem is the WWW's complete dependence on human designers: the limited capacity and overview of human designers using intuitive semantics hamper the optimisation of knowledge structure and content.
Our experiment with a self-organising hypertext network.
It is our view that the Internet can only continue to function successfully if it is equipped with a number of semi-intelligent tools that actively support the evolutionary development of knowledge on the Internet. We therefore set up an experimental network that (without the intervention of human designers) could improve its own structure through a number of simple, locally operating learning rules.
A HyperCard application was used to implement a self-organising network of the 150 most frequent English nouns, which was made accessible to the population of Internet users. These nouns were derived from the LOB corpus (Johansson & Hofland, 1989). Associated with each link in the network was a measure of its strength, which was set to a small random value at the beginning of our learning trials, so that initially every word was more or less connected to every other word in the network. During the experiments our learning rules and mechanisms would operate on these connection strengths, thereby, we hoped, changing the initial state of the network into a semantically more meaningful structure.
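The initial state described above can be sketched as follows. This is a minimal illustration, not the original HyperCard implementation: the noun list is truncated and the range of initial strengths is an assumption.

```python
import random

# Illustrative subset of the noun vocabulary; the experiment used the
# 150 most frequent English nouns from the LOB corpus.
NOUNS = ["mind", "love", "time", "space", "authority"]

def init_network(nouns):
    """Map every ordered pair of distinct nouns to a small random
    connection strength, so that initially every word is weakly
    connected to every other word. The (0.0, 0.1) range is assumed."""
    return {(a, b): random.uniform(0.0, 0.1)
            for a in nouns for b in nouns if a != b}

weights = init_network(NOUNS)
```

With n nouns this yields n(n-1) directed links, each carrying its own strength, on which the learning rules below can operate locally.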
Any evolutionary learning scheme for an adaptive hypertext network such as the WWW requires a measure of node or link fitness. This measure should be locally measurable (no central control) and simple (limitations in bandwidth and real-time constraints). We therefore chose to let our learning algorithms operate on link frequency which is uni-dimensional and can be measured locally.
The first learning rule, frequency, implemented selection by simply reinforcing used links. The strength of the connection between nodes A and B was increased by a certain (small) value on each occasion it was used. Frequency thus implemented selection by strengthening fit nodes relative to unfit nodes.
The second learning rule, transitivity, applied transitive reasoning to the path each individual browser followed through the network. For each connection between nodes A and B followed by a connection between B and C, transitivity increased the connection strength of the link between nodes A and C by a certain (small) amount. Transitivity thus creates new connections and maintains variety.
The third learning rule, symmetry, strengthens the link between nodes B and A by a small value on each occasion that the connection between nodes A and B is used. Symmetry was implemented to make the network's structure less unidirectional, to improve the clustering of related nouns, and to interact with the other learning rules by increasing variety.
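The three rules above can be sketched as simple local updates on the table of connection strengths. The increment values are illustrative assumptions; the original experiment's parameters are not reproduced here.

```python
FREQ_DELTA = 0.1    # reinforcement for a traversed link (assumed value)
TRANS_DELTA = 0.05  # reward for the transitive shortcut A -> C (assumed)
SYM_DELTA = 0.05    # reward for the reverse link B -> A (assumed)

def frequency(weights, a, b):
    """Rule 1: reinforce a link each time it is used."""
    weights[(a, b)] = weights.get((a, b), 0.0) + FREQ_DELTA

def symmetry(weights, a, b):
    """Rule 3: when A -> B is used, also strengthen B -> A."""
    weights[(b, a)] = weights.get((b, a), 0.0) + SYM_DELTA

def transitivity(weights, path):
    """Rule 2: for consecutive jumps A -> B -> C in a browsing path,
    strengthen the direct link A -> C."""
    for a, b, c in zip(path, path[1:], path[2:]):
        weights[(a, c)] = weights.get((a, c), 0.0) + TRANS_DELTA

def record_path(weights, path):
    """Apply all three rules to one browsing path through the network."""
    for a, b in zip(path, path[1:]):
        frequency(weights, a, b)
        symmetry(weights, a, b)
    transitivity(weights, path)
```

Note that all three updates use only the path of a single browser and the strengths of the links involved, which is what makes the scheme locally measurable and free of central control.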
The experiment's participants were requested to browse the network of nouns for as long as they wanted. They received a WWW page containing the subject's random initial position in the network and a list of 10 nouns, from which they were asked to choose the word they thought most strongly connected to or associated with their current position. Their position then switched to the word they had just chosen, and so on.
The list of nouns was ordered by descending connection strength, thus implementing an actual threshold in the network: only those nodes sufficiently strongly connected to be ranked within the first 10 were offered to the experiment's subjects. Our subjects were not informed of this ordering, to avoid a number of annoying feedback effects.
Subjects could also request the next 10 words from the ordered list, which in theory enabled them to view all 149 connecting nodes before finally choosing a certain word.
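The ordered choice list with paging can be sketched as follows, assuming the `weights` dictionary of the earlier sketches; the function name and signature are illustrative.

```python
PAGE_SIZE = 10  # number of candidate nouns shown per page

def choice_list(weights, position, nouns, page=0):
    """Return one page of candidate nodes for the current position,
    ranked by descending connection strength. Page 0 holds the 10
    strongest connections; further pages let a subject inspect all
    remaining nodes before choosing."""
    candidates = [n for n in nouns if n != position]
    candidates.sort(key=lambda n: weights.get((position, n), 0.0),
                    reverse=True)
    start = page * PAGE_SIZE
    return candidates[start:start + PAGE_SIZE]
```

The threshold described above falls out of the ranking: a weakly connected node simply never appears on the first page, and so is rarely seen or chosen.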
Network development was fast and efficient. After only 2,000 jumps, corresponding to an estimated 200 participants, the network's development slowed down and settled into a more or less stable state in which most nodes were meaningfully connected to their semantically related counterparts. This extremely fast network evolution could be attributed to a controlled feedback effect: the choice list offered to the experiment's participants was an ordered one, with possible links to other nodes ranked by descending connection strength. The nodes most likely to be chosen were those already at the top of that list. This caused a strong feedback effect: once a certain connection was introduced, whether by chance (the random initial state of the network), by human selection from the choice list, or by the transitivity and symmetry learning rules, it soon trickled up to its 'rightful' position in the ordered list.
Table 1 illustrates the development of the links to the word 'MIND'. Connections are gradually introduced by transitivity or symmetry and reinforced by frequency until they reach their 'best' position in the list of connections.
Cluster analysis revealed that the network orders itself into a number of separable clusters in which related words are grouped together, such as those around "mind", "love", "authority", "space" and "time". The stability of these clusters over several separate experiments with different random initial states showed that the positive feedback effects did not imply an overly large sensitivity to initial conditions. The network's structure stably and reliably represented the users' shared semantics: clusters obtained from different experiments showed an average overlap of 76.7%.
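One way to quantify the overlap between cluster solutions from two runs is to match each cluster from the first run to its best-overlapping cluster in the second and average the overlap ratios. The measure below is an illustrative assumption; the exact overlap measure used in the experiment is not specified here.

```python
def overlap(c1, c2):
    """Fraction of shared members relative to the smaller cluster."""
    shared = len(set(c1) & set(c2))
    return shared / min(len(c1), len(c2))

def average_overlap(run1, run2):
    """Mean best-match overlap of run1's clusters against run2's,
    as a rough stability score between two cluster solutions."""
    return sum(max(overlap(c, d) for d in run2) for c in run1) / len(run1)
```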
| MIND | state 0 | state 3 | state 6 | state 12 | state 21 |
A number of questions need to be answered in further experiments. Do hypertext networks that represent the common semantic/associative structure of their users facilitate manual data retrieval? A number of experiments in which retrieval times were compared for self-organised and designed networks seem to indicate that this is indeed the case, but no conclusive data have yet been obtained (Jonassen, 1989).
Another urgent problem is that of network validity: although our algorithms deliver reliable results in terms of network stability, we are not yet sure whether our networks actually stably represent something 'real'. We are at present attempting to cross-validate parts of the network with parallel measurements of word association (de Groot & de Bil, 1987), but this is turning out to be a laborious enterprise.
We believe we have been able to demonstrate that hypertext networks can be equipped with locally operating algorithms that can make networks self-organise their structure and content. These networks adapt to their users and represent their shared knowledge without being dependent on the subjective judgement and efforts of human designers.
There are a large number of applications for this kind of network-structuring technique. It could be used to optimise large information networks such as the WWW, to map and represent knowledge shared by a large number of experts, to provide a standard for the identification of ideas and knowledge, and so on. Further research will mostly concentrate on the validation of our results and on further refinements to our set-up and learning schemes.