## Henry L. Roediger, Yadin Dudai, and Susan M. Fitzpatrick

Print publication date: 2007

Print ISBN-13: 9780195310443

Published to Oxford Scholarship Online: May 2009

DOI: 10.1093/acprof:oso/9780195310443.001.0001


# Coding and representation

Chapter: Part 3 Coding and representation
Source: Science of Memory: Concepts
Publisher: Oxford University Press
DOI: 10.1093/acprof:oso/9780195310443.003.0004

# Abstract and Keywords

This part presents four chapters on the concept of coding and representation. The first chapter focuses on the online coding and representation of information by means of neuronal activity. The second argues that the ability of the brain to segregate and integrate information, to make use of population and predictive coding, makes for a system that is specialized for memory. The third discusses the concept of memory trace. The fourth chapter presents a synthesis of the chapters in this part.

Though each is entitled to separate treatment, considering coding and representation together, as is done in this volume, makes perfect sense, particularly since, in this case, the medium might indeed be the message. The question of how information is coded and represented in brain and cognition is considered by many to be the most crucial problem in the neurosciences. It is certainly a necessary condition for understanding how experience alters our knowledge, or, in other words, how memory is implemented in the biological world at multiple levels of analysis. In formal language, code refers to the expression of one language in another, whereas representation is an activated vector in neuronal coding space, or a map of event space in cognitive/neuronal space, or a mentally/neuronally encoded model of the world that could potentially guide behavior. Discussion of coding and representation raises several fundamental issues that coding and representational systems have to solve in order to be useful, such as mapping (how elements in one coding system are translated unequivocally into another coding system) and parsing (how the distinctiveness of propositions is maintained). In the world of biological memory systems, additional questions surface, among them: are there multiple neuronal codes, and why? How are such codes implemented in the neural hardware? And have specific codes evolved for the purpose of specific representations of cognitive goals, such as the encoding and retrieval of procedural versus episodic knowledge?

Y.D.

# 10 Coding and representation: Time, space, history and beyond

The concept of coding refers to the form in which a message is conveyed from an information source to a receiver, i.e. to the way it is represented in an arbitrary intermediate medium. Information theory describes, in the abstract language of mathematics, the transmission and storage of messages, e.g. in terms of the probabilities of different symbols or combinations of symbols. It is neutral with respect to the physical nature of the symbols themselves, be they letters written on parchment, phonemes carried by sound waves or the emission times of neuronal action potentials. The science of memory appropriates the full mathematical apparatus of information theory, but uses it in the framework of its own quest for the neurobiological mechanisms that allow, in the brain, the transmission and storage of messages.

The term neural codes also subliminally refers to the clever tricks devised by biological evolution to endow organic matter with the ability to convey messages, further implying, as in the expression ‘cracking the neural code’, that Mother Nature, though intelligent, can always be outsmarted by brilliant researchers. Most investigators would not give a penny for the belief in intelligent design that the emphasis on nature’s tricks may evoke, but they appreciate such a connotation of mystery and discovery, and are happy to use it in their grant proposals. The term representation often carries, instead, a slight cognitive connotation, emphasizing what occurs inside the brain, in contrast to a purely ‘black-box’ behaviorist perspective. Such overtones will not be given further attention here: coding and representation will be considered as fully equivalent concepts.

As a message is conveyed from a source to a receiver, it can pass through the intermediate medium in a flash, or it can indulge in it for any length of time. The distinction between information transmission and storage is therefore fuzzy, from a purely mathematical point of view. The concepts of coding and representation apply to both. It is nevertheless useful, in the context of memory studies, to distinguish between reasonably stable information storage, realized for example in changes of synaptic efficacy, and information coded ‘on-line’ in neuronal activity, in close temporal proximity to the activity of the information source, be the latter external, e.g. a sensory stimulus, or internal, e.g. an item recollected from autobiographical memory. Neural activity can express somewhat more transient information storage, as in the selective patterns of activation maintained in working memory in the frontal lobes, so the distinction refers more to the substrate—the activity of neurons versus the efficacy of synapses, to be concrete—than to whether there is, strictly speaking, a memory component in the process under consideration. In this volume, ample attention is devoted to the forms in which information may be coded when it is stored in a stable manner, so this contribution will focus on its on-line coding and representation by means of neuronal activity.

# r(t)

The simplest way in which a group of cells, a so-called ‘population’ of neurons, can convey information in their activity is by each and every one transmitting the same message. This, to a good approximation, is thought to be the case for the relatively small clusters of midbrain neurons releasing neuromodulators such as dopamine, serotonin or norepinephrine. Dopamine-releasing cells in the ventral tegmental area (VTA), for example, whose axonal projections innervate vast areas of the frontal lobes, the amygdala, nucleus accumbens and other ‘limbic’ structures, are thought to be all sending the same signal. The impulses of individual cells may occur at different times, but, given the relatively slow time course of their effect, the difference is irrelevant: it is only the average rate of release of dopamine from VTA cells, or their firing rate r(t), a simple function of time, that matters. A similar simplicity applies to dopamine-releasing cells of the substantia nigra (pars compacta), which project to the striatum. The content of this signal has been interpreted, particularly in light of experiments in monkeys (Schultz et al. 1997), as related to reward expectation. Other interpretations have been advanced for the content of the messages conveyed by acetylcholine or norepinephrine release (Doya 2002; Yu and Dayan 2005). In each of these instances, the individual identity of the releasing neurons disappears in their mass action, and the single symbol used in such simple chemical codes can be taken to be just the average firing rate of neurons in the corresponding population.
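The idea that only the pooled rate carries the message can be sketched numerically. In the following toy example (the cell count, spike times and the 50 ms bin width are all invented for illustration), r(t) is estimated by discarding the identity of each spiking cell and simply counting spikes per time bin across the whole population:

```python
import numpy as np

def population_rate(spike_times, n_cells, t_max, bin_ms=50.0):
    """Estimate r(t) for a population whose cells all carry the same
    message: pool every spike, bin in time, and average over cells."""
    edges = np.arange(0.0, t_max + bin_ms, bin_ms)
    counts, _ = np.histogram(np.concatenate(spike_times), edges)
    # spikes per bin -> mean rate in Hz across the population
    return counts / (n_cells * bin_ms / 1000.0), edges[:-1]

# Toy 'VTA-like' population: 20 cells firing irregularly over 1 s
rng = np.random.default_rng(0)
spikes = [np.sort(rng.uniform(0.0, 1000.0, rng.poisson(5)))
          for _ in range(20)]
r_t, t = population_rate(spikes, n_cells=20, t_max=1000.0)
```

Because the individual spike times are averaged away, shuffling spikes between cells leaves r(t) unchanged, which is exactly the sense in which cell identity is irrelevant in this kind of code.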

# r(x,t)

The use of the chemical diversity of neurotransmitters and their receptors is phylogenetically ancient, as testified by the evolutionary trees (cladograms) of their genetic codes. The next neural coding principle adopted by evolution seems to be the use of the spatial location of chemically (and electrically) identical neural elements to span a diversity of symbols. This more advanced principle operates in many peripheral sensory and motor systems across species. In the vertebrate retina, for example, identical ganglion cells convey different messages with their action potentials, as a function of their location on the retinal array. To the extent that, for a given cell, the exact timing of individual impulses can be neglected, in favor of a description in terms of short-time averaged firing rates (averaged, for example, over 10 ms or so), the activity of ganglion cells can be summarily described by a function r(x, t) which depends solely on time and on space (the spatial location of each ganglion cell).
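A minimal numerical sketch of r(x, t), with invented spike trains and the 10 ms averaging window mentioned above: each row of the resulting array is the short-time rate of one cell, indexed by that cell's position on the array, so position alone differentiates the messages of otherwise identical cells.

```python
import numpy as np

def rate_map(spike_times_by_cell, t_max, bin_ms=10.0):
    """Estimate r(x, t): one row per cell (spatial location x on the
    array), one column per 10 ms time bin t, entries in Hz."""
    edges = np.arange(0.0, t_max + bin_ms, bin_ms)
    r = np.empty((len(spike_times_by_cell), len(edges) - 1))
    for x, spikes in enumerate(spike_times_by_cell):
        counts, _ = np.histogram(spikes, edges)
        r[x] = counts / (bin_ms / 1000.0)
    return r

# Toy 'retina': 5 identical cells; the one at x = 2 is driven hardest
rng = np.random.default_rng(1)
mean_rates = [5, 5, 50, 5, 5]          # Hz, chosen for illustration
spikes = [np.sort(rng.uniform(0.0, 1000.0, rng.poisson(m)))
          for m in mean_rates]
r_xt = rate_map(spikes, t_max=1000.0)
```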

This type of coding, like chemical coding, does not in itself depend on any learning and memory process, and it can be hardwired or genetically programmed in the system. Spatial processing can be quite sophisticated, as in the dendro-dendritic subtractions hypothesized to take place in the fly lobula plate (Haag and Borst 2002). Mathematically, one can discuss to what extent spatially dependent codes are optimized to remove redundancies in the sensory world, as exemplified by the elegant ‘ecological theory’ of early visual processing (Atick 1992). Spatial codes (like chemical codes) are not memory codes per se, but they may be used to instruct the formation of memory representations in other cells, for example in the cortical cells that, directly or indirectly, receive the signals produced by the peripheral sensory systems.

# ri(x,t)

Memory-dependent representations are those in which the meaning of the action potentials emitted by a particular neuron is a function of the activation history of that neuron, denoted compactly by the index i attached to the rate ri(x, t). In other words, the signal a cell conveys reflects its own experience of the world, encoded in long-term changes in the efficacies of the specific synapses which contribute to activate that cell. In this way, the diversity of symbols can grow enormously and match the number of neurons available in the population, even if their connectivity is random, or metrically organized but with insufficient spatial resolution to support very precise spatial codes. Memory-based codes cannot be hardwired; they have to be established by learning and memory processes. They are the foundation of cortical processing mechanisms, both when embedded in clear topographic maps (Rolls 1992) and when topography is not evident (Redish et al. 2001).
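A deliberately simple sketch of why such a code must be established by learning: a cell's response to a probe stimulus comes to depend on the inputs it has previously experienced. The binary patterns and the one-shot Hebbian 'imprinting' rule below are illustrative assumptions, not a model of any specific circuit.

```python
import numpy as np

# Patterns this particular cell has experienced (its history, index i)
familiar = np.array([[1, 0, 1, 1, 0, 0, 1, 0],
                     [1, 1, 0, 1, 0, 0, 0, 1]], dtype=float)
novel = np.array([0, 0, 0, 0, 1, 1, 0, 0], dtype=float)

# One-shot Hebbian imprint: each experienced input is added to the
# weight vector, so w encodes the cell's activation history
w = familiar.sum(axis=0)

def response(w, x):
    """Rectified 'firing rate' of the cell to a probe stimulus."""
    return float(max(0.0, w @ x))

# The same probe now means different things to cells with different
# histories: this cell fires to familiar input, stays silent to novel
r_familiar = response(w, familiar[0])   # 6.0 with these patterns
r_novel = response(w, novel)            # 0.0: no overlap with history
```

Two cells with identical wiring but different histories would thus emit different messages for the same stimulus, which is the sense in which the index i carries information that anatomy alone cannot.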

An influential perspective on their utilization was produced by combining the general notions of associative plasticity and of cell assemblies (Hebb 1949) with the more detailed codon theory of David Marr (1971). The Hebb–Marr perspective has largely driven the development of the technology for large-scale multiple single-unit recording experiments (Wilson and McNaughton 1993), given that memory-based codes require each neuron to be listened to individually. Unfortunately, the very principle of memory-based diversity in the messages carried by individual neurons in the same population places an inviolable bound on the insight that can ever be obtained with imaging techniques, such as functional magnetic resonance imaging (fMRI). The BOLD (blood oxygenation level-dependent) signal of fMRI reflects the average activity of many synapses (and of other neural components) and, independently of its poor temporal resolution, it cannot access the specific signals expressed by the activation of single cells. Still, imaging provides useful results on the gross aspects of cortical codes, similar in nature to the sociological analyses that can be derived from recording, in a large city, the average level of noise produced, at every phase of the day, in each of its neighborhoods.

# σ({ri})

The discussion so far has considered only explicitly neural representations, expressed in the activity of neurons. The science of memory includes, however, extremely useful approaches, which cannot yet be reduced to the neural level. Thus, in developmental psychology, it is known, for example, that inverted faces are not represented as belonging to the same category as other faces. Or, in studies of bilingualism, one can discuss, for example, the acquisition of the Japanese subject–object–verb word order by speakers of English. In general, one deals with internal representations (of behaviorally relevant objects, of a syntactic structure), which can be provisionally labeled by appropriately defined symbols σ, even though their relationship to the underlying neural activity variables {ri} is yet to be determined. The grand goal of elucidating this relationship is a fascinating challenge for cognitive neuroscience, and for the science of memory.

# 11 Coding and representation: The importance of mesoscale dynamics

The study of human memory is at the edge of a new frontier, thanks largely to functional neuroimaging. Access to neurobiology can provide a critical link between psychological theories of memory and the concomitant physiology. Despite the promise, there remains a wide gap between our understanding of nervous system function and our understanding of memory.

One reason for the lack of congruence is that the brain's information processing is not explicitly considered when a psychological theory is constructed or tested. The information processing capabilities of the brain arise from brain anatomy and physiology. The anatomy endows an immense capacity for both information segregation and integration. The physiological attribute of response plasticity, whereby neural responses change as a function of stimulus intensity, significance and internal state, modifies information as it is passed to different levels of the system. The spread of information across the widely distributed neural circuits of the brain allows many parts of the brain to contribute to memory in its broadest psychological form. It is thus possible that many processes of neural information coding will bear on memory. Memory is not the domain of particular systems in the brain, but of the brain as a whole. Memory is what the brain does.

# Anatomical features

Neurons are linked to one another both locally and at a distance. The nervous system appears to be specialized for rapid transfer of signals. This means that a single change to the system is conveyed to several parts of the brain and that some of this activity will feed back onto the initial site. There are obvious extremes to just how connected a system can be, and the nervous system occupies some intermediate position. Local cell networks are highly interconnected, but not completely so, which means that adjacent cells can have both common and unique connections. The term semiconnected was proposed to designate this particular property of local cell networks (McIntosh 2000). The networks themselves can be thought of as semiconnectors, especially inasmuch as their function, as discussed below, is not only to mediate the signal between different cerebral regions but also to modulate the signal, in keeping with the specific properties of different ‘semiconnectors’.

The connections between local neural ensembles are sparser than the intra-ensemble connectivity. Estimates of the connections in the primate cortical visual system have suggested that somewhere between 30 and 40 per cent of all possible connections between cortical areas exist (Felleman and Van Essen 1991). Simulation studies show that this sparseness is a computational advantage in that it allows for a high degree of flexibility in responses at the system level even when the responses of individual units are fixed (Tononi et al. 1992). Additional analyses of the anatomical connections of large-scale networks in the mammalian cerebral cortex have demonstrated a number of distinct topological features that probably lead to systems with maximal capacity for both information segregation and integration (Sporns and Kötter 2004; Sporns and Zwi 2004).

Neural network theories of brain function have emphasized these two basic organizational principles: segregation and integration. At each level in a functional system, there is a segregation of information processing such that units within that level will tend to respond more to a particular aspect of a stimulus (e.g. movement or color). At the same time, the information is concurrently exchanged with other units that are anatomically connected, allowing first for units to affect one another and secondly for the information processed within separate units to be integrated.

It seems rather obvious but still worth stating that regions can only process information that they receive. Cells in the medulla tend not to respond to visual stimuli because they are not connected to visual structures. Anatomy determines whether a given ensemble is capable of contributing to a process.

# Functional features

Long-term neural plasticity is detected in the brain following events that are not commonly considered as related to memory, such as neural damage. However, there is a type of neural plasticity that is more short lived and considered highly relevant to memory formation. Neurons can show a rapid shift in response to afferent stimulation that is dependent on the behavioral context in which they fire. This transient response plasticity has consistently been shown in the earliest parts of the nervous system, from single cells in isolated spinal cord preparations to primary sensory and motor structures. The changes can occur within a few stimulus presentations and are probably a ubiquitous property of the central nervous system (Wolpaw 1997). Thus, one of the features most fundamental to cognitive operations, namely transient plasticity, can be observed in many parts of the brain.

# Population and predictive coding

One consequence of the structural and functional properties of the brain is that adjacent neurons have similar response properties (e.g. orientation columns in primary visual cortex), whereas neurons slightly removed possess overlapping, but not identical, response characteristics. These broad tuning curves are characteristic of most sensory system cells and of cells in motor cortices. The broad tuning curves result from anatomy, where cells share some similar and some unique connections. Interestingly, anatomy ensures that response plasticity also has a graded distribution. This has important implications for the representational aspects of the brain. Rather than having each neuron code sharply for a single feature, the distributed response takes advantage of a division of labor across many neurons, enabling representations to emerge from the aggregate of neuronal ensembles.

Electrophysiological studies in motor and sensory cortices have provided some examples of aggregate operations achieved through population coding (Georgopoulos et al. 1986; Young and Yamane 1992). A different twist on identifying the biology of memory comes from looking at the capabilities of a single neuron or small group. Some of the basic ingredients necessary for cognitive processes such as attention, learning and memory are contained in all nervous tissue. Population coding is probably used for higher order cognitive functions, but on a larger scale than for sensory or motor functions. When neural populations interact with one another, these rudimentary functions combine to form an aggregate that represents cognitive processes. Cognitive operations are not localized in an area or network of regions, but rather emerge from dynamic network interactions that depend on the processing demands for a particular operation.
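The classic population-vector scheme (Georgopoulos et al. 1986) gives a concrete picture of such aggregate coding. In this sketch the cell count, tuning curves and gains are invented for illustration: each broadly tuned cell 'votes' with a vector along its preferred direction, weighted by its firing rate, and the aggregate recovers the encoded direction even though no single cell represents it sharply.

```python
import numpy as np

def population_vector(preferred, rates):
    """Decode a direction from a population of broadly tuned cells:
    vector sum of preferred directions, weighted by firing rate."""
    vx = np.sum(rates * np.cos(preferred))
    vy = np.sum(rates * np.sin(preferred))
    return np.arctan2(vy, vx)

# 16 cells with evenly spaced preferred directions and cosine tuning
preferred = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
true_direction = np.pi / 3
rates = 10.0 + 8.0 * np.cos(preferred - true_direction)  # broad tuning
decoded = population_vector(preferred, rates)            # ~ pi / 3
```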

The anatomy and physiology of the brain produce a system with complex dynamics that are seldom considered in memory theory. However, the dynamics enabled by the anatomy and physiology of the brain are likely to be the most important consideration in the memory–brain link. The dynamics enable maximum use of brain tissue such that the same set of neurons can contribute to different mental operations depending on the status of other neuronal groups (McIntosh 2004). This ‘neural context’ is the first step in a change in thinking on how the brain underlies memory. Because of neural context, the same brain region can contribute to basic sensory processing and to conscious recollection depending on the pattern of interactions with its neighbors. Critical for the contextual dependency are the brain dynamics, where there are continuous changes in both the ensembles that are interacting and the behavior manifested.

It is important to realize that most brain dynamics are initiated from internal operations in the brain. Very little of the brain is actually exposed to the external world, meaning that the accuracy of our representation of the world must come from some model. This dilemma is encapsulated in the perspective of predictive coding (an idea that dates back to Helmholtz), and is the focus of much development in computer science and theoretical neurobiology (Dayan et al. 1995; Hinton and Dayan 1996). The basic idea is that the brain generates a model of the world from the sparse incoming information, based on its own experience. A simple example of predictive coding in action comes from behavioral studies of change blindness, wherein subtle changes to a complex visual scene may go unnoticed by the subject. Given the limited amount of information that can be transmitted by peripheral sense organs, the brain seems quickly to ‘fill in’ a scene, freeing the input channels to process the most relevant information. This filling-in procedure is based on experience: a stored representation, that is, a memory.
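The bandwidth argument can be made concrete with a minimal one-unit sketch (the signal values and learning rate are arbitrary): the receiver maintains a running model of its input and transmits only the prediction error, so an unchanging 'scene' costs progressively less to send.

```python
def predictive_code(signal, lr=0.5):
    """Minimal predictive-coding loop: keep an internal estimate,
    transmit only the residual error, update the model from it."""
    estimate, errors = 0.0, []
    for x in signal:
        err = x - estimate        # only this residual is sent onward
        errors.append(err)
        estimate += lr * err      # the model absorbs what it missed
    return errors

# A static scene: the transmitted errors shrink toward zero,
# freeing the channel for genuinely new information
errs = predictive_code([1.0, 1.0, 1.0, 1.0])   # [1.0, 0.5, 0.25, 0.125]
```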

To restate, an important implication of the neural properties discussed here is that all processes in the brain have the capacity to lead to or modify memory. The brain seems particularly well adapted to encode and store information at all levels. It is tempting to say that memory in its highest form (e.g. episodic) is a derivative of the standard operations of the brain. While psychological theories tend to differentiate memory from nonmemory operations such as sensation and action, it is important to recognize that sensation, perception, attention and memory are intimately intertwined. The very acts of seeing, hearing and acting make use of the brain’s capacity for memory. This is probably why, in typical circumstances, what forms the contents of our memory is not under intentional control. Indeed, those processes that facilitate memory (e.g. levels of processing) are essentially by-products of other cognitive operations.

# Conclusion

I have made the argument that the ability of the brain to segregate and integrate information, to make use of population and predictive coding, makes for a system that is specialized for memory. The same mechanisms producing the large-scale dynamics that perceive a face, or hear a siren or symphony, are also those that give rise to memory. Such a general mechanism produces a formidable challenge for memory theory.

The difficult question for the science of memory is still how these neural codes and representations relate to the mental codes and representations that cognitive psychology has defined. To some it may seem somewhat dissatisfying simply to consider that the pattern of brain activity is the physical embodiment of cognition, but that, to this point, is what most data suggest. This could be taken to imply that if we were able to characterize all possible configurations of neural system interactions, and relate them to the cognitive operations they enable, we would be in a position to understand the brain–mind link.

Even if we could generate such a characterization, I doubt we would be closer to understanding, as we do not have a good framework for linking these two levels. It may be that our understanding of the brain and of the mind is not yet sufficient to make this link—a likely scenario. The science of memory will require a substantial revision of its basic concepts to characterize the brain–mind link. This is difficult to imagine, but consider the revolution in physics with the move from Aristotle to Newton to Einstein. Each of these changes led to entirely new concepts and new worldviews. For the science of memory, such a change will come from a more focused attempt to merge what we know about the brain with what we know about the psychological phenomena. It is certainly the case that studies of the neural basis of memory and psychological studies can proceed independently for some time. However, the conceptual change needed for the science of memory will require the unification of both research fields.

# Acknowledgments

A.R.M. is supported by the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council and the J.S. McDonnell Foundation.

# 12 Coding and representation: Searching for a home in the brain

The idea that mental experience may leave a residue in the soul, or mind, that allows later remembering of the experience is as old as recorded history, and probably older. The idea that this residue is physical, somewhere in the brain, is more recent, having been first proposed by Robert Hooke (1627–1703), who thought that memory is ‘as much an Organ as the Eye, Ear, or the Nose’, and that it has ‘its situation somewhere near the Place where Nerves from the other Senses concur and meet’ (Young 1965, p. 287).

The existence of this ‘residue’ with its remarkable staying power is now taken for granted, but much about it has remained baffling. How is it formed? What is its nature? What kind of thing, or entity, or stuff is it? What is the relationship between the experience and its residue? What is the relationship between the residue and the remembering that it enables? Where does the residue reside? Does every experience leave a residue? If not, then what determines which ones do and which ones do not? If yes, what kind of a place is it that can ‘hold’ an individual’s untold experiences? Does the residue last forever? (Not many scientists believe this, but I think that some do.) Does it last at least as long as the individual is alive? These and related questions have been raised and debated, sometimes hotly, throughout human intellectual history. At the present time, no one knows what the answers are, although we have undoubtedly made progress in getting a better grip on the questions.

The terms that have been used to refer to the memorially relevant components of the after-effects of experience have varied with the fashions of the times, the accumulated pertinent knowledge already available, and even the languages and dialects spoken by those who have thought deeply about the matter. A frequently used term is ‘representation’, another is ‘coding’—as in the title of this section of the book. Other well-known terms are ‘engram’, ‘memory image’ and ‘memory trace’. Each has its own connotations that vary from context to context and even from writer to writer, although the concept lying behind all these terms has been and continues to be relatively unambiguous.

# Memory trace

A memory trace is the neural change that accompanies a mental experience at one time (time 1) and whose retention, modified or otherwise, allows the individual later (at time 2) to have mental experiences of the kind that would not have been possible in the absence of the trace.

The critical ingredients of this definition are: (1) mental experience at time 1; (2) neural change; (3) retention of aspects of the change; (4) mental experience at time 2; and (5) the relationship between experiences at time 1 and time 2. The concept of memory trace ties together these features in an organized whole. Every single component is critical in the sense that its absence would be tantamount to the absence of the whole.

Some features of the definition may be worth emphasis, in order to minimize misunderstanding. First, the definition applies to cognitive memory, the kind of memory that has to do with mental experience. It has nothing to say about learning and memory in which mental experience of the kind that the definition refers to is missing. Thus, much of what has been written in the traditional literature on skill learning, conditioning, priming and simple forms of associative learning seems to have done perfectly well without invoking representation-like concepts of the mental type to which I refer here. All kinds of mental concepts, of course, were anathema to many psychologists during the heyday of behaviorism.

Secondly, the definition implies that not all but only some of the physiological/neural after-effects of an experience constitute the memorially significant ‘residue’, i.e. the memory trace. The question of which ones, and the whole issue of how to separate the wheat of the memory trace from the chaff of all the neural activity that has nothing to do with memory, remain a challenging problem for the neurobiological side of the science of memory. The definition also assumes that the memory trace is a dynamic, changing, malleable entity (Dudai 2002; Nader 2003) rather than the ‘fixed, lifeless’ sort of thing that many cognitive psychologists came to look down upon, thanks to Sir Frederic Bartlett’s well-known disdainful phrase.

Thirdly, the definition explicitly distinguishes between, and relates to each other, two kinds of entities: physical (neural change) and mental (experience), and thereby brushes on one of the central issues of our science: how does the mental arise out of the neural? Talking about mental experiences as separable from neuronal processes may be questioned by those who think that because mental experience depends on neuronal processes it must also be in some sense reducible to neuronal processes. Aside from the problem created by many meanings of the concept of reduction, the logic of this type of argument has always escaped me. At least for practical purposes—to get on with the ‘business’—I find it more congenial to operate in a conceptual framework in which the mind depends on the brain but also has properties and capabilities that are different from those of the brain.

Fourthly, the definition reminds us that the memory trace is not just mere residue, or after-effect of a past experience, not just an incomplete record of what was. It is also a recipe, or a prescription, for the future. However, as it is usually only an impoverished record, it is also only an unreliable guide to what will happen in the future. What actually happens—what kind of a future experience it enables—depends not only on its properties at the time of attempted retrieval but also on the conditions prevailing at the time of retrieval. This was one of the deep insights of Richard Semon, the early and unappreciated prophet of memory, an insight that the subsequent experimental work by others has more than vindicated (Schacter 2001a).

# Brain and mind

Fifthly, the definition of memory trace is given in terms of something happening in a physical object, the brain, and in that sense it is tempting to think of the trace also as a physical object, or physical entity. It is not. A definition of memory trace in terms of physical changes in the brain does not make the memory trace a physical entity itself. Memory trace is a change, and change is not an entity. It is a relationship (difference) between two things that are physical objects, the brain at time 1 (‘immediately before’ the experience) and time 2 (‘immediately after’ the experience), but the relationship itself is not a physical object. To illustrate: think of drawing a straight line. After you have drawn it, the line exists physically with all its properties. Then you grab the pencil again and make the same line a bit longer. After you have done it, the ‘second’ line exists physically with all its properties. The difference between the two does not exist anywhere other than in your mind. You can of course arrange for a comparison between them, by making a copy of the first line while it exists, and you can note their differences in length. However, the difference you note exists only in your mind, in physical reality there exist only two lines.

Sixthly, because there is no object or entity in the brain that can be said to be a particular memory trace (i.e. to represent a particular experience), it is in principle not possible to observe it as such, to identify it as such or to determine its properties as such. Memory trace, as defined, is something that makes something else possible, if and only if some other conditions are fulfilled at time 2. Since time 2 has not yet arrived, the relationship between the memory trace and the experience it yields is indeterminate. The situation is not unlike that of the relationship between an elementary particle’s location and its momentum as described by Heisenberg in the 1920s.

(p.68) Seventhly, the definition relates ‘mental experience’ to a physical happening (neural change). It is important to note that both experience and neural activity are part of the reality with which cognitive neuroscience deals. However, there is also a difference between them, one that many people get excited about. The difference is that the experience is real (only) from the first-person point of view, whereas the neural events are real, at least in principle and increasingly so in actual scientific practice, from the third-person point of view. The ‘subjectivity’ of first-person experience has traditionally been held to be an obstacle to the study of such experience. That is not a problem, however. The only ‘problem’ is that first-person experiences are not directly observable. Yet, like countless other things in the universe that scientists study, they are indirectly observable. In psychology, indirect observation has been successfully applied since day one. The most respectable and oldest part of experimental psychology, psychophysics, is all about indirect observation of ‘subjective’ experiences, and so is much of the cognitive study of memory. The important criteria are not ‘objectivity’ and ‘subjectivity’. The important criteria are the possibility of empirical validation and rejection of hypotheses, the reliability and replicability of empirical findings, and the coherence of the story (theory) that relates facts to one another in a way that would not be possible without science.

Eighthly, the definition reflects the basic assumption that memory trace, like any other concept, can be fully understood only in relation to other concepts. Here, the other concepts are experiencing something at time 1 (dealt with under the concept of Encoding in Section 6 of this volume), and experiencing something at time 2 (dealt with under the concepts of Retrieval and Remembering in Sections 10 and 11, respectively, of this volume).

# Epilogue

In Elements of Episodic Memory (Tulving 1983) I talked about memory traces synonymously with engrams. I defined the engram in terms of the difference in the state of the memory system before and after the encoding of the event, as well as by its position and relationship to other hypothetical concepts in GAPS, the general abstract processing system, which I proposed as the conceptual framework for studying episodic memory. The conceptualization of memory trace offered here is not greatly different from that of almost a quarter century ago. The main difference has to do with the ‘locality’ of engrams. Then they resided in the ‘memory system’; now they have found a home in the brain. I think this is progress.

# (p.69) 13 Integrative comments. Coding and representation: On appealing beliefs and paucity of data

Since the concept of memory representations is closely related to that of information, it is tempting to think of it in terms of the formal information theory developed by Shannon (1948). This theory considers the transmission of messages from sender to receiver via information channels. As a first step, the messages have to be encoded to acquire a form (representation) that can be transmitted via the channel (e.g. the famous Morse code, with English characters encoded as sequences of dots and dashes for subsequent transmission by current pulses via electric wires) and ultimately restored by the receiver to their original meaning. The theory deals only with quantitative aspects of information, such as the minimal number of binary elements needed to faithfully represent a message, and not with the ‘content’ of the messages or their broader context. The mathematical apparatus of information theory allows one to analyze the optimal encoding that maximizes the speed of transmission and/or minimizes the errors resulting from channel noise.
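To make the quantitative flavor of the theory concrete, here is a minimal sketch (an editorial illustration, not part of the chapter): the Shannon entropy of a message’s empirical symbol distribution gives the minimal average number of binary digits per symbol that any faithful encoding of that source can achieve.

```python
from collections import Counter
from math import ceil, log2

def entropy_bits_per_symbol(message: str) -> float:
    """Shannon entropy of the empirical symbol distribution: a lower
    bound on the average number of bits per symbol for any lossless code."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Four equiprobable symbols require exactly 2 bits per symbol ...
print(entropy_bits_per_symbol("abcd"))  # 2.0

# ... whereas a highly redundant message can, in principle, be encoded far
# more compactly than the naive fixed-length code of ceil(log2(k)) bits
# for k distinct symbols (here: under 1 bit per symbol versus 2).
skewed = "a" * 14 + "b" + "c"
print(entropy_bits_per_symbol(skewed))
print(ceil(log2(len(set(skewed)))))  # fixed-length code: 2 bits
```

Variable-length codes such as Huffman coding approach this entropy bound by assigning shorter codewords to more frequent symbols, much as Morse code gives the frequent letter ‘e’ a single dot.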

In order to relate these ideas to biological memory, we have to define who the sender and the receiver are, what the messages in question are, and how they are represented in the brain. The answer to the first question seems pretty straightforward: both the sender and the receiver are the organism or, more precisely, the brain. However, the receipt of the message, called in this context ‘retrieval’ (see Section 10 of this volume), is delayed in time for a period that can last from seconds to years. As pointed out in the contribution by Alessandro Treves (Chapter 10), this fact by itself does not fundamentally change the mathematical aspects of coding, even though one could argue that the required life span of the particular memory process plays a crucial role in selecting the corresponding representation (e.g. neural activity for short-term versus synaptic efficacies for long-term memory). Moreover, long-lasting (p.70) memory representations could undergo various transformations due to interactions with other information sources, as will be discussed in more detail below.

A more difficult question is what the messages are; in other words, what precisely goes into the memory system of the brain, and what is retrieved from it? It appears to me that the answer to this seemingly innocent question strongly depends on the scientific tradition within which it is addressed. In the behavioral sciences, it is the correct behavior in a certain situation that has to be acquired and eventually remembered. This approach, while most consistent and solid, may nevertheless appear too mechanistic in light of the variety and complexity of memory currently accessible to experimental study (see Chapter 2, this volume, for a more elaborate discussion of this point). I would like to consider briefly the suggestion, given within the tradition of the cognitive sciences and expressed in Chapter 12 by Endel Tulving, that the ‘messages’ to be remembered are the ‘mental experiences’. In other words, when a mental experience occurs, it results in a certain set of neuronal changes in the brain (serving as its representation) that can be retained until the time when the mental experience is re-created, in full or in part.

Some of these representations could themselves depend on previously acquired memories, as discussed in Chapter 11 by Anthony McIntosh. They could also interact with other external and internal sources of information that are constantly being processed in the brain during the life span of the memory. This interaction among the different representations formed over the life of the organism is an extremely important feature of memory, allowing information to be represented and used in a way that is shaped by context, understood in the broadest possible sense. As a result, information that is retrieved always carries with it a train of acquired associations that, as they accumulate, may make the re-created mental experience significantly different from the original one.

The aforementioned notion carries considerable appeal, as it integrates, in an attractive and natural way, our introspective experiences (the mental experiences studied by cognitive psychologists) with the scientific knowledge acquired by generations of brain researchers about neuronal changes and processes. Shall we thus conclude that this definition exhausts the issue of memory representations? As Tulving himself notes, it leaves aside the kinds of learning and memory that do not necessarily involve explicit mental components, such as skill learning.

More fundamentally, however, this seemingly flawless construction may cause some unease among adherents of yet another scientific tradition, that of (p.71) empirical neuroscience. An implicit basic assumption in this tradition is that the functioning of the brain can be completely reduced to the neuronal processes occurring in it, which encompass everything else (mental experience, emotions, self-awareness, etc.). Within the framework of the discussion of coding and representations, one should thus assume that the ‘mental experiences’ that Endel Tulving talks about are themselves ‘represented’ by some neuronal processes (e.g. spatiotemporal activity patterns in certain neuronal populations).

The picture that emerges is thus that of a chain of interacting neuronal transformations and encodings, resulting eventually in representations that can last for the life span of the memory process in question, albeit undergoing continuing context-dependent modifications as discussed above. Some of the initial representations in this chain are discussed in more detail in Chapter 10 of this volume.

For example, think of an object reflecting light that hits the retina, which in turn sends electric impulses via the optic nerve to the brain, where, after some amount of processing, a certain activity pattern emerges that may lead to the mental experience of ‘seeing’ this object. This activity pattern can itself cause other types of neuronal processes (e.g. modifications in neuronal connections) that can be retained for a period of time and subsequently lead to the re-creation of this activity pattern or a modified version of it, which again may or may not lead to a related mental experience. If other objects associated with the original one are observed, their representations can interact with each other, resulting in significant modifications of the activity patterns re-created when the objects are viewed. For example, it can reasonably be assumed that the view of a Chinese character evokes vastly different activation patterns in the brain of a student of Chinese seeing it for the first time and after years of study. In other words, neural representations should not be considered in isolation, but as an interacting system that carries information about single memories, their context and their history.
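The step from a transient activity pattern to lasting synaptic modifications, and back to a re-created pattern at retrieval, can be sketched with a toy model. This is only an editorial illustration of the general idea, in the style of Hebbian/Hopfield associative memory, not a model proposed in this chapter: an activity pattern is stored in outer-product synaptic weights, and a degraded cue later re-creates the full pattern.

```python
import numpy as np

# A hypothetical "activity pattern" of 50 units, each active (+1) or inactive (-1).
rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=50)

# Hebbian "synaptic modification": units co-active in the pattern get positive
# weights, anti-correlated units negative ones; no self-connections.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# A degraded cue at retrieval time: 10 of the 50 units are flipped.
cue = pattern.copy()
flipped = rng.choice(50, size=10, replace=False)
cue[flipped] *= -1

# "Retrieval": a few synchronous threshold updates driven by the stored weights.
state = cue
for _ in range(5):
    state = np.sign(W @ state).astype(int)
    state[state == 0] = 1  # break ties toward "active"

# With a single stored pattern and a cue that still overlaps it, the dynamics
# re-create the original activity pattern exactly.
recovered = np.array_equal(state, pattern)
print(recovered)  # True
```

The design choice mirrors the text: the ‘memory’ here is not the activity pattern itself but the weight changes that allow the pattern to be re-created, and what is re-created depends jointly on those weights and on the cue present at retrieval.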

In this emerging framework, the concept of mental representations, even if useful for characterizing certain types of neuronal processes and relating them to particular cognitive phenomena, is not strictly speaking necessary for defining all the aspects of coding and representation in memory.

For the sake of intellectual clarity, I would like to argue that the above picture, even if implicitly shared by most neuroscientists, is largely based on common beliefs but is not yet strongly supported by solid scientific knowledge, and may not even be free of some fundamental problems. First, despite great efforts, the relationship between neural processes and mental experiences is barely established. The strongest evidence in primates for a causal influence of the neural on the mental comes from experiments on sensory perception, where stimulation of certain brain areas was (p.72) shown to bias the perceptual decisions of monkeys (Cohen and Newsome 2004). In humans, we can cite largely anecdotal evidence, coming from patients undergoing neurosurgery, that stimulating certain brain areas can dramatically lead to the sense of ‘re-living’ particular experiences from their lives (Penfield 1955). There is some evidence pointing to delayed reactivation of some spatiotemporal activation patterns in the hippocampus (Wilson and McNaughton 1994), an area believed to be involved in episodic memory; however, it is not at all clear what role, if any, these reactivations play in memory. Recent functional magnetic resonance imaging (fMRI) studies indicate that areas of the brain that respond to the view of various visual objects activate in a similar pattern when the subject is instructed to imagine seeing those objects (O’Craven and Kanwisher 2000): a promising new research direction, but one that does not directly address the causal relationship between mental and neuronal processes. It therefore appears to me that we are still light years away from any reasonable characterization of the neuronal processes that comes even close in richness to that obtained in the cognitive sciences.

Moreover, from what we currently know after decades of experiments, myriad neuronal changes are occurring in the brain all the time, spanning the large range from the molecular via the cellular and the circuit to the brain-organ and whole-brain level. Some of these changes are probably irrelevant to representing information and are not unique to the brain, for example the molecular processes that guarantee the energetic balance of neurons. Yet others appear to play an important role unique to the brain, for example the electric activity of neurons, synaptic modifications and the formation of new connections. Even if we one day succeed in empirically characterizing which of these neuronal changes are related to mental experiences and their memory representations, some as yet undiscovered theoretical principles will surely still be needed in order to understand the fundamental aspects of this relationship beyond statistical correlations. In the absence of such principles, the best we can now hope for is a ‘translation table’ between neuronal and cognitive processes (Dudai 1992), which by itself will not necessarily give us any deep understanding of the above-mentioned relationships.

We can thus see that attempts to define the notions of coding and representation as they relate to the science of memory, in a way that would be relevant to different scientific traditions, invariably confront us with the most fundamental issues in science, such as the brain–mind relationship. It will be the task of future generations of memory scholars to find out whether a unified definition of memory representations, equally applicable to all the scientific traditions mentioned above, is possible. The history of science (p.73) provides some remarkable examples of concepts that were initially thought of as independent being reduced to one another thanks to progress in the corresponding theories. For example, the development of microscopic statistical physics yielded the simple relationship between the temperature of matter ($T$) and the mean energy of its constituent elements ($\varepsilon$): $\varepsilon = \tfrac{n}{2}kT$, where $n$ is the number of degrees of freedom and $k$ is Boltzmann’s constant. Energy itself was reduced to mass by Einstein in his special theory of relativity via the most famous equation in science: $E = mc^2$. Modern neuroscience has yet to produce equally crucial breakthroughs, but it is still much younger than physics was at the time of these amazing developments. (p.74)