Hierarchical Brain

An explanation of the human brain

First published 1st February 2024. This is version 1.5, published 2nd March 2024.
Three pages are not yet published: sleep, memory and an index.
Copyright © 2024. Email: info@hierarchicalbrain.com

Warning - some readers, particularly those with a religious conviction or without a stable mental disposition, may find the conclusions of this website disturbing.

Model of my world

The end result of the afferent processing of data from the internal and external senses, as well as from inside my brain, is a very large number of symbol schemas, each of which represents a concept, together with the connections between them. These symbols and their connections together form a model of my world in my brain, a model which includes my body and my brain itself.
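
As a purely illustrative sketch, the structure described above - concepts as nodes, connections as weighted links - can be pictured as a graph. The class and names here are hypothetical, chosen only for the example, and are not part of the proposal itself:

```python
from collections import defaultdict

class ConceptGraph:
    """Illustrative sketch: symbol schemas as nodes, connections as weighted edges."""

    def __init__(self):
        self.schemas = {}               # schema name -> dict of attributes
        self.links = defaultdict(dict)  # schema name -> {neighbour: weight}

    def add_schema(self, name, **attributes):
        self.schemas[name] = attributes

    def connect(self, a, b, weight=1.0):
        # Connections are symmetrical in this sketch.
        self.links[a][b] = weight
        self.links[b][a] = weight

    def neighbours(self, name):
        return sorted(self.links[name])

# A toy fragment of a "model of the world".
world = ConceptGraph()
world.add_schema("cup", kind="object")
world.add_schema("handle", kind="part")
world.add_schema("drinking", kind="action")
world.connect("cup", "handle")
world.connect("cup", "drinking", weight=0.5)
```

The point of the sketch is only that a model of the world needs no central store of "meaning": each concept is defined by its attributes and by what it is connected to.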

The model has an emergent architecture that is mathematically fractal and chaotic. This means that the behaviour of the brain is determinate (the future follows from the present state, so it can in principle be predicted) but non-deterministic in practice: tiny uncertainties in the present state grow so rapidly that the future cannot actually be predicted.
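
Determinate-but-unpredictable is exactly the defining property of mathematical chaos. A standard textbook illustration of the principle (not a model of the brain) is the logistic map, in which two trajectories that start almost identically soon diverge completely:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x); chaotic for r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

a = trajectory(0.2, 60)
b = trajectory(0.2 + 1e-10, 60)  # identical rule, starting point shifted by 1e-10

# The rule is fully determinate, yet after a few dozen steps the two
# futures bear no resemblance to each other: prediction fails in practice.
divergence = [abs(x - y) for x, y in zip(a, b)]
```

Because any measurement of a starting state carries some error, such a system's future is fixed in principle but unknowable in practice, which is the sense intended above.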

This concept is level 5 in my proposed hierarchical levels and the highest of the four afferent processing levels. All higher-level functions in levels 6 and 7 are dependent on this model and its contents.

Contents of this page
Overview - a brief summary of my views.
The science - a summary of how other writers have referred to a model of the world in the brain.
Details - details of my proposals.
References - references and footnotes.

Overview

The science

Details of my proposals


References

For information on references, see structure of this website - references.

  1. ^ The brain from inside out - Gyorgy Buzsaki 2019 Oxford University Press
    See also The brain from inside out
    doi: 10.1093/oso/9780190905385.001.0001 or see GoogleScholar.
    Page 83, second paragraph of chapter summary: “Thus the brain builds a simplified, customized model of the world by encoding the relationships of events to each other. These aspects of model building are uniquely different from brain to brain.”
  2. ^ Treatise on Physiological Optics, Volume III - Hermann von Helmholtz 1867, translated from German by James P. C. Southall 1925
    downloadable here.
    Page 23: “The idea of a single individual table which I carry in my mind is correct and exact, provided I can deduce from it correctly the precise sensations I shall have when my eye and my hand are brought into this or that definite relation with respect to the table. Any other sort of similarity between such an idea and the body about which the idea exists, I do not know how to conceive. One is the mental symbol of the other.”
  3. ^ Ibid. Treatise on Physiological Optics, Volume III
    Examples of illusions include:
    Pages 3-4 - phantom limb: “The most remarkable and astonishing cases of illusions of this sort are those in which the peripheral area of this particular portion of the skin is actually no longer in existence, as, for example, in case of a person whose leg has been amputated. For a long time after the operation the patient frequently imagines he has vivid sensations in the foot that has been severed. He feels exactly the places that ache on one toe or the other.”;
    Page 12 - a distant light: “when a distant light, for example, is taken for a near one, or vice versa. Suddenly it dawns on us what it is, and immediately, under the influence of the correct comprehension, the correct perceptual image also is developed in its full intensity. Then we are unable to revert to the previous imperfect apperception.”;
    Pages 192-193 - horizontal and vertical stripes: “There are numerous illustrations of the same effect in everyday life. An empty room looks smaller than one that is furnished; and a wall covered with a paper-pattern looks larger than one painted uniformly in one colour. Ladies frocks with cross stripes on them make the figure look taller.”;
    Pages 195-196 - Hering illusion and Zollner illusion;
    Page 283 and pages 291-2 - the moon on the horizon.
  4. ^ Ibid. Treatise on Physiological Optics, Volume III
    Page 31: “We explain the table as having existence independent of our observation, because at any moment we like, simply by assuming the proper position with respect to it, we can observe it. The essential thing in this process is just this principle of experimentation. Spontaneously and by our own power, we vary some of the conditions under which the object has been perceived. We know that the changes thus produced in the way that objects look depend solely on the movements we have executed. ... In fact we see children also experimenting with objects in this way. They turn them constantly round and round, and touch them with the hands and the mouth, doing the same things over and over again day after day with the same objects, until their forms are impressed on them; in other words, until they get the various visual and tactile impressions made by observing and feeling the same object on various sides.”
  5. ^ Perceptual illusions and brain models - Gregory 1968
    doi: 10.1098/rspb.1968.0071 downloadable here or see GoogleScholar.
    (All papers of Richard Gregory are available at Richard Gregory - papers)
    Page 6, from sixth paragraph of left-hand column: “Perception seems, then, to be a matter of 'looking up' stored information of objects, and how they behave in various situations. Such systems have great advantages. ... Systems which control their output directly from currently available input information have serious limitations. In biological terms, these would be essentially reflex systems. Some of the advantages of using input information to select stored data for controlling behaviour, in situations which are not unique to the system, are as follows:
    1. In typical situations they can achieve high performance with limited information transmission rate. It is estimated that human transmission rate is only about 15 bits/second. They gain results because perception of objects - which are redundant - requires identification of only certain key features of each object.
    2. They are essentially predictive. In typical circumstances, reaction-time is cut to zero.
    3. They can continue to function in the temporary absence of input; this increases reliability and allows trial selection of alternative inputs.
    4. They can function appropriately to object-characteristics which are not signalled directly to the sensory system. This is generally true of vision, for the image is trivial unless used to 'read' non-optical characteristics of objects.
    5. They give effective gain in signal/noise ratio, since not all aspects of the model have to be separately selected on the available data, when the model has redundancy. Provided the model is appropriate, very little input information can serve to give adequate perception and control.
    There is, however, one disadvantage of 'internal model' look-up systems, which appears inevitably when the selected stored data are out of date or otherwise inappropriate. We may with some confidence attribute perceptual illusions to selection of an inappropriate model, or to mis-scaling of the most appropriate available model.”
  6. ^ The Nature of Explanation - Kenneth Craik Cambridge University Press 1943 or see GoogleScholar.
    See also The Nature of Explanation (a review).
    In chapter 5 entitled “Hypothesis on the nature of thought”, page 51, second paragraph, to page 52: “By a model we thus mean any physical or chemical system, which has a similar relation-structure to that of the process it imitates. By 'relation-structure' I do not mean some obscure non-physical entity which attends the model, but the fact that it is a physical working model which works in the same way as the process it parallels, in the aspects under consideration at any moment. Thus, the model need not resemble the real object pictorially; Kelvin’s tide predictor, which consists of a number of pulleys on levers, does not resemble a tide in appearance, but it works in the same way in certain essential respects - it combines oscillations of various frequencies so as to produce an oscillation which closely resembles in amplitude at each moment the variation in tide level at any place.”
    Page 57, fifth paragraph: “My hypothesis then is that thought models, or parallels, reality - that its essential feature is not 'the mind', 'the self', 'sense-data' nor propositions but symbolism, and that this symbolism is largely of the same kind as that which is familiar to us in mechanical devices which aid thought and calculation.”
    Page 61, second line: “If the organism carries a 'small-scale model' of external reality and of its own possible actions within its head, it is able to try out various alternatives, conclude which is the best of them, react to future situations before they arise, utilise the knowledge of past events in dealing with the present and future, and in every way to react in a much fuller, safer, and more competent manner to the emergencies which face it. Most of the greatest advances of modern technology have been instruments which extended the scope of our sense-organs, our brains or our limbs. Such are telescopes and microscopes, wireless, calculating machines, typewriters, motor cars, ships and aeroplanes. Is it not possible, therefore, that our brains themselves utilise comparable mechanisms to achieve the same ends and that these mechanisms can parallel phenomena in the external world as a calculating machine can parallel the development of strains in a bridge?”
  7. ^ Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness - Philip Johnson-Laird Cambridge University Press 1983
    This book has some helpful and prescient statements about the nature of mental models and how we can understand them, but also a lot of less useful detail on possible methods of processing logic and language. Its general conclusion is that brains do not contain logic or language processing modules, but instead build models of the world and manipulate them to emulate the world. It proposes that there are different types of representation for language, for objects in the world and for images, and that many common relational concepts are innate. It is non-committal on whether meaning is in the mind or resides in the world, and has no useful information on how meaning is represented.
    Page x (part of prologue), end of second paragraph to beginning of third: “...human beings construct mental models of their world... This idea is not new. Many years ago Kenneth Craik (1943) [see reference above] proposed that thinking is the manipulation of internal representations of the world.”
    Page 474, first paragraph: “Moreover, models need be neither complete nor wholly accurate to be useful; and what our limited knowledge of our own operating system gives us is a sense of self-identity, continuity, and individuality.”
    Page 4, second paragraph: “There are no complete mental models for any empirical phenomena. What must be emphasised, however, is that one does not necessarily increase the usefulness of a model by adding information to it beyond a certain level.”
  8. ^ Ibid. Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness
    Page 402 under the heading “How do mental models represent the world” third paragraph and last paragraph: “You may say that you perceive the world directly, but in fact what you experience depends on a model of the world. ...In short, our view of the world is causally dependent both on the way the world is and on the way we are. There is an obvious but important corollary: all our knowledge of the world depends on our ability to construct models of it.”
  9. ^ Ibid. Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness
    Page 10, under the heading “Mental models and criteria for explanation”, first and second paragraphs: “At the first level, human beings understand the world by constructing working models of it in their minds. Since these models are incomplete, they are simpler than the entities they represent. In consequence, models contain elements that are merely imitations of reality - there is no working model of how their counterparts in the world operate, but only procedures that mimic their behaviour. ... At the second level, since cognitive scientists aim to understand the human mind, they, too, must construct a working model. It happens to be of a device for constructing working models. Like other models, however, its utility is not improved by embodying more than a certain amount of knowledge. The crucial aspect of mental processes is their functional organization, and hence a theoretical model of the mind need concern only such matters.”
    Page 470, first paragraph under the heading “Self-awareness in automata that understand themselves”: “...the mind must be more complicated than any theory of it: however complex the theory, a device that invented it must be still more complex. Obviously, cognitive scientists aim to understand the mind - to have a mental model of a device that makes mental models. There is a striking similarity between this goal and the achievement of self-awareness: the mind is aware of the mind. It understands itself at least to some extent, and it understands that it understands itself.”
  10. ^ Every good regulator of a system must be a model of that system - Conant and Ashby 1970
    doi: 10.1080/00207727008920220 downloadable here or see GoogleScholar.
    (The version of this paper that I downloaded has presumably been scanned and processed with Optical Character Recognition software, not entirely successfully, so it contains some odd misprints.)
    This paper contains a formal proof that every regulator of a complex system will automatically become a model of that system. Final paragraph under the heading “Discussion” on page 10:
    “To those who study the brain, the theorem [that has been proved in this paper] founds a 'theoretical neurology'. For centuries, the study of the brain has been guided by the idea that as the brain is the organ of thinking, whatever it does is right. But this was the view held two centuries ago about the human heart as a pump; today’s hydraulic engineers know too much about pumping to follow the heart’s method slavishly: they know what the heart ought to do, and they measure its efficiency. The developing knowledge of regulation, information processing, and control is building similar criteria for the brain. Now that we know that any regulator (if it conforms to the qualifications given) must model what it regulates, we can proceed to measure how efficiently the brain carries out this process. There can no longer be question about whether the brain models its environment: it must.”
  11. ^ The theory of constructed emotion: an active inference account of interoception and categorization - Barrett 2017
    doi: 10.1093/scan/nsw154 downloadable here or see GoogleScholar.
    Page 5, last paragraph, under the heading “How does a brain perform allostasis?”: “For a brain to effectively regulate its body in the world, it runs an internal model of that body in the world.”
    And the note relating to this sentence at the bottom of the page: “There is a well-known principle of cybernetics: anything that regulates (i.e. acts on) a system must contain an 'internal model' of that system.”
    Page 6, second paragraph: “All animals run an internal model of their world...”
    Page 11, second paragraph: “A brain implements an internal model of the world with concepts because it is metabolically efficient to do so.”
  12. ^ Do we have an internal model of the outside world? - Land 2014
    doi: 10.1098/rstb.2013.0045 downloadable here or see GoogleScholar.
    This paper argues that the brain must contain a model of the surroundings, and of parts of the body, which is updated with every move we make and which is needed for action. Start of abstract: “Our phenomenal world remains stationary in spite of movements of the eyes, head and body. In addition, we can point or turn to objects in the surroundings whether or not they are in the field of view. In this review, I argue that these two features of experience and behaviour are related. The ability to interact with objects we cannot see implies an internal memory model of the surroundings, available to the motor system. And, because we maintain this ability when we move around, the model must be updated, so that the locations of object memories change continuously to provide accurate directional information. The model thus contains an internal representation of both the surroundings and the motions of the head and body...”
  13. ^ Consciousness is Data Compression - Maguire and Maguire 2010
    downloadable here or see GoogleScholar.
    Although the title of this paper is obviously an overstatement, it contains a very interesting discussion about compression, how a model of the world is built, and why that model must include the self; hence it probably is true that consciousness requires data compression.
    Page 749, third paragraph:
    “Algorithmic information theory reveals that compression is the only systematic means for generating predictions based on prior observations. All successful predictive systems, including animals and humans, are approximations of algorithmic induction.”
    Page 750, first and second paragraphs: “...the compression carried out by the brain has one additional ingredient which sets it apart from simpler compression systems: it compresses its observations of its own behaviour. The capacity for a system to model its own actions necessarily involves the identification of itself as an entity separate to its surroundings. As a result, self-compression entails self-awareness.
    The human brain is a self-representational structure which seeks to understand its own behaviour. For example, people model their own selves in order to more accurately predict how they are going to feel and react in different situations. They build up internal models about who they think they are and use these models to inform their decisions. In addition, the human brain compresses the observed behaviour of other organisms. When we watch other individuals, we realize that there is a great deal of redundancy in their activity: rather than simply cataloguing and memorizing every action they perform, we can instead posit the more succinct hypothesis of a concise 'self' which motivates these actions. By representing this self we can then make accurate predictions as to how the people around us will behave. The idea that the actions of an organism are controlled by a singular self is merely a theoretical model which eliminates redundancy in the observed behaviour of that organism. People apply this same process to themselves: what you consider to be the essence of you is simply a model which compresses your observations of your own past behaviour.”
  14. ^ Fractal and chaotic dynamics in nervous systems - King 1991
    doi: 10.1016/0301-0082(91)90003-J downloadable here or see GoogleScholar.
    (The page numbers in the contents do not match the page numbers in this very technical paper.)
    Second sentence of summary, page 30:
    “The relation of chaos to fractal processes in the brain from the neurosystems level down to the molecule has been explored. It is found that chaos appears to play an integral, though not necessarily exclusive role in function at all levels of organization from the neurosystems to the molecular and quantum levels.”
  15. ^ Free Will, Physics, Biology, and the Brain - Koch 2009
    Chapter 2 in Downward Causation and the Neurobiology of Free Will ed. Murphy, Ellis and O'Connor pub. Springer 2009
    doi: 10.1007/978-3-642-03205-9_2 downloadable here or see GoogleScholar.
    Page 36, last sentence, to page 37: “...astronomers cannot be certain whether Pluto will be on this side of the sun (relative to Earth’s position) or the other side ten million years from now! No matter how small the residue of our measurement error, it will never vanish and therefore will always limit how far we can peer into the future. If this uncertainty holds for the position of a planet-sized body in deep space, what does this portend for the predictability of a single synapse deeply embedded inside a brain, let alone the action of a nervous system of millions or billions of nerve cells, each one encrusted with thousands of synapses? Given the nonlinear and cooperative nature of such neural networks, their behavior is chaotic to a high degree. ... Any organelle, such as the nucleus of a cell or a synapse, is made out of a fantastically large number of molecules suspended in watery solution. These molecules incessantly jostle and move about in a way that can’t be precisely captured; this is called noise. Physicists are unable to track individual molecules. To tame this noise, they borrow techniques from statistics and from probability theory, calculating the average kinetic energy of the molecules or the average time between synaptic release and so on.”
  16. ^ Is there chaos in the brain? II. Experimental evidence and related models - Korn and Faure 2003
    doi: 10.1016/j.crvi.2003.09.011 downloadable here or see GoogleScholar.
    Abstract, end of second paragraph: “Here we present the data and main arguments that support the existence of chaos at all levels from the simplest to the most complex forms of organization of the nervous system.”
  17. ^ Ibid. Is there chaos in the brain? II. Experimental evidence and related models
    Page 824, end of first paragraph onwards: “Thus another classical paradigm, called the 'winnerless competition model' (WLC), is advocated by G. Laurent and his collaborators. Like other nonlinear models, WLC is based on simple nonlinear equations of the Lotka-Volterra type where (i) the functional unit is the neuron or a small group of synchronized cells and (ii) the neurons interact through inhibitory connections. Several dynamics can then arise, depending for a large part on the nature of this coupling and the strength of the inhibitory connections. If the connections are symmetrical, and in some conditions of coupling, the system behaves as a Hopfield network or it has only one favored attractor if all the neurons are active. If the connections are only partly asymmetrical, one attractor (which often corresponds to the activity of one neuron) will emerge in a 'winner-takes-all' type of circuit. Finally a 'weakly chaotic' WLC arises when all the inhibitory connections are nonsymmetrical; then, the system, with N competitive neurons, has different heteroclinic orbits in the phase space. In this case, and for various values of the inhibitory strengths, the system’s activity 'bounces off' between groups of neurons: if the stimulus is changed, another orbit in the vicinity of the heteroclinic orbit becomes a global attractor.”
  18. ^ Broadband Criticality of Human Brain Network Synchronization - Kitzbichler, Smith, Christensen and Bullmore 2009
    doi: 10.1371/journal.pcbi.1000314 downloadable here or see GoogleScholar.
    Towards end of abstract: “These results strongly suggest that human brain functional systems exist in an endogenous state of dynamical criticality, characterized by a greater than random probability of both prolonged periods of phase-locking and occurrence of large rapid changes in the state of global synchronization, analogous to the neuronal 'avalanches' previously described in cellular systems.”
  19. ^ Daily Oscillation of the Excitation-Inhibition Balance in Visual Cortical Circuits - Bridi, Zong, Min, Luo, Tran, Qiu, Severin, Zhang, Wang, Zhu, He and Kirkwood 2020
    doi: 10.1016/j.neuron.2019.11.011 downloadable here or see GoogleScholar.
    Summary, page 621: “A balance between synaptic excitation and inhibition (E/I balance) maintained within a narrow window is widely regarded to be crucial for cortical processing. In line with this idea, the E/I balance is reportedly comparable across neighboring neurons, behavioral states, and developmental stages and altered in many neurological disorders. Motivated by these ideas, we examined whether synaptic inhibition changes over the 24-h day to compensate for the well-documented sleep-dependent changes in synaptic excitation. We found that, in pyramidal cells of visual and prefrontal cortices and hippocampal CA1, synaptic inhibition also changes over the 24-h light/dark cycle but, surprisingly, in the opposite direction of synaptic excitation. Inhibition is upregulated in the visual cortex during the light phase in a sleep-dependent manner. In the visual cortex, these changes in the E/I balance occurred in feedback, but not feedforward, circuits. These observations open new and interesting questions on the function and regulation of the E/I balance.”
  20. ^ Winnerless competition in clustered balanced networks: inhibitory assemblies do the trick - Rost, Deger and Nawrot 2017
    doi: 10.1007/s00422-017-0737-7 downloadable here or see GoogleScholar.
    Beginning of abstract: “Balanced networks are a frequently employed basic model for neuronal networks in the mammalian neocortex. Large numbers of excitatory and inhibitory neurons are recurrently connected so that the numerous positive and negative inputs that each neuron receives cancel out on average. Neuronal firing is therefore driven by fluctuations in the input and resembles the irregular and asynchronous activity observed in cortical in vivo data. Recently, the balanced network model has been extended to accommodate clusters of strongly interconnected excitatory neurons in order to explain persistent activity in working memory-related tasks. This clustered topology introduces multistability and winnerless competition between attractors.”
    Beginning of Introduction: “Neural responses in the mammalian neocortex are notoriously variable. Even when identical sensory stimuli are provided and animal behaviour is consistent across repetitions of experimental tasks, the neuronal responses look very different each time. This variability is found on a wide range of temporal and spatial scales. To this day, it remains a matter of discussion how the brain can cope with this variability or whether it might even be an essential part of neural computation. It has been shown that in network models of randomly connected excitatory and inhibitory neurons a condition exists in which these neurons fire in a chaotic manner at low firing rates. This condition was termed the Balanced State and occurs if excitation and inhibition to each cell cancel each other on average so that spike emission is triggered by fluctuations in the input current rather than by elevation of the mean input current.”
    Page 95, “5. Conclusions and prospects”, second sentence: “We have shown that multistability with moderate firing rates can be achieved in balanced networks with joint excitatory and inhibitory clusters. This architecture allows for robust winnerless competition dynamics without rate saturation over a wide range of cluster strengths.”
  21. ^ Epilepsy - when chaos fails - Sackellares, Iasemidis, Shiau, Gilmore and Roper 2000
    doi: 10.1142/9789812793782_0010 downloadable here or see GoogleScholar.
    Abstract, towards end of first page, and second page: “We have postulated that epileptic brains, being chaotic nonlinear systems, repeatedly make the abrupt transitions into and out of the ictal state [episodic paroxysmal electrical discharges] because the epileptogenic focus drives them into self-organizing phase transitions from chaos to order. ... an epileptic seizure occurs when spatiotemporal chaos in the brain fails...”

Page last uploaded Sat Mar 2 02:55:42 2024 MST