Integrated information theory

[Figure: Φ (phi), the symbol used for integrated information]

Integrated information theory (IIT) attempts to provide a framework capable of explaining why some physical systems (such as human brains) are conscious,[1] why they feel the particular way they do in particular states (e.g. why our visual field appears extended when we gaze out at the night sky),[2] and what it would take for other physical systems to be conscious (are dogs conscious? what about unborn babies? or computers?).[3] In principle, once the theory is mature and has been tested extensively in controlled conditions, the IIT framework may be capable of providing a concrete inference about whether any physical system is conscious, to what degree it is conscious, and what particular experience it is having. In IIT, a system's consciousness (what it is like subjectively) is conjectured to be identical to its causal properties (what it is like objectively). Therefore it should be possible to account for the conscious experience of a physical system by unfolding its complete causal powers (see Central identity).[4]

IIT was proposed by neuroscientist Giulio Tononi in 2004.[5] The latest version of the theory, labeled IIT 3.0, was published in 2014.[6][1] However, the theory is still in development, as is evident from the later publications improving on the formalism presented in IIT 3.0.[7][2][8][9]

Overview

Relationship to the "hard problem of consciousness"

David Chalmers has argued that any attempt to explain consciousness in purely physical terms (i.e. to start with the laws of physics as they are currently formulated and derive the necessary and inevitable existence of consciousness) eventually runs into the so-called "hard problem". Rather than try to start from physical principles and arrive at consciousness, IIT "starts with consciousness" (accepts the existence of our own consciousness as certain) and reasons about the properties that a postulated physical substrate would need to have in order to account for it. The ability to perform this jump from phenomenology to mechanism rests on IIT's assumption that if the formal properties of a conscious experience can be fully accounted for by an underlying physical system, then the properties of the physical system must be constrained by the properties of the experience. The constraints a physical system must satisfy for consciousness to exist are unknown and may lie on a spectrum, as suggested by studies of split-brain patients and of conscious patients missing large amounts of brain matter.

Specifically, IIT moves from phenomenology to mechanism by attempting to identify the essential properties of conscious experience (dubbed "axioms") and, from there, the essential properties of conscious physical systems (dubbed "postulates").

Axioms: essential properties of experience

[Figure: Axioms and postulates of integrated information theory]

The axioms are intended to capture the essential aspects of every conscious experience. Every axiom should apply to every possible experience.

The wording of the axioms has changed slightly as the theory has developed, and the most recent and complete statement of the axioms is as follows:

  • Intrinsic existence: Consciousness exists: each experience is actual—indeed, that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely. Moreover, my experience exists from its own intrinsic perspective, independent of external observers (it is intrinsically real or actual).
  • Composition: Consciousness is structured: each experience is composed of multiple phenomenological distinctions, elementary or higher-order. For example, within one experience I may distinguish a book, a blue color, a blue book, the left side, a blue book on the left, and so on.
  • Information: Consciousness is specific: each experience is the particular way it is—being composed of a specific set of specific phenomenal distinctions—thereby differing from other possible experiences (differentiation). For example, an experience may include phenomenal distinctions specifying a large number of spatial locations, several positive concepts, such as a bedroom (as opposed to no bedroom), a bed (as opposed to no bed), a book (as opposed to no book), a blue color (as opposed to no blue), higher-order "bindings" of first-order distinctions, such as a blue book (as opposed to no blue book), as well as many negative concepts, such as no bird (as opposed to a bird), no bicycle (as opposed to a bicycle), no bush (as opposed to a bush), and so on. Similarly, an experience of pure darkness and silence is the particular way it is—it has the specific quality it has (no bedroom, no bed, no book, no blue, nor any other object, color, sound, thought, and so on). And being that way, it necessarily differs from a large number of alternative experiences I could have had but I am not actually having.
  • Integration: Consciousness is unified: each experience is irreducible and cannot be subdivided into non-interdependent, disjoint subsets of phenomenal distinctions. Thus, I experience a whole visual scene, not the left side of the visual field independent of the right side (and vice versa). For example, the experience of seeing the word "BECAUSE" written in the middle of a blank page is not reducible to an experience of seeing "BE" on the left plus an experience of seeing "CAUSE" on the right. Similarly, seeing a blue book is not reducible to seeing a book without the color blue, plus the color blue without the book.
  • Exclusion: Consciousness is definite, in content and spatio-temporal grain: each experience has the set of phenomenal distinctions it has, neither less (a subset) nor more (a superset), and it flows at the speed it flows, neither faster nor slower. For example, the experience I am having is of seeing a body on a bed in a bedroom, a bookcase with books, one of which is a blue book, but I am not having an experience with less content—say, one lacking the phenomenal distinction blue/not blue, or colored/not colored; or with more content—say, one endowed with the additional phenomenal distinction high/low blood pressure. Moreover, my experience flows at a particular speed—each experience encompassing say a hundred milliseconds or so—but I am not having an experience that encompasses just a few milliseconds or instead minutes or hours.
    — Dr. Giulio Tononi, Integrated information theory, Scholarpedia[1]

Postulates: properties required of the physical substrate

The axioms describe regularities in conscious experience, and IIT seeks to explain these regularities. What could account for the fact that every experience exists, is structured, is differentiated, is unified, and is definite? IIT argues that the existence of an underlying causal system with these same properties offers the most parsimonious explanation. Thus a physical system, if conscious, is so by virtue of its causal properties.

The properties required of a conscious physical substrate are called the "postulates," since the existence of the physical substrate is itself only postulated (remember, IIT maintains that the only thing one can be sure of is the existence of one's own consciousness). In what follows, a "physical system" is taken to be a set of elements, each with two or more internal states, inputs that influence that state, and outputs that are influenced by that state (neurons or logic gates are the natural examples). Given this definition of "physical system", the postulates are:

  • Intrinsic existence: To account for the intrinsic existence of experience, a system constituted of elements in a state must exist intrinsically (be actual): specifically, in order to exist, it must have cause-effect power, as there is no point in assuming that something exists if nothing can make a difference to it, or if it cannot make a difference to anything. Moreover, to exist from its own intrinsic perspective, independent of external observers, a system of elements in a state must have cause-effect power upon itself, independent of extrinsic factors. Cause-effect power can be established by considering a cause-effect space with an axis for every possible state of the system in the past (causes) and future (effects). Within this space, it is enough to show that an "intervention" that sets the system in some initial state (cause), keeping the state of the elements outside the system fixed (background conditions), can lead with probability different from chance to its present state; conversely, setting the system to its present state leads with probability above chance to some other state (effect).
  • Composition: The system must be structured: subsets of the elements constituting the system, composed in various combinations, also have cause-effect power within the system. Thus, if a system ABC is constituted of elements A, B, and C, any subset of elements (its power set), including A, B, C, AB, AC, BC, as well as the entire system, ABC, can compose a mechanism having cause-effect power. Composition allows for elementary (first-order) elements to form distinct higher-order mechanisms, and for multiple mechanisms to form a structure.
  • Information: The system must specify a cause-effect structure that is the particular way it is: a specific set of specific cause-effect repertoires—thereby differing from other possible ones (differentiation). A cause-effect repertoire characterizes in full the cause-effect power of a mechanism within a system by making explicit all its cause-effect properties. It can be determined by perturbing the system in all possible ways to assess how a mechanism in its present state makes a difference to the probability of the past and future states of the system. Together, the cause-effect repertoires specified by each composition of elements within a system specify a cause-effect structure. ...
  • Integration: The cause-effect structure specified by the system must be unified: it must be intrinsically irreducible to that specified by non-interdependent sub-systems obtained by unidirectional partitions. Partitions are taken unidirectionally to ensure that cause-effect power is intrinsically irreducible—from the system's intrinsic perspective—which implies that every part of the system must be able to both affect and be affected by the rest of the system. Intrinsic irreducibility can be measured as integrated information ("big phi" or Φ, a non-negative number), which quantifies to what extent the cause-effect structure specified by a system's elements changes if the system is partitioned (cut or reduced) along its minimum partition (the one that makes the least difference). By contrast, if a partition of the system makes no difference to its cause-effect structure, then the whole is reducible to those parts. If a whole has no cause-effect power above and beyond its parts, then there is no point in assuming that the whole exists in and of itself: thus, having irreducible cause-effect power is a further prerequisite for existence. This postulate also applies to individual mechanisms: a subset of elements can contribute a specific aspect of experience only if their combined cause-effect repertoire is irreducible by a minimum partition of the mechanism ("small phi" or φ).
  • Exclusion: The cause-effect structure specified by the system must be definite: it is specified over a single set of elements—neither less nor more—the one over which it is maximally irreducible from its intrinsic perspective (Φ^max), thus laying maximal claim to intrinsic existence. ... With respect to causation, this has the consequence that the "winning" cause-effect structure excludes alternative cause-effect structures specified over overlapping elements, otherwise there would be causal overdetermination. ... The exclusion postulate can be said to enforce Occam's razor (entities should not be multiplied beyond necessity): it is more parsimonious to postulate the existence of a single cause-effect structure over a system of elements—the one that is maximally irreducible from the system's intrinsic perspective—than a multitude of overlapping cause-effect structures whose existence would make no further difference. The exclusion postulate also applies to individual mechanisms: a subset of elements in a state specifies the cause-effect repertoire that is maximally irreducible (MICE) within the system (φ^max), called a core concept, or concept for short. Again, it cannot additionally specify a cause-effect repertoire overlapping over the same elements, because otherwise the difference a mechanism makes would be counted multiple times. ... Finally, the exclusion postulate also applies to spatio-temporal grains, implying that a conceptual structure is specified over a definite grain size in space (either quarks, atoms, neurons, neuronal groups, brain areas, and so on) and time (either microseconds, milliseconds, seconds, minutes, and so on), the one at which Φ reaches a maximum. ... Once more, this implies that a mechanism cannot specify a cause-effect repertoire at a particular temporal grain, and additional effects at a finer or coarser grain, otherwise the differences a mechanism makes would be counted multiple times.
    — Dr. Giulio Tononi, Integrated information theory, Scholarpedia[1]
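The first postulate's notion of cause-effect power can be made concrete with a small computational illustration. The Python sketch below (not drawn from the cited sources; the two-element system and its update rule are invented for illustration) intervenes on a tiny deterministic system and checks that setting its state changes the probability of its next state relative to chance:

```python
import itertools

# A toy "physical system" in IIT's sense: two binary elements that
# swap states on each update (A copies B's previous state and vice versa).
def update(state):
    a, b = state
    return (b, a)

# Intervention: set the system into each possible state and observe
# the resulting distribution over next states.
for s in itertools.product([0, 1], repeat=2):
    nxt = update(s)
    # The system is deterministic, so p(next = nxt | do(s)) = 1,
    # far above the chance level of 1/4 over the four possible states:
    # the system has cause-effect power upon itself.
    print(f"do{s} -> {nxt} with p = 1.0 (chance = 0.25)")
```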

Mathematics: formalization of the postulates

For a complete and thorough account of the mathematical formalization of IIT, see reference.[6] What follows is intended as a brief summary, adapted from,[10] of the most important quantities involved. Pseudocode for the algorithms used to calculate these quantities can be found at reference.[11] For a visual illustration of the algorithm, see the supplementary material of the paper describing the PyPhi toolbox.[12]

A system refers to a set of elements, each with two or more internal states, inputs that influence that state, and outputs that are influenced by that state. A mechanism refers to a subset of system elements. The mechanism-level quantities below are used to assess the integration of any given mechanism, and the system-level quantities are used to assess the integration of sets of mechanisms ("sets of sets").

In order to apply the IIT formalism to a system, its full transition probability matrix (TPM) must be known. The TPM specifies the probability with which any state of a system transitions to any other system state. Each of the following quantities is calculated in a bottom-up manner from the system's TPM.
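For concreteness, the sketch below builds such a TPM for a small deterministic network of logic gates. The particular OR/AND/XOR wiring is a hypothetical example in the spirit of those used in the IIT literature, not one taken from the cited papers:

```python
import itertools
import numpy as np

# Three binary elements updated in parallel:
# A = OR(B, C), B = AND(A, C), C = XOR(A, B).
def step(a, b, c):
    return (b | c, a & c, a ^ b)

n = 3
states = list(itertools.product([0, 1], repeat=n))

# State-by-state TPM: tpm[i, j] = p(system in state j at t+1 | state i at t).
tpm = np.zeros((2**n, 2**n))
for i, s in enumerate(states):
    j = states.index(step(*s))
    tpm[i, j] = 1.0  # deterministic: all probability mass on one successor

assert np.allclose(tpm.sum(axis=1), 1.0)  # each row is a distribution
```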

Mechanism-level quantities
A cause-effect repertoire CER(m_t) = {p(Z_{t−1} | m_t), p(Z_{t+1} | m_t)} is a set of two probability distributions, describing how the mechanism M in its current state m_t constrains the past and future states of the sets of system elements Z_{t−1} and Z_{t+1}, respectively.

Note that Z_{t−1} may be different from Z_{t+1}, since the elements that a mechanism affects may be different from the elements that affect it.

A partition is a grouping of system elements into two parts, S_1 and S_2, where the connections between the parts are injected with independent noise. For a simple binary element A which outputs to a simple binary element B, injecting the connection A → B with independent noise means that the input value which B receives, 0 or 1, is entirely independent of the actual state of A, thus rendering A causally ineffective with respect to B.
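A minimal sketch of what "injecting independent noise" amounts to, using an invented two-input AND gate as the downstream element: the noised input is replaced by a fair coin flip, which is equivalent to averaging over the upstream element's states:

```python
import numpy as np

# Element B computes AND of its two binary inputs A1 and A2.
def b_next(a1, a2):
    return a1 & a2

# Noising the connection A1 -> B: B receives 0 or 1 with equal probability
# regardless of A1's actual state, i.e. we marginalize over A1.
def p_b_on_when_noised(a2):
    return np.mean([b_next(a1, a2) for a1 in (0, 1)])

print(p_b_on_when_noised(1))  # 0.5: B's output no longer reflects A1
print(p_b_on_when_noised(0))  # 0.0: A2 = 0 forces B off either way
```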

P = (P_c, P_e) denotes a pair of partitions, where P_c is the partition considered when looking at a mechanism's causes and P_e is the partition considered when looking at its effects.

The earth mover's distance EMD(d_1, d_2) is used to measure distances between probability distributions d_1 and d_2. The EMD depends on the user's choice of ground distance between points in the metric space over which the probability distributions are measured, which in IIT is the system's state space. When computing the EMD for a system of simple binary elements, the ground distance between system states is chosen to be their Hamming distance.
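Over binary state spaces the EMD reduces to a small transportation linear program. The following sketch (an illustrative implementation, not code from the IIT toolchain) computes it with SciPy, using the Hamming distance as the ground distance:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def emd_hamming(p, q, n):
    """Earth mover's distance between distributions p and q over the
    2**n binary system states, with Hamming ground distance."""
    states = list(itertools.product([0, 1], repeat=n))
    m = len(states)
    # Ground distance: Hamming distance between pairs of states.
    D = np.array([[sum(a != b for a, b in zip(s, t)) for t in states]
                  for s in states], dtype=float)
    # Transportation LP: minimize sum_ij f_ij * D_ij over flows f >= 0
    # whose row sums equal p and whose column sums equal q.
    A_eq, b_eq = [], []
    for i in range(m):
        row = np.zeros((m, m)); row[i, :] = 1
        A_eq.append(row.ravel()); b_eq.append(p[i])
    for j in range(m):
        col = np.zeros((m, m)); col[:, j] = 1
        A_eq.append(col.ravel()); b_eq.append(q[j])
    res = linprog(D.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None))
    return res.fun

# All mass moves between two states that differ in one bit, so EMD = 1.
p = np.array([1.0, 0.0, 0.0, 0.0])  # state (0, 0) certain
q = np.array([0.0, 1.0, 0.0, 0.0])  # state (0, 1) certain
print(round(emd_hamming(p, q, n=2), 6))  # 1.0
```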
Integrated information φ(m_t, Z_{t±1}, P) measures the irreducibility of a cause-effect repertoire with respect to the partition pair P, obtained by combining the irreducibility of its constituent cause and effect repertoires with respect to the same partitioning.

The irreducibility of the cause repertoire with respect to P_c is given by φ_cause(m_t, Z_{t−1}, P_c) = EMD(p(Z_{t−1} | m_t), p'(Z_{t−1} | m_t)), where p' is the cause repertoire specified under the partition P_c, and similarly for the effect repertoire.

Combined, φ_cause and φ_effect yield the irreducibility of the cause-effect repertoire as a whole: φ(m_t, Z_{t±1}, P) = min(φ_cause, φ_effect).

The minimum-information partition of a mechanism and its purview is given by MIP = arg min_P φ(m_t, Z_{t±1}, P). The minimum-information partition is the partitioning that least affects a cause-effect repertoire. For this reason, it is sometimes called the minimum-difference partition.

Note that the minimum-information "partition", despite its name, is really a pair of partitions. We call these partitions MIP_c and MIP_e.
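Finding a minimum-information partition is, in the basic formalism, a brute-force search: enumerate every bipartition and keep the one with the smallest φ. A schematic Python sketch follows, where the irreducibility measure `phi` is a caller-supplied stand-in rather than a real φ computation:

```python
def bipartitions(elements):
    """Yield each bipartition (part1, part2) of `elements` exactly once,
    with both parts non-empty (element 0 is always placed in part1)."""
    elements = list(elements)
    n = len(elements)
    for mask in range(1, 2 ** (n - 1)):
        part1 = [elements[0]] + [elements[i] for i in range(1, n)
                                 if not (mask >> (i - 1)) & 1]
        part2 = [elements[i] for i in range(1, n) if (mask >> (i - 1)) & 1]
        yield part1, part2

def minimum_information_partition(elements, phi):
    """Return the bipartition minimizing the supplied irreducibility
    measure `phi(part1, part2)` (a hypothetical stand-in here)."""
    return min(bipartitions(elements), key=lambda p: phi(*p))

# Toy usage: pretend irreducibility is just the size imbalance of the parts.
print(minimum_information_partition("ABC",
                                    lambda p1, p2: abs(len(p1) - len(p2))))
```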

There is at least one choice of elements Z_{t±1} over which a mechanism's cause-effect repertoire is maximally irreducible (in other words, over which its φ^MIP is highest). We call this choice of elements Z^max, and say that this choice specifies a maximally irreducible cause-effect repertoire.

Formally, φ^max(m_t) = max_{Z(t±1)} φ^MIP(m_t, Z_{t±1}) and Z^max = arg max_{Z(t±1)} φ^MIP(m_t, Z_{t±1}).

The concept CER^max(m_t) = CER(m_t, Z^max) is the maximally irreducible cause-effect repertoire of mechanism M in its current state m_t over Z^max, and describes the causal role of M within the system. Informally, Z^max is the concept's purview, and specifies what the concept "is about".

The intrinsic cause-effect power of M is the concept's strength, and is given by its maximally irreducible integrated information, φ^max(m_t).

System-level quantities
A cause-effect structure C(s_t) is the set of concepts specified by all mechanisms with φ^max > 0 within the system S in its current state s_t. If a system turns out to be conscious, its cause-effect structure is often referred to as a conceptual structure.
A unidirectional partition P→ is a grouping of system elements where the connections from the set of elements S_1 to the set S_2 are injected with independent noise.
The extended earth mover's distance XEMD(C_1, C_2) is used to measure the minimal cost of transforming cause-effect structure C_1 into structure C_2. Informally, one can say that, whereas the EMD transports the probability of a system state over the distance between two system states, the XEMD transports the strength of a concept over the distance between two concepts.

In the XEMD, the "earth" to be transported is intrinsic cause-effect power (φ^max), and the ground distance between concepts c_1 and c_2 with cause repertoires r_1^c and r_2^c and effect repertoires r_1^e and r_2^e is given by EMD(r_1^c, r_2^c) + EMD(r_1^e, r_2^e).

Integrated (conceptual) information Φ(s_t, P→) measures the irreducibility of a cause-effect structure with respect to a unidirectional partition. Φ captures how much the cause-effect repertoires of the system's mechanisms are altered and how much intrinsic cause-effect power (φ^max) is lost due to partition P→.
The minimum-information partition of a set of elements in a state is given by MIP = arg min_{P→} Φ(s_t, P→). The minimum-information partition is the unidirectional partition that least affects the cause-effect structure C(s_t).
The intrinsic cause-effect power of a set of elements S in a state s_t is given by Φ^max(s_t) = Φ(s_t, MIP), such that Φ^max(s_t) > Φ^max(s'_t) for any other set of elements S' overlapping S. According to IIT, a system's Φ^max is the degree to which it can be said to exist.
A complex is a set of elements with Φ^max(s_t) > 0, and thus specifies a maximally irreducible cause-effect structure, also called a conceptual structure. According to IIT, complexes are conscious entities.
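In practice these quantities are computed with the PyPhi toolbox cited above. A minimal usage sketch follows; the function names (`pyphi.Network`, `pyphi.Subsystem`, `pyphi.compute.phi`) are given as best recalled from PyPhi's 1.x documentation and should be checked against the current docs, and the network is the same hypothetical OR/AND/XOR example used earlier:

```python
import numpy as np
import pyphi  # pip install pyphi

# State-by-node TPM for the OR/AND/XOR network from the earlier sketch.
# PyPhi indexes TPM rows by the current state in little-endian order
# (node 0 is the least significant bit of the row index).
def step(a, b, c):
    return (b | c, a & c, a ^ b)

states = [tuple((i >> k) & 1 for k in range(3)) for i in range(8)]
tpm = np.array([step(*s) for s in states])

network = pyphi.Network(tpm)
state = (1, 0, 0)  # current state of nodes (0, 1, 2)
subsystem = pyphi.Subsystem(network, state, (0, 1, 2))
print(pyphi.compute.phi(subsystem))  # the candidate set's big Phi
```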

Cause-effect space

For a system of n simple binary elements, cause-effect space is formed by 2 × 2^n axes, one for each possible past and future state of the system. Any cause-effect repertoire CER(m_t), which specifies the probability of each possible past and future state of the system, can be easily plotted as a point in this high-dimensional space: The position of this point along each axis is given by the probability of that state as specified by CER(m_t). If a point is also taken to have a scalar magnitude (which can be informally thought of as the point's "size", for example), then it can easily represent a concept: The concept's cause-effect repertoire specifies the location of the point in cause-effect space, and the concept's φ^max value specifies that point's magnitude.

In this way, a conceptual structure can be plotted as a constellation of points in cause-effect space. Each point is called a star, and each star's magnitude (φ^max) is its size.
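As an illustration of such a constellation (with made-up coordinates and φ^max values rather than quantities computed from a real system), one might plot:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical constellation: each concept is a point in cause-effect space
# (projected onto two of its many axes for display), drawn with a marker
# area proportional to its phi^max value.
rng = np.random.default_rng(0)
points = rng.random((5, 2))   # 5 concepts, 2 projected coordinates
phi_max = rng.random(5)       # each concept's (made-up) irreducibility

plt.scatter(points[:, 0], points[:, 1], s=400 * phi_max, alpha=0.6)
plt.xlabel("probability of one past state")
plt.ylabel("probability of one future state")
plt.title("Constellation of concepts (marker size = phi^max)")
plt.show()
```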

Central identity

IIT addresses the mind-body problem by proposing an identity between phenomenological properties of experience and causal properties of physical systems: The conceptual structure specified by a complex of elements in a state is identical to its experience.

Specifically, the form of the conceptual structure in cause-effect space completely specifies the quality of the experience, while the irreducibility of the conceptual structure specifies the level to which it exists (i.e., the complex's level of consciousness). The maximally irreducible cause-effect repertoire of each concept within a conceptual structure specifies what the concept contributes to the quality of the experience, while its irreducibility specifies how much the concept is present in the experience.

According to IIT, an experience is thus an intrinsic property of a complex of mechanisms in a state.

Extensions

The calculation of even a modestly sized system's Φ is often computationally intractable,[12] so efforts have been made to develop heuristic or proxy measures of integrated information. For example, Masafumi Oizumi and colleagues have developed both Φ* ("phi star")[13] and geometric integrated information, Φ_G,[14] which are practical approximations for integrated information. These are related to proxy measures developed earlier by Anil Seth and Adam Barrett.[15] However, none of these proxy measures has a mathematically proven relationship to the actual Φ value, which complicates the interpretation of analyses that use them. They can give qualitatively different results even for very small systems.[16]

A significant computational challenge in calculating integrated information is finding the Minimum Information Partition of a neural system, which requires iterating through all possible network partitions. To solve this problem, Daniel Toker and Friedrich T. Sommer have shown that the spectral decomposition of the correlation matrix of a system's dynamics is a quick and robust proxy for the Minimum Information Partition.[17]
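A sketch of the general idea behind such a spectral proxy: derive a candidate bipartition from eigenvectors of the correlation matrix of the system's dynamics rather than searching all partitions. The exact procedure in the cited paper differs; the code below, with invented data and an eigenvector-loading heuristic, is an illustrative stand-in only:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
s1 = rng.standard_normal((T, 1))   # common drive of units 0-2
s2 = rng.standard_normal((T, 1))   # common drive of units 3-5
X = np.hstack([s1 + 0.3 * rng.standard_normal((T, 3)),
               s2 + 0.5 * rng.standard_normal((T, 3))])

C = np.corrcoef(X, rowvar=False)   # 6 x 6 correlation matrix
w, V = np.linalg.eigh(C)           # spectral decomposition (ascending)

# The two leading eigenvectors concentrate on the two weakly coupled
# groups; assign each unit to the eigenvector it loads on most.
loads = np.abs(V[:, -2:])
part1 = np.where(loads[:, 1] >= loads[:, 0])[0]
part2 = np.where(loads[:, 1] < loads[:, 0])[0]
print("candidate minimum-information partition:", part1, "|", part2)
```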

Related experimental work

While the algorithm[12][11] for assessing a system's Φ and conceptual structure is relatively straightforward, its high time complexity makes it computationally intractable for many systems of interest.[12] Heuristics and approximations can sometimes be used to provide ballpark estimates of a complex system's integrated information, but precise calculations are often impossible. These computational challenges, combined with the already difficult task of reliably and accurately assessing consciousness under experimental conditions, make testing many of the theory's predictions difficult.

Despite these challenges, researchers have attempted to use measures of information integration and differentiation to assess levels of consciousness in a variety of subjects.[18][19] For instance, a recent study using a less computationally intensive proxy for Φ was able to reliably discriminate between varying levels of consciousness in wakeful, sleeping (dreaming vs. non-dreaming), anesthetized, and comatose (vegetative vs. minimally-conscious vs. locked-in) individuals.[20]

IIT also makes several predictions which fit well with existing experimental evidence, and can be used to explain some counterintuitive findings in consciousness research.[1] For example, IIT can be used to explain why some brain regions, such as the cerebellum, do not appear to contribute to consciousness, despite their size and/or functional importance.

Reception

Integrated Information Theory has received both broad criticism and support.

Support

Neuroscientist Christof Koch, who has helped to develop later versions of the theory, has called IIT "the only really promising fundamental theory of consciousness".[21] Technologist and ex-IIT researcher Virgil Griffith says "IIT is currently the leading theory of consciousness." However, his answer to whether IIT is exactly the right theory is "Probably not".[22]

Neuroscientist and consciousness researcher Anil Seth is supportive of the theory, with some caveats, claiming that "conscious experiences are highly informative and always integrated", and that "One thing that immediately follows from [IIT] is that you have a nice post hoc explanation for certain things we know about consciousness". But he also claims that "the parts of IIT that I find less promising are where it claims that integrated information actually is consciousness — that there's an identity between the two",[23] and has criticized the panpsychist extrapolations of the theory.[24]

Philosopher David Chalmers, famous for the idea of the hard problem of consciousness, has expressed some enthusiasm about IIT. According to Chalmers, IIT is a development in the right direction, whether or not it is correct.[25]

Philosopher Daniel Dennett considers IIT a theory of consciousness in terms of “integrated information that uses Shannon information theory in a novel way”. As such it has “a very limited role for aboutness: it measures the amount of Shannon information a system or mechanism has about its own previous state—i.e., the states of all its parts”.[26]

Physicist Max Tegmark has also expressed some support for the approach taken by IIT, and considers it compatible with his own ideas about consciousness as a "state of matter". Tegmark has also tried to address the problem of the computational complexity behind the calculations. According to Tegmark, "the integration measure proposed by IIT is computationally infeasible to evaluate for large systems, growing super-exponentially with the system's information content."[27] As a result, Φ can only be approximated in general, and different ways of approximating Φ provide radically different results.[28] Other works have shown that Φ can be computed in some large mean-field neural network models, although some assumptions of the theory have to be revised to capture phase transitions in these large systems.[29][30]
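To see the scale of the problem, one can count the search space directly. The short sketch below tallies set partitions (Bell numbers) and candidate subsystems for small n; the counts illustrate the combinatorics generally rather than IIT's exact search space:

```python
from math import comb

# The number of ways to partition a set of n elements (the Bell number
# B_n) grows faster than exponentially, and IIT must additionally
# consider every candidate subsystem (2**n - 1 non-empty subsets).
def bell(n):
    B = [1]  # B_0
    for i in range(n):
        B.append(sum(comb(i, k) * B[k] for k in range(i + 1)))
    return B[n]

for n in (5, 10, 20, 50):
    print(f"n={n:>2}: subsets = {2**n - 1:>20,}  partitions = {bell(n):,}")
```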

Criticism

One criticism is that the claims of IIT as a theory of consciousness "are not scientifically established or testable at the moment".[31] However, while it is true that the complete analysis suggested by IIT cannot currently be carried out for human brains, IIT has already been applied to models of the visual cortex to explain, rigorously and successfully, why visual space feels the way it does.[2]

Neuroscientists Björn Merker and David Rudrauf and philosopher Kenneth Williford co-authored a paper criticizing IIT on several grounds. First, IIT has not demonstrated that all systems which combine integration and differentiation in the formal IIT sense are conscious; high levels of integration and differentiation of information might provide necessary conditions for consciousness, but that combination of attributes does not amount to sufficient conditions. Second, the measure Φ reflects the efficiency of global information transfer rather than the level of consciousness, and the correlation of Φ with level of consciousness across different states of wakefulness (e.g. awake, dreaming and dreamless sleep, anesthesia, seizures, and coma) actually reflects the level of efficient network interactions performed for cortical engagement; on this view, Φ tracks network efficiency, and consciousness would be just one of the functions served by cortical network efficiency.[32] However, IIT emphasizes the importance of all five postulates being satisfied (not just information and integration) and does not claim that Φ is identical to consciousness, which, if correct, leaves the authors' main criticism aimed at a position IIT does not hold.[33]

Princeton neuroscientist Michael Graziano rejects IIT as pseudoscience. He claims IIT is a "magicalist theory" that has "no chance of scientific success or understanding".[34]

Theoretical computer scientist Scott Aaronson has criticized IIT by demonstrating, through its own formulation, that an inactive series of logic gates, arranged in the correct way, would not only be conscious but be "unboundedly more conscious than humans are".[35] Tononi agrees with the assessment and argues that, according to IIT, an even simpler arrangement of inactive logic gates, if large enough, would also be conscious. However, he further argues that this is a strength of IIT rather than a weakness, because that is exactly the sort of cytoarchitecture found in large portions of the cerebral cortex.[36][37]

A peer-reviewed commentary by 58 scholars involved in the scientific study of consciousness rejects these conclusions about logic gates as "mysterious and unfalsifiable claims" that should be distinguished from "empirically productive hypotheses".[38][clarification needed] IIT has been criticized in the scientific literature as being, by its own definitions, "either false or unscientific".[39] It has also been denounced by other members of the consciousness field as requiring "an unscientific leap of faith", though it is not clear that this is the case if the theory is properly understood.[40] The theory has also been derided for failing to answer the basic questions required of a theory of consciousness. Philosopher Adam Pautz says, "As long as proponents of IIT do not address these questions, they have not put a clear theory on the table that can be evaluated as true or false."[41]

Influential philosopher John Searle has given a critique of the theory, saying "The theory implies panpsychism" and "The problem with panpsychism is not that it is false; it does not get up to the level of being false. It is strictly speaking meaningless because no clear notion has been given to the claim."[42] However, whether or not a theory has panpsychist implications (that all or most of what exists physically must be, be part of something that is, or be composed of parts that are, conscious) has no bearing on the scientific validity of the theory. Searle's take has also been countered by other philosophers, for misunderstanding and misrepresenting a theory that is actually resonant with his own ideas.[43]

The mathematics of IIT have also been criticized, since "having a high Φ value requires highly specific structures that are unstable to minor perturbations".[44] This susceptibility to minor perturbations seems inconsistent with empirical results about neuroplasticity in the human brain, thus weakening the theory. However, the systems investigated by Schwitzgebel were small networks of logic gates, not human brains in normal waking conditions, and it is questionable how well the result generalizes to systems for which we have access to verified conscious experience (human beings).


Philosopher Tim Bayne has criticized the axiomatic foundations of the theory.[45] He concludes that “the so-called ‘axioms’ that Tononi et al. appeal to fail to qualify as genuine axioms”.

References

  1. ^ a b c d e Tononi, Giulio (2015). "Integrated information theory". Scholarpedia. 10 (1): 4164. Bibcode:2015SchpJ..10.4164T. doi:10.4249/scholarpedia.4164.
  2. ^ a b c Haun, Andrew; Tononi, Giulio (December 2019). "Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience". Entropy. 21 (12): 1160. Bibcode:2019Entrp..21.1160H. doi:10.3390/e21121160. PMC 7514505.
  3. ^ Tononi, Giulio; Koch, Christof (2015-05-19). "Consciousness: here, there and everywhere?". Philosophical Transactions of the Royal Society B: Biological Sciences. 370 (1668): 20140167. doi:10.1098/rstb.2014.0167. ISSN 0962-8436. PMC 4387509. PMID 25823865.
  4. ^ Tononi, Giulio; Boly, Melanie; Massimini, Marcello; Koch, Christof (2016). "Integrated information theory: from consciousness to its physical substrate". Nature Reviews Neuroscience. 17 (7): 450–461. doi:10.1038/nrn.2016.44. PMID 27225071. S2CID 21347087.
  5. ^ Tononi, Giulio (2004-11-02). "An information integration theory of consciousness". BMC Neuroscience. 5 (1): 42. doi:10.1186/1471-2202-5-42. ISSN 1471-2202. PMC 543470. PMID 15522121.
  6. ^ a b Oizumi, Masafumi; Albantakis, Larissa; Tononi, Giulio (2014-05-08). "From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0". PLOS Comput Biol. 10 (5): e1003588. Bibcode:2014PLSCB..10E3588O. doi:10.1371/journal.pcbi.1003588. PMC 4014402. PMID 24811198.
  7. ^ Barbosa, Leonardo S.; Marshall, William; Streipert, Sabrina; Albantakis, Larissa; Tononi, Giulio (2020-11-02). "A measure for intrinsic information". Scientific Reports. 10 (1): 18803. Bibcode:2020NatSR..1018803B. doi:10.1038/s41598-020-75943-4. ISSN 2045-2322. PMC 7606539. PMID 33139829.
  8. ^ Barbosa, Leonardo S.; Marshall, William; Albantakis, Larissa; Tononi, Giulio (March 2021). "Mechanism Integrated Information". Entropy. 23 (3): 362. Bibcode:2021Entrp..23..362B. doi:10.3390/e23030362. PMC 8003304. PMID 33803765.
  9. ^ Marshall, William; Albantakis, Larissa; Tononi, Giulio (2018-04-23). Schrater, Paul (ed.). "Black-boxing and cause-effect power". PLOS Computational Biology. 14 (4): e1006114. arXiv:1608.03461. Bibcode:2018PLSCB..14E6114M. doi:10.1371/journal.pcbi.1006114. ISSN 1553-7358. PMC 5933815. PMID 29684020.
  10. ^ Albantakis, Larissa; Tononi, Giulio (2015-07-31). "The Intrinsic Cause-Effect Power of Discrete Dynamical Systems—From Elementary Cellular Automata to Adapting Animats". Entropy. 17 (8): 5472–5502. Bibcode:2015Entrp..17.5472A. doi:10.3390/e17085472.
  11. ^ a b "CSC-UW/iit-pseudocode". GitHub. Retrieved 2016-01-29.
  12. ^ a b c d Mayner, William G. P.; Marshall, William; Albantakis, Larissa; Findlay, Graham; Marchman, Robert; Tononi, Giulio (2018-07-26). "PyPhi: A toolbox for integrated information theory". PLOS Computational Biology. 14 (7): e1006343. arXiv:1712.09644. Bibcode:2018PLSCB..14E6343M. doi:10.1371/journal.pcbi.1006343. ISSN 1553-7358. PMC 6080800. PMID 30048445.
  13. ^ Oizumi, Masafumi; Amari, Shun-ichi; Yanagawa, Toru; Fujii, Naotaka; Tsuchiya, Naotsugu (2015-05-17). "Measuring integrated information from the decoding perspective". PLOS Computational Biology. 12 (1): e1004654. arXiv:1505.04368. Bibcode:2016PLSCB..12E4654O. doi:10.1371/journal.pcbi.1004654. PMC 4721632. PMID 26796119.
  14. ^ Oizumi, Masafumi; Tsuchiya, Naotsugu; Amari, Shun-ichi (20 December 2016). "Unified framework for information integration based on information geometry". Proceedings of the National Academy of Sciences. 113 (51): 14817–14822. doi:10.1073/pnas.1603583113. PMC 5187746. PMID 27930289.
  15. ^ Barrett, A.B.; Seth, A.K. (2011). "Practical measures of integrated information for time-series data". PLOS Comput. Biol. 7 (1): e1001052. Bibcode:2011PLSCB...7E1052B. doi:10.1371/journal.pcbi.1001052. PMC 3024259. PMID 21283779.
  16. ^ Mediano, Pedro; Seth, Anil; Barrett, Adam (2018-12-25). "Measuring Integrated Information: Comparison of Candidate Measures in Theory and Simulation". Entropy. 21 (1): 17. arXiv:1806.09373. Bibcode:2018Entrp..21...17M. doi:10.3390/e21010017. ISSN 1099-4300. PMC 7514120. PMID 33266733.
  17. ^ Toker, Daniel; Sommer, Friedrich T.; Marinazzo, Daniele (7 February 2019). "Information integration in large brain networks". PLOS Computational Biology. 15 (2): e1006807. Bibcode:2019PLSCB..15E6807T. doi:10.1371/journal.pcbi.1006807. PMC 6382174. PMID 30730907.
  18. ^ Massimini, M.; Ferrarelli, F.; Murphy, Mj; Huber, R.; Riedner, Ba; Casarotto, S.; Tononi, G. (2010-09-01). "Cortical reactivity and effective connectivity during REM sleep in humans". Cognitive Neuroscience. 1 (3): 176–183. doi:10.1080/17588921003731578. ISSN 1758-8936. PMC 2930263. PMID 20823938.
  19. ^ Ferrarelli, Fabio; Massimini, Marcello; Sarasso, Simone; Casali, Adenauer; Riedner, Brady A.; Angelini, Giuditta; Tononi, Giulio; Pearce, Robert A. (2010-02-09). "Breakdown in cortical effective connectivity during midazolam-induced loss of consciousness". Proceedings of the National Academy of Sciences of the United States of America. 107 (6): 2681–2686. Bibcode:2010PNAS..107.2681F. doi:10.1073/pnas.0913008107. ISSN 1091-6490. PMC 2823915. PMID 20133802.
  20. ^ Casali, Adenauer G.; Gosseries, Olivia; Rosanova, Mario; Boly, Mélanie; Sarasso, Simone; Casali, Karina R.; Casarotto, Silvia; Bruno, Marie-Aurélie; Laureys, Steven; Massimini, Marcello (2013-08-14). "A Theoretically Based Index of Consciousness Independent of Sensory Processing and Behavior". Science Translational Medicine. 5 (198): 198ra105. doi:10.1126/scitranslmed.3006294. hdl:2268/171542. ISSN 1946-6234. PMID 23946194. S2CID 8686961.
  21. ^ Zimmer, Carl (2010-09-20). "Sizing Up Consciousness by Its Bits". The New York Times. ISSN 0362-4331. Retrieved 2015-11-23.
  22. ^ "How valid is Giulio Tononi's mathematical formula for consciousness?".
  23. ^ Falk, Dan (2021-09-30). "Anil Seth Finds Consciousness in Life's Push Against Entropy". Quanta Magazine. Retrieved 2021-12-16.
  24. ^ akseth (2018-02-01). "Conscious spoons, really? Pushing back against panpsychism". NeuroBanter. Retrieved 2021-12-16.
  25. ^ Chalmers, David (2014). How do you explain consciousness? (TED talk). Retrieved 2021-12-16.
  26. ^ Dennett D., From Bacteria to Bach and Back., Norton and Co, New York, 2017, page 127.
  27. ^ Tegmark, Max (2016). "Improved Measures of Integrated Information". PLOS Computational Biology. 12 (11): e1005123. arXiv:1601.02626. Bibcode:2016PLSCB..12E5123T. doi:10.1371/journal.pcbi.1005123. PMC 5117999. PMID 27870846.
  28. ^ Mediano, Pedro; Seth, Anil; Barrett, Adam (2019). "Measuring Integrated Information: Comparison of Candidate Measures in Theory and Simulation". Entropy. 21 (1): 17. doi:10.3390/e21010017. PMC 7514120. PMID 33266733.
  29. ^ Aguilera, Miguel; Di Paolo, Ezequiel (2019). "Integrated information in the thermodynamic limit". Neural Networks. 114: 136–146. doi:10.1016/j.neunet.2019.03.001. PMID 30903946.
  30. ^ Aguilera, Miguel (2019). "Scaling Behaviour and Critical Phase Transitions in Integrated Information Theory". Entropy. 21 (12): 1198. Bibcode:2019Entrp..21.1198A. doi:10.3390/e21121198.
  31. ^ Lau, Hakwan (28 May 2020). "Open letter to NIH on Neuroethics Roadmap (BRAIN initiative) 2019". In Consciousness We Trust.
  32. ^ Merker, Björn (19 May 2021). "The Integrated Information Theory of consciousness: A case of mistaken identity". Behavioral and Brain Sciences: 1–72. doi:10.1017/S0140525X21000881. PMID 34006338. Retrieved 1 Jun 2021.
  33. ^ Oizumi, Masafumi; Albantakis, Larissa; Tononi, Giulio (2014-05-08). "From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0". PLOS Computational Biology. 10 (5): e1003588. Bibcode:2014PLSCB..10E3588O. doi:10.1371/journal.pcbi.1003588. ISSN 1553-7358. PMC 4014402. PMID 24811198.
  34. ^ Jarrett, Christian (5 April 2020). "Consciousness: how can we solve the greatest mystery in science?". BBC Science Focus Magazine. Retrieved 2 February 2021.
  35. ^ Aaronson, Scott. "Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)". Shetl-Optimized: The Blog of Scott Aaronson.
  36. ^ Aaronson, Scott. "Giulio Tononi and Me: A Phi-nal Exchange". Shetl-Optimized: The Blog of Scott Aaronson.
  37. ^ Tononi, Giulio. "Why Scott should stare at a blank wall and reconsider (or, the conscious grid)". Shetl-Optimized: The Blog of Scott Aaronson.
  38. ^ Michel, Matthias; Beck, Diane; Block, Ned; Blumenfeld, Hal; Brown, Richard; Carmel, David; Carrasco, Marisa; Chirimuuta, Mazviita; Chun, Marvin; Cleeremans, Axel; Dehaene, Stanislas; Fleming, Stephen; Frith, Chris; Haggard, Patrick; He, Biyu; Heyes, Cecilia; Goodale, Mel; Irvine, Liz; Kawato, Mitsuo; Kentridge, Robert; King, JR; Knight, Robert; Kouider, Sid; Lamme, Victor; Lamy, Dominique; Lau, Hakwan; Laureys, Steven; LeDoux, Joseph; Lin, Ying-Tung; Liu, Kayuet; Macknik, Stephen; Martinez-Conde, Susana; Mashour, George; Melloni, Lucia; Miracchi, Lisa; Mylopoulos, Myrto; Naccache, Lionel; Owen, Adrian; Passingham, Richard; Pessoa, Luiz; Peters, Megan; Rahnev, Dobromir; Ro, Tony; Rosenthal, David; Sasaki, Yuka; Sergent, Claire; Solovey, Guillermo; Schiff, Nicholas; Seth, Anil; Tallon-Baudry, Catherine; Tamietto, Marco; Tong, Frank; van Gaal, Simon; Vlassova, Alexandra; Watanabe, Takeo; Weisberg, Josh; Yan, Karen; Yoshida, Masatoshi (February 4, 2019). "Opportunities and challenges for a maturing science of consciousness". Nature Human Behaviour. 3 (2): 104–107. doi:10.1038/s41562-019-0531-8. PMC 6568255. PMID 30944453.
  39. ^ Doerig, Adrien; Schurger, Aaron; Hess, Kathryn; Herzog, Michael (2019). "The unfolding argument: Why IIT and other causal structure theories cannot explain consciousness". Consciousness and Cognition. 72: 49–59. doi:10.1016/j.concog.2019.04.002. PMID 31078047.
  40. ^ Lau, Hakwan; Michel, Matthias (2019). "On the dangers of conflating strong and weak versions of a theory of consciousness". PsyArXiv. doi:10.31234/osf.io/hjp3s.
  41. ^ Pautz, Adam (2019). "What is Integrated Information Theory?: A Catalogue of Questions". Journal of Consciousness Studies. 26 (1): 188–215.
  42. ^ Searle, John. "Can Information Theory Explain Consciousness?". The New York Review of Books.
  43. ^ Fallon, Francis (2020-09-01). "Integrated Information Theory, Searle, and the Arbitrariness Question". Review of Philosophy and Psychology. 11 (3): 629–645. doi:10.1007/s13164-018-0409-0. ISSN 1878-5166.
  44. ^ Schwitzgebel, Eric (9 November 2018). "The Phi Value of Integrated Information Theory Might Not Be Stable Across Small Changes in Neural Connectivity". The Splintered Mind: Reflections in Philosophy of Psychology, Broadly Construed.
  45. ^ Bayne, Tim (2018). "On the axiomatic foundations of the integrated information theory of consciousness". Neuroscience of Consciousness. 2018 (1): niy007. doi:10.1093/nc/niy007. PMC 6030813. PMID 30042860.
