Here is my summary of his post and some related thoughts.

Scott instrumentalizes Chalmers’ vague Hard Problem of Consciousness:

> the problem of explaining how and why we have qualia or phenomenal experiences — how sensations acquire characteristics, such as colours and tastes
into something concrete and measurable, which he dubs the Pretty-Hard Problem of Consciousness:

> a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict
and shows that Tononi’s IIT fails to solve the latter. He does this by constructing a counterexample: a system whose integrated information Φ is arbitrarily high (higher than a human brain’s) yet which does nothing anyone would call conscious. He also notes that building a theory of consciousness around information integration is not a promising approach in general:

> As humans, we seem to have the intuition that global integration of information is such a powerful property that no “simple” or “mundane” computational process could possibly achieve it. But our intuition is wrong. If it were right, then we wouldn’t have linear-size superconcentrators or LDPC codes.
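The quote’s point can be sketched in a few lines of Python (a toy of my own, not Scott’s actual construction): in a simple banded parity-check map over GF(2), every input bit feeds several parity outputs, so perturbing any single bit propagates across the system even though the computation is entirely mundane.

```python
# Toy illustration (my own sketch, not Scott's construction): a circulant
# parity-check map over GF(2). Each parity bit XORs DEG consecutive input
# bits, so every input bit influences DEG different outputs at once.

N = 16    # number of input bits
DEG = 4   # each parity bit XORs DEG consecutive input bits

# Row j checks positions j, j+1, ..., j+DEG-1 (mod N), like a banded
# parity-check matrix.
rows = [[(j + k) % N for k in range(DEG)] for j in range(N)]

def parities(bits):
    """XOR together each row's input bits."""
    return [sum(bits[i] for i in row) % 2 for row in rows]

x = [0] * N
base = parities(x)
for i in range(N):
    y = list(x)
    y[i] ^= 1  # flip a single input bit
    changed = sum(a != b for a, b in zip(base, parities(y)))
    assert changed == DEG  # every single-bit flip perturbs DEG outputs
```

Nothing here is remotely brain-like: it is a fixed linear map, yet no bit’s influence stays local. Real LDPC codes and superconcentrators achieve much stronger mixing with sparse structure, which is exactly why high “integration” alone is a poor signature of consciousness.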
Scott is very good at instrumentalizing vague ideas (what lukeprog calls hacking away at the edges). He did the same for the notion of “free will” in his paper The Ghost in the Quantum Turing Machine, and his previous blog entry, “The NEW Ten Most Annoying Questions in Quantum Computing”, lists some of the “edges” to hack at when thinking about the “deep” and “hard” problems of quantum computing. This approach has been very successful in the past:

> of the nine questions, six have by now been completely settled

after eight years of work.
I hope that there are people at MIRI who are similarly good at instrumentalizing big ideas into interesting yet solvable questions.