For example, in the hardware section you could bring up ASICs and FPGAs as technologies that vastly speed up particular algorithms—not an option ever available to humans except indirectly as tools.
In the mind section, you could point out the ability of an upload to wirehead itself, eliminating motivation and akrasia issues. (Perhaps a separate copy of the mind could be in charge of judging when the ‘real’ mind deserves a reward for taking care of a task.)
Or you could raise the possibility of entirely new sensory modalities, like the ‘code modality’ I think Eliezer proposed in LOGI—regular humans can gain new modalities with buzzing compass belts and electrical prickles onto the tongue and whatnot, but it’d be difficult to figure out a way more direct than 2D images for code. An upload could just feed the binary bits into an appropriate area of simulated neurons and let the network figure it out and adapt (like in the real-world examples of new sensory modalities).
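As a very loose sketch of what ‘feed the bits in and let the network adapt’ could amount to (my illustration of the sensory-substitution analogy only, not anything from the paper or LOGI; the helper names and the toy text-vs-random task are made up for the example), here raw bytes are unpacked into bit vectors and a single adaptive unit is left to pick up on their statistics:

```python
# Toy analogy for a 'code modality': raw bytes are exposed to a small adaptive
# unit, which learns structure purely from the input statistics -- loosely like
# sensory substitution, where cortex adapts to whatever signal it is wired to.
# All names and the text-vs-random task are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def bytes_to_bits(buf, width=64):
    """Unpack a byte string into a fixed-width 0/1 feature vector."""
    bits = np.unpackbits(np.frombuffer(buf, dtype=np.uint8))
    out = np.zeros(width)
    n = min(width, bits.size)
    out[:n] = bits[:n]
    return out

def sample(kind):
    """Draw 8 bytes of either ASCII-letter-like 'code/text' or uniform random binary."""
    if kind == "text":
        raw = rng.integers(97, 123, size=8, dtype=np.uint8)   # lowercase ASCII letters
    else:
        raw = rng.integers(0, 256, size=8, dtype=np.uint8)    # arbitrary binary
    return bytes_to_bits(raw.tobytes())

X = np.array([sample(k) for k in ["text", "random"] * 500])
y = np.array([1.0, 0.0] * 500)

# A single logistic unit stands in for the 'patch of simulated neurons':
# it is never told what the bytes mean, only exposed to them repeatedly.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # current response to each input
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # adapt weights toward the input statistics
    b -= 0.5 * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("separates the two byte distributions with accuracy:", np.mean((p > 0.5) == y))
```

The point of the toy is just that the unit is never told what the bytes mean; given exposure, it adapts to whatever regularities the raw bitstream contains, which is the same bet the compass-belt and tongue-display experiments make about cortex.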
In a previous version of the paper, I had the following paragraphs. I deleted them when I added the current explanation of mental modules because I felt these became redundant. Do you think I should add them, or parts of them, back?
A digital mind could achieve qualitative improvements over human reasoning by designing new kinds of mental modules. As an example of a mental module providing a qualitative advantage, children aged two understand the meaning of the word “one”, but not that of other numbers. Six to nine months later, they learn what “two” means. Some months later they learn the meaning of “three”, and shortly thereafter they induce counting in general (Carey 2004). If we had general intelligence but no grasp of numbers, we would be incapable of thinking about mathematics, and therefore incapable of thinking many of the kinds of thoughts that form the basis of science.
There are a number of conditions in which humans lose specific qualitative reasoning abilities without the rest of their general intelligence being impaired. Dyslexia involves difficulty with reading and spelling, and manifests itself in people of all levels of intelligence (Shaywitz 1998). In neglect, patients lose awareness of part of their visual field. Ramachandran and Blakeslee (1998) report on a neglect patient who was shown, via a mirror, a pen on her neglected left side. Although she consciously recognized the mirror as such and knew what it did, when asked to grab the pen she claimed it was behind the mirror and attempted to reach through the mirror. Anosognosia patients (Cutting 1978) have a bodily disorder such as blindness or a disabled arm, but are unable to believe this, and instead confabulate explanations of why they happen to bump into things or why the disabled arm isn’t really theirs. Despite falsely believing themselves to be fully healthy, these patients reason normally in other respects.
What kinds of modules could provide a qualitative reasoning improvement over humans? Brooks (1987) mentions invisibility as an essential difficulty in software engineering: software cannot be visualized in the same way physical products can, and any visualization covers only a small part of the software product. Yudkowsky (2007) discusses the notion of a codic cortex designed to natively visualize code, in the same way the human visual cortex evolved to natively model the world around us.
A codic cortex can be considered a special case of directly integrating complex models into a mind. Humans employ various complex external models, such as weather simulations, which are not directly integrated into our minds. We can only study a small portion of such a model at a time, which makes it difficult to detect subtle errors. For better comprehension, we re-create partial models in our minds (insert some cite here), where they are directly accessible and integrated with the rest of our mind. The ability to directly integrate external models into our minds could make it possible for us to, e.g., directly pick up on all the relevant details of a weather simulation, in the same way that we can very quickly pick up all the relevant details of a picture presented to us.
Well, it’s a start and better than nothing. If I were bringing in numbers here, I wouldn’t focus on counting but bring in blind mathematicians and geometry, and I’d also focus on the odd sensory modality of subitization.