Paper draft: Relative advantages of uploads, artificial general intelligences, and other digital minds
http://www.xuenay.net/Papers/DigitalAdvantages.pdf
Abstract: I survey four categories of factors that might give a digital mind, such as an upload or an artificial general intelligence, an advantage over humans. The categories are hardware advantages, self-improvement advantages, co-operative advantages and human handicaps. The shape of hardware growth curves as well as the ease of modifying minds are found to be some of the core influences on how quickly a digital mind may take advantage of these factors.
Still a bit of a rough draft (could use a bunch of tidying up, my references aren’t in a consistent format, etc.), but I wanted to finally get this posted somewhere public so I could get further feedback.
- [paper draft] Coalescing minds: brain uploading-related group mind scenarios (29 Sep 2011 15:51 UTC; 13 points)
- Comment on [META] Karma for last 30 days? (3 Sep 2012 8:55 UTC; 2 points)
Where’d you run across this? I thought I was the only one on LW who knew of it.
Oh, and as far as algorithmic improvement goes, integer factorization has apparently seen even more impressive gains than linear programming, but I haven’t been able to re-find my reference for that.
This may not matter since you’re submitting it to Goertzel, but for a more general academic audience, I think Chalmers’s singularity paper would be much better than Yudkowsky’s.
Also, your human biases section could use examples of zany computer solutions, e.g. http://lesswrong.com/lw/2l0/should_i_believe_what_the_siai_claims/2fsh
Overall, the paper seems kind of lacking in meat to me.
It was on Slashdot.
Good point, I’d forgotten about Chalmers. I’ll work in a couple of cites to him.
Those are good examples, I’ll work in some of that and other examples besides. Thanks.
For example, in the hardware section you could bring up ASICs and FPGAs as technologies that vastly speed up particular algorithms—not an option ever available to humans except indirectly as tools.
In the mind section, you could point out the ability of an upload to wirehead itself, eliminating motivation and akrasia issues. (Perhaps a separate copy of the mind could be in charge of judging when the ‘real’ mind deserves a reward for taking care of a task.)
Or you could raise the possibility of entirely new sensory modalities, like the ‘code modality’ I think Eliezer proposed in LOGI—regular humans can gain new modalities with buzzing compass belts and electrical prickles onto the tongue and whatnot, but it’d be difficult to find a way to perceive code more directly than via 2D images. An upload could just feed the binary bits into an appropriate area of simulated neurons and let the network figure it out and adapt (as in the real-world examples of new sensory modalities).
In a previous version of the paper, I had the following paragraphs. I deleted them when I added the current explanation of mental modules because I felt these became redundant. Do you think I should add them, or parts of them, back?
Well, it’s a start and better than nothing. If I were bringing in numbers here, I wouldn’t focus on counting but bring in blind mathematicians and geometry, and I’d also focus on the odd sensory modality of subitization.
Substitute
increase the morality rate → increase the mortality rate
Thanks, I’ll fix that.
I’m having a lot of trouble understanding the second paragraph in section 2.1.2, especially the sentence “Amdahl’s law assumes that the size of the problem stays constant as the number of processors increases, but Gustafson (1988) notes that in practice the problem size scales with the number of processors.” Can you expand on what you mean here?
Edit: Also there’s a typo in 4.1- “practicioners”.
I think the point is that when you increase the data set, you then expose more work for the parallelism to handle.
If I have a 1 KB dataset and a partially parallel algorithm to run on it, I will very quickly ‘run out of parallelism’ and find that 1000 processors are as good as 2 or 3. Whereas if I have a 1 PB dataset and the same algorithm, I will be able to add processors for a long time before I finally run out of parallelism.
gwern’s explanation is right. Gustafson’s law. I’ll clarify that.
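For readers who want to see the difference concretely, here is a minimal sketch of the two laws (the function names and the example parallel fraction of 0.95 are my own illustration, not from the paper):

```python
# Amdahl's law: fixed problem size, so the serial fraction caps speedup
# no matter how many processors you add.
# Gustafson's law: the problem size grows with the processor count, so
# the scaled speedup keeps growing nearly linearly.

def amdahl_speedup(p, n):
    """Speedup on n processors with parallel fraction p, fixed problem size."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Scaled speedup when the parallel workload grows with n."""
    return (1.0 - p) + p * n

for n in (2, 100, 1000):
    print(n, round(amdahl_speedup(0.95, n), 1), round(gustafson_speedup(0.95, n), 1))
```

With a 95% parallel workload, Amdahl's speedup saturates below 20x even at 1000 processors, while Gustafson's scaled speedup at 1000 processors is about 950x — which is gwern's point about larger datasets exposing more work for the parallelism to handle.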
Are you trying to keep this intentionally conservative and subdued? Because I found the parts about improvements to human uploads rather… very unimaginative.
(Then again, the circumstances around me imagining that might’ve been less… err… very conducive to imagining things as a human. If you know what I mean. I’m a bit disappointed because I didn’t learn anything new from this article.)