And that’s another important point: Trading recommended reading lists does nothing to sift out the truth. You can find any number of books or articles espousing virtually any position, but part of the function of a rational argument is to present points that respond effectively to the other person’s. Anyone can read books and devise brilliant refutations of the arguments therein; the real test is whether those brilliant refutations can withstand an intelligent, rational “opponent” who is willing and able to thoroughly deconstruct them from a perspective outside of your own mind.
(I originally had a much longer comment, but it was lost in some sort of website glitch. This is the Reader’s Digest version)
I think algorithmic complexity does, to a certain degree, usefully represent what we value about human life: uniqueness of experience, depth of character, whatever you want to call it. For myself, at least, I would feel fewer qualms about Matrix-generating 100 atom-identical Smiths and then destroying them than I would about generating 100 individual, diverse people who each had different personalities, dreams, judgements, and feelings. It even captures the basic reason, I think, behind scope insensitivity; namely, that we see the number on paper as just a faceless mob of many, many identical people, so we have no emotional investment in them as a group.
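A crude way to see that intuition in code (my own toy sketch, not anything from the original discussion): real algorithmic complexity is uncomputable, but a general-purpose compressor gives a rough proxy for how little description length 100 identical copies add compared with 100 distinct people.

```python
import zlib

# Hypothetical one-line "person descriptions" standing in for whole minds.
identical = ("Agent Smith: suit, earpiece, hates humanity. " * 100).encode()
diverse = " ".join(
    f"Person {i}: likes hobby {i % 17}, fears {i % 23}, dreams of goal {i % 31}."
    for i in range(100)
).encode()

# Compressed length as a rough, computable proxy for algorithmic complexity.
print(len(zlib.compress(identical)))  # small: copies #2 through #100 add almost nothing
print(len(zlib.compress(diverse)))    # larger: each distinct person costs extra description
```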
On the other hand, I had a bad feeling when I read this solution, which I still have now. Namely, it solves the dilemma, but not at the point where it’s problematic; we can immediately tell that there’s something wrong with handing over five bucks when we read about it, and it has little to do with the individual uniqueness of the people in question. After all, who should you push from the path of an oncoming train: Jaccqkew’Zaa’KK, The Uniquely Damaged Sociopath (And Part-Time Rapist), or a hard-working, middle-aged, balding office worker named Fred Jones?
It took me a while to figure out what’s so disturbing about this graph, and I’m still not sure I get it. Is it strange and unexpected that would-be life-extending drugs shorten lifespans as often as they lengthen them? Or is it disturbing that the drugs are very likely to simply do nothing at all?
Or does this graph only represent trials of one drug? I see a single drug mentioned specifically, but the graph is also labeled as created from a massive compilation of data from multiple sources. Could someone explain this to me?
Under the Bayesian definition, the Taoist anecdotes would be pretty weak evidence, and the Biblical accounts of Moses barely evidence at all. Under a scientific definition, on the other hand, neither of those is evidence at all. I think that the point of this post was “can anyone find any scientific, or at least non-weak Bayesian, evidence that calorie restriction improves lifespan?”
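To put a toy number on “weak” (my own illustrative figures, not from the post): in odds form, Bayes’ rule just multiplies prior odds by a likelihood ratio, and an anecdote’s likelihood ratio sits barely above 1.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

prior = 0.05 / 0.95        # prior odds that calorie restriction meaningfully extends life
anecdote_lr = 1.2          # an anecdote is only slightly more likely if the claim is true
good_study_lr = 20.0       # a well-run study is far more likely under the true hypothesis

print(posterior_odds(prior, anecdote_lr))    # ~0.063: belief barely moves
print(posterior_odds(prior, good_study_lr))  # ~1.05: a real shift in belief
```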
I thought of a game for this called “Functionality Telephone”.
So you divide people into pairs, and in each pair one person is the Manager and the other person is the Designer. The Manager receives a card with some sort of functionality printed on it, like “Can hold a gram of water without leaking”, or “Can have pebbles thrown at it and remain standing”, or some other easily testable function. It will also have some taboo words, like (for the water example above) water, waterproof, spill, leak, etc. The Designer will have a bunch of legos or some such.
The game, as you may have guessed, is that the Manager has to give the Designer verbal instructions, without using any of the taboo words, to build something with the legos which will fulfill the functionality. Then the Manager will leave the room, and the Designer has to try to build something which follows the Manager’s instructions. When the Designers are done, the Managers return to the room and reveal their functionality cards, and then they and the Designer get to watch as their “product” is tested. Fun times are had by all.
The idea here is that the Manager is forced to describe their functionality further down the abstraction ladder than the form they receive it in, in a lucid and detailed way.
Most of those sound like irritating euphemisms, which, while more technically accurate, don’t exactly roll off the tongue (unlike cryocide, which has a decent ring to it). On the other hand, accuracy is important, and cryocide has misleadingly negative connotations. So I think our design goals should be: One word, non-technical-sounding, rolls off the tongue, neutral or at least positive connotations. So, how about…
Cryonide? (Ugh.) Suonics? (Double Ugh.) Crythanasia? (Triple Ugh.)
Maybe we should just give up on portmanteaus. Any suggestions?
*Base 37, I think you mean
I’m replying to Atorm’s disputation of Strange7’s response to Eliezer’s response to Wei Dai’s idea of using algorithmic complexity as a moral principle to solve the Pascal’s Mugging dilemma. If I got that chain wrong and I’m responding to some completely different discussion, then I apologize for confusing everyone and it would be nice if you could point me to the thread I’m looking for. :)
(And yes, Jaccqkew’Zaa’KK goes under the train, and he really is a sociopathic rapist; I was using that thought experiment as an example of a situation where the algorithmic complexity rule doesn’t work.)
So this graph does represent compiled data about a lot of different drugs, rather than just one drug?
I now realize that it is in fact base 36. Sorry about that, I was responding on autopilot by some muddied rule in my head which said “Non-decimal base → Add 1”, probably a result of remembering that base 10, for instance, only has digits up to 9. What I failed to realize, naturally, was that the count includes 0, coming out to ten symbols in total.
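A quick sanity check (a throwaway Python snippet of my own): the base-36 digit set is the ten decimal digits plus the twenty-six letters, with 0 counting as one of the symbols.

```python
import string

digits = string.digits + string.ascii_lowercase  # the base-36 digit set
print(len(digits))    # 36 symbols, because 0 counts as one of them
print(int("z", 36))   # 35, the largest single digit in base 36
```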
Sorry about that, I was wrong.
Uh… I think the fact that humans aren’t cognitively self-modifying (yet!) doesn’t have to do with our intelligence level so much as the fact that we were not designed explicitly to be self-modifying, as the SIAI is assuming any AGI would be. I don’t really know enough about AI to know whether or not this is strictly necessary for a decent AGI, but I get the impression that most (or all) serious would-be-AGI-builders are aiming for self-modification.
By my understanding, learning is basically when a program itself collects the data it uses, through interaction with some external system. Self-modification, on the other hand, is when the program has direct read/write access to its own source code, so it can modify its own decision-making algorithm directly, not just the data set its algorithm uses.
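A toy contrast in code, under my own framing of that distinction (none of these names or details come from the discussion):

```python
class LearningAgent:
    """Learning: the decision rule is fixed; only the stored data/parameters change."""
    def __init__(self):
        self.threshold = 0.5                  # a learned parameter (data)

    def learn(self, observation):
        # interaction with an external system updates the data the fixed rule uses
        self.threshold = 0.9 * self.threshold + 0.1 * observation

    def decide(self, x):
        return x > self.threshold             # this rule itself never changes


# Self-modification: the program holds, and can rewrite, its own decision procedure.
decision_source = "lambda x: x > 0.5"

def self_modify(new_source):
    # direct "write access" to the source of the decision-making algorithm
    global decision_source
    decision_source = new_source

decide = eval(decision_source)                # run the current algorithm
self_modify("lambda x: x ** 2 > 0.3")         # rewrite the algorithm, not just its data
decide = eval(decision_source)
```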
I remember that in another discussion, where the diminishing returns of money were given as an issue with a thought experiment, somebody suggested that you could eliminate the effect by just stipulating that (in the case of a wager) your winnings will be donated to a highly efficient charity that feeds starving children or something.
My own thoughts: If the amount of money involved is so small that it would be worthless to any charity, just multiply everything by a sufficiently large constant. If you run out of starving children, start researching a cure for cancer, and once that’s cured you can start in on another disease, etc. Once all problems are solved, we can assume that the standard of living can be improved indefinitely.
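A rough sketch of why that stipulation helps (toy numbers of my own): a personal utility function for money is concave and flattens out, but winnings pledged to an effective charity buy outcomes roughly in proportion to the dollars, so the wager’s value is no longer distorted by diminishing returns.

```python
import math

wealth = 50_000                                  # hypothetical starting wealth

def personal_utility(gain):
    # concave (log) utility: each extra dollar matters less and less to you
    return math.log(wealth + gain) - math.log(wealth)

def charity_utility(gain, cost_per_outcome=5_000):
    # roughly linear: twice the donation buys about twice the outcomes
    return gain / cost_per_outcome

for gain in (1_000, 1_000_000):
    print(gain, round(personal_utility(gain), 3), charity_utility(gain))
# A prize 1000x larger is worth far less than 1000x as much to you personally,
# but about 1000x as much when donated.
```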
Hmm… I don’t think humanity’s terminal values have changed very much since Benjamin Franklin (as a matter of fact, he was an Enlightenment figure, and the Enlightenment is probably the most recent shift of terminal values in the Western world: political liberty, scientific truth, etc.). The things that I imagine would horrify him are mostly either actually bad (Global warming! Nuclear bombs!) or a result of cultural taboos or moral injunctions that have been lifted since his time (Gay marriage! Barack Obama!). This, it seems, is what we mostly mean by moral progress: The lifting of {cultural taboos/moral injunctions} which inhibit our terminal moral values.
Sorry if I was unclear, since I was jumping around a bit; five bucks is the cash demanded by the “mugger” in the original post.
My understanding was that in your comment you basically said that our current inability to modify ourselves is evidence that an AGI of human-level intelligence would likewise be unable to self-modify.
I, too, was worried about this at first, but you’ll find that http://lesswrong.com/lw/jj/conjunction_controversy_or_how_they_nail_it_down/ contains a thorough examination of the research on the conjunction fallacy, much of which involves eliminating the possibility of this error in numerous ways.
“How many alcoholics, chronic gamblers, and so on, are really incapable of helping themselves, as opposed to just being people who enjoy drinking or gambling and claim to be unable to help themselves to diminish social disapproval?”
But by self-diagnosing as an alcoholic, a person would thereby be much more likely to become the focus of deliberate social interventions, like being taken to Alcoholics Anonymous (a shining example, by the way, of well-organized and effective social treatment of a disease) or some such. This sort of focused attention, essentially being treated as if one had a disease, would, I think, be the opposite of what a hedonistic boozer would want. Would they really consider possible medical intervention a fair price to pay for slightly less disapproval from friends?
I think we all seem to be forgetting that the point of this article is to help us engage in more productive debates, in which two rational people who hold different beliefs on an issue come together and satisfy Aumann’s Agreement Theorem, which is to say, at least one person becomes persuaded to hold a different position from the one they started with. Presumably these people are aware of the relevant literature on the subject of their argument; the reason they’re on a forum (or comment section, etc.) instead of at their local library is that they want to engage directly with an actual proponent of another position. If they’re less than rational, they might be entering the argument to persuade others of their position, but nobody’s there for a suggested reading list. If neither opponent has anything to add besides a list of sources, then it’s not an argument; it’s a book club.