Donated $500.
Kutta
I sent 640 dollars.
Survey taken.
This idea that whenever something evil happens someone particular can be blamed and punished for it, in life and in politics, is hopeless.
-- Hayao Miyazaki
If anything of the classical supernatural existed, it would be a branch of engineering by now.
-- Steve Gilham
Is that really just it? Is there no special sanity to add, but only ordinary madness to take away?
I think this is the primary factor. I’ve got a pretty amusing story about this.
Last week I met a relatively distant relative, a 15-year-old guy who's in a sports-oriented high school. He plays football, has little scientific, literary or intellectual background, and is quite average and normal in most conceivable ways. Some TV program on Discovery was about “robots”, and in the spontaneous 15-minute conversation that unfolded shortly afterwards I managed to explain to him the core problems of FAI without him getting stuck at any point of my arguments. I’m fairly sure that he had no previous knowledge of the subject.
First I made a remark in connection with the TV program’s poetic question about what happens if robots are able to get most human work done; I said that if robots take the low-wage jobs, humans would eventually get paid more on average, and the problem only arises when robots can do everything humans can and somehow end up actually doing all those things.
Then he asked if I think they’ll get that smart, and I answered that it’s quite possible in this century. I explained recursive self-improvement in two sentences, to illustrate why they could potentially get very, very smart in a small amount of time. I talked about the technology that would probably allow AIs to act upon the world with great efficiency and power. Next, he said something like “that’s good, wouldn’t AIs be a big help, like, they will invent new medicine?” At this point I was pretty amused. I assured him that AIs indeed have great potential. I then talked very briefly about the most basic AI topics, providing the usual illustrations like Hollywood AIs, smiley-tiled solar systems and foolish programmers overlooking the complexity of value. I delineated CEV in a simplified “redux” manner, focusing on the idea that we should optimally just extract all relevant information from human brains by scanning them, to make sure nothing we care about is left out. “That should be a huge technical problem, to scan that many brains,” he said.
And now:
“But if the AI gets so potent, wouldn’t it be a problem anyway, even if it’s perfectly friendly, that it can do everything much better than humans, and we’ll get bored?”
“Hahh, not at all. If you think that getting all bored and unneeded is bad, then it is a real preference inside your head. It’ll be taken into account by the AI, and it will make sure it’ll not pamper you excessively.”
“Ah, that sounds pretty reasonable”.
Now, all of this happened in the course of roughly 15 minutes. No absurdity heuristic, no getting lost, no objections; he just took everything I said at face value, assuming that I was more knowledgeable on these matters, and I was generally convinced that nothing I explained was particularly hard to grasp. He asked relevant questions and was very interested in what I said.
Some thoughts on why this was possible:
The guy belongs to a certain social stratum in Hungary, namely those who newly entered the middle class through free entrepreneurship, which became a possibility after the country switched to capitalism. At first the socialist regime repressed religion and just about every human right, then eased up, softened, and became what’s known as the “happiest barrack”. People became unconcerned with politics (which they could not influence) and religion (which was thought of as a highly personal matter that should not be brought into public); they just focused on their own wealth and well-being. I’m convinced that the guy’s parents care nothing about any religion, the absence of religion, doctrine, ideology or whatever. They just work to make a living and don’t think about lofty matters, leaving their son ideologically perfectly intact. Just like my own parents.
Actually, AI is not intrinsically abstract or hard to digest; my interlocutor knew what an AI is, even if only from movies, and probably watched just enough Discovery to have a sketchy picture of future technologies. The mind design space argument is not that hard (he knew about evolution because it’s taught in school, and he immediately agreed that AIs can be much smarter than humans because if we wait a million years, maybe humans can also become much smarter, so it’s technically possible), and the smiley-tiled solar system is an entertaining and effective illustration about morality. I think Eliezer has put extreme amounts of effort into maximizing the chance that his AI ideas get transmitted even to people who are primed or biased against AI or at risk of motivated skepticism. So far, I’ve had great success using his parables, analogies and ways of explanation.
My perceived status as an “intellectual” made him accept my explanations at face value. He’s a football player in a smallish countryside city and I’m a serious college student in the capital (it’s good he doesn’t know how lousy a student I am). Still, I do not think this was a significant factor. He probably doesn’t talk about AI among football players, but being male he has some basic interest in futuristic or gadgety subjects.
In the end, it probably all comes down to lacking some specific kinds of craziness. Cryonics seemed normal at that convention Eliezer attended, and I’m sure every idea that is epistemically and morally correct can in principle be a so-called normal thing. Besides this guy, I’ve also had full success lecturing a 17-year-old metal drummer on AI and SIAI; he was situated socioeconomically very similarly to the first guy, and he had no previous knowledge either.
I donated $250.
Update: No, apparently I did not. For some reason the transfer from Google Checkout got rejected, and now PayPal too. Does anyone have an idea what might’ve gone wrong? I have a Hungarian bank account. My previous SI donations were fine, even with the same credit card if I recall correctly, and I’m sure that my card is still perfectly valid.
Took all of them.
Many people equate tolerance with the attitude that every belief is equally true, and that we should all simply accept this fact and go our separate ways. But I view tolerance as the willingness to come together, to face one another in the same room and hack at each other with claw hammers until the truth finally trickles out from the blood and the tears.
-- Raving Atheist, found via the Black Belt Bayesian blog (props to Steven)
I was rather disappointed by the story; it struck me as a regular conversion, driven by positive affect, social reinforcement, fuzzy feelings, motivated cognition, and characterized by a profound lack of truth-seeking. I expected something more unique or something strangely appealing.
Forget Jesus. The stars died so that you could be here today.
The correct question to ask about functions is not “What is a rule?” or “What is an association?” but “What does one have to know about a function in order to know all about it?” The answer to the last question is easy: for each number x one needs to know the number f(x) (…)
-- M. Spivak, Calculus
It’s neither our economy nor our multimedia that I’m most concerned about, but whether the kids are lively and in good shape. I mean, as long as the people are doing fine it doesn’t matter if the nation is in poverty.
-- Hayao Miyazaki
Agreed; I’d personally like it if a planned schedule for major grants were disclosed regularly, maybe annually.
Anyway, I donated 500 USD.
My gut-level, ingrained belief that people on the Internet don’t die is badly shaken.
I prefer the old map-territory themed header over the new design.
GEB is great as many things: as an introduction to formal systems, self-reference, several computer science topics, Gödel’s First Incompleteness Theorem, and other stuff. Often it is also a unique and very entertaining hybrid of art and nonfiction. Without denying any of those merits, the book’s weakest point is actually its core message, quoted in the OP as
GEB is a very personal attempt to say how it is that animate beings can come out of inanimate matter… GEB approaches [this question] by slowly building up an analogy that likens inanimate molecules to meaningless symbols, and further likens selves… to certain special swirly, twisty, vortex-like, and meaningful patterns that arise only in particular types of systems of meaningless symbols.
What Hofstadter does is the following: he identifies self-awareness and self-reference as core features of consciousness and/or intelligence, and he embarks on a long journey across various fields in search of phenomena that also have something to do with self-reference. This is some kind of weird essentialism; Hofstadter tries to reduce extremely high-level features of complex minds to (superficially) similar features that arise in enormously simpler formal and physical systems. Hofstadter doesn’t believe in ontologically fundamental mental entities, so he’s far from classic essentialism, yet he believes in very low-level “essences” of consciousness that percolate up to high-level minds. This abrupt jumping across levels of organization reminds me a bit of those people who try to derive practical everyday epistemic implications from the First Incompleteness Theorem (or get dispirited because of some implied “inherent unknowability” of the world).
Now, to be fair, GEB considers medium levels of organization in its two chapters on AI, but it is far less elaborate on those matters than on formal systems, for instance. The AI chapters are also the most outdated now, and even there Hofstadter isn’t really trying to do any noteworthy reduction of minds; instead he briefly ponders then-contemporary AI topics such as Turing tests, computer chess, SHRDLU, Bongard problems, symbol grounding, etc.
To be even fairer, valid reduction of high-level features of human minds is extremely difficult. Ev-psych and cognitive science can do it occasionally, but they don’t yet attempt to reduce general intelligence and consciousness itself. It is probably understandable that Hofstadter couldn’t see that far ahead into the future of cogsci, evpsych and AI. Eliezer Yudkowsky’s Levels of Organization in General Intelligence is the only reductionist work I know of that tries to wrestle with all of it at once, and while it is of course not definitive or even fully fleshed out, I think it represents the kind of mode of thinking that could possibly yield genuine insights about the mysteries of consciousness. In contrast, GEB never really enters that mode.
I definitely think there is great art out there that was solely designed to give people what they want; in film, someone like Chaplin comes to mind. I mean, giving people what they want is an art unto itself, but I think the real challenge in that method is finding a way to give them what they want while giving them more.
-- Jonathan Henderson
Indeed, Princess Mononoke is one of the least preachy eco-movies ever made, although I have a feeling that its main focus is actually not environmentalism but conflict resolution. To quote Miyazaki (from memory, from an awesome documentary/backstage series about Mononoke), the film is meant to “illustrate adult ways of thinking about issues”.
The impetus for posting these Miyazaki quotes was the movie-watching streak I went on recently. I’ve covered all of his movies except Castle of Cagliostro. I also read the Nausicaa manga, and its ending significantly upset me, to such an extent that I think I will write a gratuitous Fix Fic that alters the ending to my liking. It upset me because nearing the ending Miyazaki constructs a pretty coherent and sensible transhumanist stance for dealing with the in-universe world and its problems, and then utterly demolishes that stance in the finale. Without going into specifics, the protagonist chooses an option that significantly increases the chance that humanity goes extinct in order to a) suspend other-optimizing by (most likely benign, maybe malicious) external forces and b) eliminate medium-term technological risks of moderate severity.
I think Miyazaki did it to sound deep and because of some underlying deathism. The tragedy of it is that Miyazaki is not at all stupid; he is probably an atheist, averts romanticized environmentalism and conservatism all the time, and espouses the “uncaring universe” viewpoint. Also, he is a genuinely good-willed guy and a masterful craftsman and artist. His films reliably make me tear up. Still, he undeniably is tangled up in the head to some extent. In the Nausicaa manga he constructs the transhumanist viewpoint a lot more coherently and logically than the viewpoint of the heroine; poor Nausicaa actually sounds like a foil there. Which is a pity, because Nausicaa is a rare example of an extremely idealized main character who manages to avoid being bland and Mary Sue-ish. Because of the ending she goes from “awesome beacon of light and hope” to “she who screwed up the future”.
I hope you’ll excuse my rant about a manga that is probably read by few people; I think it has some relevance to LW as a failure-of-rationality case study. Aside from the ending it is also an excellent piece of art that I wholeheartedly recommend.
I’m eagerly awaiting the new Cosmopolitan cover with the line “TDT: THE HOTTEST NEW WAY TO WASHBOARD ABS”.