Thank you. I didn’t phrase my question very well but what I was trying to get at was whether making a friendly AGI might be, by some measure, orders of magnitude more difficult than making a non-friendly one.
I claim the bet is fair if both players expect to make the same profit on average.
I like this idea. As you say, it’s not the only way to define it but it does seem like a very reasonable way. The two players have come upon a situation which seems profitable to both of them and they simply agree to “split the profit”.
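To make the “split the profit” idea concrete, here is a minimal sketch of one way to formalize it (the probabilities, stakes, and function name below are invented for illustration, not taken from the original exchange): size the stakes so that each player, by their own probability estimate, expects the same profit.

```python
def equal_profit_stakes(p_a, p_b, stake_b=1.0):
    """Size the stakes of a two-sided bet so both players expect the
    same profit by their own lights.

    Player A backs the event (believing it has probability p_a) and
    risks stake_a to win stake_b; player B bets against it (believing
    p_b < p_a) and risks stake_b to win stake_a. Setting
        p_a * stake_b - (1 - p_a) * stake_a      # A's expected profit
      = (1 - p_b) * stake_a - p_b * stake_b      # B's expected profit
    and solving gives stake_a / stake_b = (p_a + p_b) / (2 - p_a - p_b).
    """
    stake_a = stake_b * (p_a + p_b) / (2 - p_a - p_b)
    return stake_a, stake_b

# Illustration: A thinks the event is 80% likely, B thinks 40%.
stake_a, stake_b = equal_profit_stakes(0.8, 0.4)
print(stake_a)                        # 1.5
print(0.8 * stake_b - 0.2 * stake_a)  # A expects +0.5
print(0.6 * stake_a - 0.4 * stake_b)  # B expects +0.5
```

Each player sees the other as making a mistake, so each expects a profit; equalizing those two subjective expectations is the “split the profit” agreement.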
Can you talk about your specific field in linguistics/philology?
I’ve mucked about here and there including in language classification (did those two extinct tribes speak related languages?), stemmatics (what is the relationship between all those manuscripts containing the same text?), non-traditional authorship attribution (who wrote this crap anyway?) and phonology (how and why do the sounds of a word “change” when it is inflected?). To preserve some anonymity (though I am not famous) I’d rather not get too specific.
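For readers unfamiliar with non-traditional authorship attribution, here is a bare-bones sketch of one standard stylometric technique, Burrows’ Delta over function-word frequencies. It is only an illustration of the kind of method used in the field, not necessarily the one alluded to above, and the word list is a toy placeholder.

```python
import statistics

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "a", "that"]  # toy list

def freqs(text):
    """Relative frequencies of the function words in one text."""
    words = text.lower().split()
    return [words.count(w) / max(len(words), 1) for w in FUNCTION_WORDS]

def burrows_delta(disputed, samples):
    """samples: {author: text}. Returns {author: delta}; lower = closer.
    Delta is the mean absolute difference in z-scored word frequencies."""
    table = {author: freqs(text) for author, text in samples.items()}
    cols = list(zip(*table.values()))            # frequencies per word
    means = [statistics.mean(c) for c in cols]
    sds = [statistics.pstdev(c) or 1.0 for c in cols]  # guard zero spread
    dz = [(f - m) / s for f, m, s in zip(freqs(disputed), means, sds)]
    deltas = {}
    for author, fs in table.items():
        az = [(f - m) / s for f, m, s in zip(fs, means, sds)]
        deltas[author] = statistics.mean(abs(d - a) for d, a in zip(dz, az))
    return deltas
```

In practice one would use hundreds of carefully chosen words, proper tokenization, and held-out validation; the sketch only shows the shape of the computation.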
what are the main challenges?
There are lots of little problems I’m interested in for their own sake but perhaps the meta-problems are of more interest here. Those would include getting people to accept that we can actually solve problems and that we should try our best to do so. Many scholars seem to have this fatalistic view of the humanities as doomed to walk in circles and never really settle anything. And for good reason—if someone manages to establish “p” then all the nice speculation based on assuming “not p” is worthless. But many would prefer to be as free as possible to speculate about as much as possible.
Do you have a stake/an opinion in the debates about the Chomskian strain in syntax/linguistics in general?
Yes. I think the Chomskyan approach is based on a fundamentally mistaken view of cognition, akin to “good old fashioned artificial intelligence”. I hope to write a top-level post on this at some point. But I’ll say this for Chomsky: He’s not a walk-around-in-circles obscurantist. He’s a resolutely-march-ahead kind of guy. A lot of the marching was in the wrong direction, but still, I respect that.
Back when you joined Wikipedia, in 2004, many articles on relatively basic subjects were quite deficient and easily improved by people with modest skills and knowledge. This enabled the cohort that joined then to learn a lot and gradually grow into better editors. This seems much more difficult today. Is this a problem and is there any way to fix it? Has something similar happened with LessWrong, where the whole thing was exciting and easy for beginners some years ago but is “boring and opaque” to beginners now?
What probability would you assign to this statement: “UFAI will be relatively easy to create within the next 100 years. FAI is so difficult that it will be nearly impossible to create within the next 200 years.”
You can ask me things if you like. At Reddit, some of the most successful AMAs are when people are asked about their occupation. I have a PhD in linguistics/philology and currently work in academia. We could talk about academic culture in the humanities if someone is interested in that.
Good point on majority voting. It matters a lot whether a comment has 18 upvotes and 14 downvotes or 14 upvotes and 18 downvotes. So a relatively narrow majority on polarized subjects can give you important control over the conversation.
Correct, but then you shouldn’t handwave into existence an assertion which is really at the core of the dispute.
The argument I am trying to approach is about proposals which make sense under the assumption of little or no relevant technological development but may fail to make sense once disruptive new technology enters the picture. I’m assuming the tree plan made sense in the first way—the cost of planting and tending trees is such and such, the cost of quality wood is such and such and the problems with importing it (our enemies might seek to control the supply) are such and such. Other projects we could spend the same resources on have such and such cost-benefit evaluations of their own. And so on and so forth. In this thought experiment you could assume a very sophisticated analysis which comes up smelling like roses. The only thing it doesn’t take into account is disruptive new technology. That’s the specific issue I’m trying to address here so that’s why I’m willing to assume all the other stuff works for the sake of argument.
In actual history, maybe the tree plan never even made any sense to begin with—maybe wood was cheap and plentiful and planting the oak trees was difficult and expensive. For all I know the whole thing was a ridiculous boondoggle which didn’t make sense under any assumption. But that’s just an uninteresting case which need not detain us.
For the sake of argument I’m assuming the plan made prima facie sense and was only defeated by technological developments. Sufficiently familiarizing myself with the state of affairs in 1830s Sweden to materially address the question would, I think, be excessively time-consuming.
As usual, gwern has made a great comment. But I’m going to bite the bullet and come out in favor of the tree plan. Let’s go back to the 1830s.
My fellow Swedes! I have a plan to plant 34,000 oak trees. In 120 years we will be able to use them to build mighty warships. My analysis here shows that the cost is modest while the benefits will be quite substantial. But, I hear you say, what if some other material is used to build warships in 120 years? Well, we will always have the option of using the wood to build warships and if we do not take that option it will be because some even better option will have presented itself. That seems like a happy outcome to me. And wood has been useful for thousands of years—it will surely not be completely obsolete in a century. We could always build other things from it, or use it for firewood or designate the forest as a recreational area for esteemed noblemen such as ourselves. Or maybe the future will have some use for forests we cannot yet anticipate [carbon sequestration]. I don’t see how we can really go wrong with trees.
Back to the present. I’m concerned with avoiding disasters. “The benefits of this long-term plan were not realized because something even better happened” is only a disaster if the cost of the plan was disastrous. Of course, some people argue that the costs of addressing some of Dr. Jubjub’s problems are disastrous and that’s something we can discuss on the merits.
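The speech above is implicitly an option-value argument: in every future scenario the wood is worth the best of whatever uses remain. A toy expected-value model makes that structure explicit (every probability and value below is invented for illustration, not a historical estimate):

```python
# Toy option-value model of the oak plan; every number is invented.
scenarios = {
    # name: (probability, {use: value of the mature forest in that use})
    "wooden warships still wanted": (0.3, {"warships": 100, "timber": 40}),
    "ironclads make oak obsolete":  (0.5, {"timber": 40, "firewood": 10}),
    "unforeseen use appears":       (0.2, {"unforeseen": 60, "timber": 40}),
}
planting_cost = 20

# In each scenario the planner takes the best remaining option for the wood.
expected_value = sum(p * max(uses.values()) for p, uses in scenarios.values())
print(expected_value - planting_cost)  # 42.0: positive even though the
                                       # headline use was made obsolete
```

The numbers are beside the point; the structure is that the planner takes a max over fallback uses in every branch, so disruptive technology only turns the plan into a disaster if it also destroys the fallbacks.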
I hadn’t thought of this either! It does sound like fun to hunt with the group.
The distinction you are making between robustness and resilience was not previously familiar to me but seems useful. Thank you.
Obviously, “no significant technological advances” is a basically impossible scenario. I just mean it as a baseline. If you’re able to handle techno-stagnation in all domains you’re able to handle any permutation of stagnating domains.
Chris’ comment has, to be sure, around 18 downvotes but it also has around 14 upvotes, so many people probably agree with him.
I fully agree.
Thank you, I hadn’t considered that viewpoint.
I actually suspect we have too much sting rather than too little. Compare with this discussion. Furthermore, most of Eliezer’s Facebook posts would make good discussion posts or open-thread comments but he posts them there rather than here. I don’t know why but maybe he finds it less stressful to post in a system where there are only upvotes and no downvotes.
Also compare with this Oatmeal comic: “How I feel after reading 1,000 insightful, positive comments about my work and one negative one: The whole internet hates me :(” Obviously an exaggeration for effect but I do think most people need a very high ratio of positive to negative feedback to feel good about what they’re doing. I admit I do. Many of you, of course, are made of sterner stuff, I don’t dispute that.
I don’t have the expertise to predict anything of interest about future developments in solar technology. My general inclination is simply that we should have plans that do not lead to disaster if hoped-for technological advances fail to materialize. If we could make our civilization robust enough that it could continue to function for an indefinite time without any significant technological advances, that would be awesome.
It was, in part. But I certainly also had climate change in mind, where I’ve argued the Jubjub case for years with my friends. I’ve also seen the “Future tech will make your concerns irrelevant” viewpoint in discussions of resource depletion and overpopulation.
The question is: why should we care about slithy toves? How high is the utility of protecting them? You need to answer those questions to get me to care about slithy toves.
In my parable, the two scientists agree that slithiness is important. If I were to convince you of it we would of course have to exit the parable and discuss some particular real world problem on the merits.
It depends on what you mean by business-as-usual.
Which in turn depends on the particular Jubjub problem we are discussing. If it’s global warming, for example, then developments in energy technology will be important.
I’m by no means insisting on that. Of course you can hedge your bets.
Okay, I’ll bite. Do you think any part of what MIRI does is at all useful?