Can you elaborate a bit on what exactly is your intention?
Specifically, is this meant to be a scale of severity categories with one example for each, or is it meant as an exhaustive list of all relevant apocalyptic scenarios put into a ranking?
Great idea. I will probably do something like this myself at some point, and it will likely look much like yours.
The only thing I see that might be missing is advice for a scenario in which the odds of revival go down with time, creating pressure to revive you sooner rather than later. In that case your wishes may contradict each other, since later revival could still increase the odds of living indefinitely (see the toy sketch below). That seems far-fetched, but not entirely impossible.
Other than that, I’d say be more specific to avoid any possible misinterpretation. You never know how much bureaucracy will be involved in the process when it finally happens.
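To illustrate that tension, here is a minimal sketch in Python. Both probability curves are invented assumptions, purely to show how the two wishes can pull in opposite directions:

```python
# Toy expected-value model of the revival-timing tension described above.
# Both probability curves are made-up assumptions, purely for illustration.

def p_revival(years_waited: int) -> float:
    # Assumption: the odds of a successful revival decay the longer you wait.
    return max(0.0, 0.9 - 0.01 * years_waited)

def p_indefinite_life(years_waited: int) -> float:
    # Assumption: the odds that revival-era technology can sustain you
    # indefinitely improve the longer you wait.
    return min(1.0, 0.1 + 0.02 * years_waited)

def expected_value(years_waited: int) -> float:
    return p_revival(years_waited) * p_indefinite_life(years_waited)

# Because the two curves pull in opposite directions, the optimum is interior:
best = max(range(91), key=expected_value)
print(best, round(expected_value(best), 3))  # here: wait ~42 years
```

The numbers mean nothing; the point is only that written instructions would need to say how to resolve such a trade-off.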
Every emotion exists only in your head, so that’s not useful advice. The same argument could be made for virtually every form of social insecurity.
If I may ask: you are the same registered user who made the initial comment, so why reply to yourself? Are there multiple people using the same account?
To also offer help: this might seem incredibly obvious, but a lot of people still don’t do it. Be conscious of the problem and actively make plans to address it.
E.g. if you know ahead of time that a situation will come up where you’d feel embarrassed, calculate beforehand what you’d have to do to avoid it entirely. If you decide that you have to go through with it, have a plan to minimize the embarrassment somehow (what that looks like depends on the context). None of that will solve the issue, but actively looking for loopholes rather than going into situations blindly could reduce harm.
You could also consider ways to solve some instances of the problem permanently while dodging the embarrassment, e.g. make active attempts to learn how to ride a bike, either on your own or with a person who’s willing and with whom you’d feel comfortable, if such a person exists.
Ah, I see. Thanks.
I actually don’t quite agree (this is the first time I’ve found something to criticize in one of the Sequences posts).
To me, it seems like humility as discussed here is inherently a distortion that, when applied, shifts a conclusion in some way. The reason it can be a good thing is simply that, if a conclusion is flawed, it can shift it into a better place, as a counter-measure to existing biases. It is as if I do a bunch of physical measurements, realize that the value I observe is usually a bit too small, and so add a fixed value to my number every time, hoping to move it closer to the correct one.
However, once I fix my measurement tools, that distortion becomes harmful. Similarly, once I actually get my rationality right, humility will become harmful. In this case, there also seems to be a general tool for correcting your conclusion, which is to use the outside view rather than the inside view. Applying that to the engineer example:
What about the engineer who humbly designs fail-safe mechanisms into machinery, even though he’s damn sure the machinery won’t fail? This seems like a good kind of humility to me.
If the engineer used the outside view, he would know that humans are fallible and already conclude that he should spend an appropriate amount of time on fail-safe mechanisms. If he then applied humility on top of that, downplaying his efforts despite having used the outside view, it would lead him to worry about and work on them more than necessary.
Of course, you could reason that in my example, applying the outside view is itself a form of applying humility. My point is simply that even proper humility doesn’t seem to cover any new ground. It’s not “part of rationality,” so to speak; it’s simply a practically useful tool to apply while you haven’t conquered your biases yet. In that sense, I would argue that, ultimately, the correct way to use humility is not to apply it deliberately at all: it should fall out automatically, without doing anything.
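To make the measurement analogy concrete, here is a minimal sketch (all numbers invented) of how a fixed correction helps a biased instrument but hurts a repaired one:

```python
# Minimal sketch of the measurement analogy; all numbers are made up.
import random

TRUE_VALUE = 10.0
CORRECTION = 0.5  # the fixed value I "just add every time"

def biased_instrument() -> float:
    return TRUE_VALUE - 0.5 + random.gauss(0, 0.1)  # reads systematically too small

def fixed_instrument() -> float:
    return TRUE_VALUE + random.gauss(0, 0.1)        # the bias has been repaired

def mean_error(instrument, n: int = 10_000) -> float:
    corrected = [instrument() + CORRECTION for _ in range(n)]
    return abs(sum(corrected) / n - TRUE_VALUE)

print(mean_error(biased_instrument))  # ~0.00: the correction counteracts the bias
print(mean_error(fixed_instrument))   # ~0.50: the same correction now distorts
```

Humility-as-correction behaves the same way: once the underlying process is fixed, applying the correction anyway makes things worse.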
Is there a relevant difference in how thoroughly the eventual winner will incorporate AI safety measures? Or do you think it is merely a matter of actually solving the [friendly AI] problem, and that once it is solved, it will surely be used?
I think this is the first article in a long time that straight up changed my opinion in a significant way. I always considered empathy a universally good thing – in all forms. In fact I held it as one of the highest values. But the logic of the article is hard to argue with.
I still tentatively disagree that it [emotional empathy] is inherently bad. Following what I read, I’d say it’s harmful because it’s overvalued/misunderstood. The solution would be to recognize that it’s an egoistic thing (as I’m writing this, I can confirm that I now think this), whereas cognitive empathy is the selfless thing.
Doing more self-analysis, I think I already understood this on some level, but I was holding the concept of empathy in such high regard that I wasn’t able to consciously criticize it.
I think this article is something that people outside of this community really ought to read.
My observation is that smart people generally try to live more ethically, but usually have skewed priorities; e.g. they’ll try to support the artists they like and to earn their money decently, when they’d fare better worrying less about all that and donating a bit to the right place every month. Quantitative utility arguments are usually met with rejection.
LWers, on the other hand, seem to be leaning in that direction anyway. Though I’m fairly new to the community, so I could be wrong.
I wouldn’t show it to people who lack a “solid” moral base in the first place. They probably fare better keeping every shred of empathy they have (thinking of how much discrimination still exists today).
100% doesn’t work because then you starve. If I reformulate your question as “is there any rebuttal to the claim that we should donate way more to charity than we currently do,” then the answer depends on your belief system. If you are a utilitarian, the answer is a definitive no: you should spend way more on charity.
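As a toy illustration of that utilitarian logic (log utility and every number here are my own assumptions): total utility peaks at a donation fraction far above typical giving, but strictly below 100%, because keeping nothing is unboundedly bad.

```python
# Toy utilitarian giving model; log utility and all numbers are assumptions.
import math

MY_INCOME = 50_000   # hypothetical donor income
POOR_INCOME = 500    # hypothetical recipient baseline income
RECIPIENTS = 10      # the donation is split evenly among recipients

def total_utility(fraction: float) -> float:
    donated = MY_INCOME * fraction
    kept = MY_INCOME - donated
    if kept <= 0:
        return float("-inf")  # "100% doesn't work because then you starve"
    return math.log(kept) + RECIPIENTS * math.log(POOR_INCOME + donated / RECIPIENTS)

best = max((f / 100 for f in range(100)), key=total_utility)
print(f"utility-maximizing donation fraction: {best:.2f}")  # here: 0.90
```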
Er… no. Utilitarianism prohibits that exact thing by design. That’s one of its most important aspects.
Read the definition. This is unambiguous.
As is, every level is only useful insofar as it helps with lower levels. But Level 1 still isn’t the ultimate goal: you don’t live to do the dishes, and not (at least not necessarily) to work. I think this model should be extended by Level 0 actions, which are things that directly cause happiness (or, alternatively, whatever else your ultimate goal in life is). Level 1 is, I think, solely useful for providing you (or others) with more opportunities to do Level 0. Level 2 is then useful for helping you with Level 1, and so on, so everything stays the same. Your thoughts about how people do too few / too many actions on a certain level are also directly applicable to Level 0.
What is different is that Level n actions can now also have a Level 0 component, but I think that’s useful to have, since it corresponds to a real thing in the world that was previously not covered. As an example, if you can do a combined Level 2 & 0 action (such as reading up on computer science, which you enjoy) instead of a pure Level 0 action, then that should always be a good idea, even if there is a risk of low connectivity back to Levels 1 and 0.
Maybe I misunderstand something, but why would less complexity imply higher frequency, when those capable of experiencing either will generally strive for happiness?
Well, fuck.
Is there a place where Yudkowsky has talked about consciousness? I have found the Zombie Series, but that’s not quite what I’m looking for; I’m more curious about how he thinks consciousness works than about why zombies don’t.
Also, is there a place where he has talked about climate change?
I’ve looked for both, but I couldn’t find either.
This may be a naive and over-simplified stance, so educate me if I’m being ignorant, but isn’t promoting anything that speeds up AI research the absolute worst thing we can do? If the fate of humanity rests on the outcome of the race between solving the friendly AI problem and reaching intelligent AI, shouldn’t we only support research that goes exclusively into the former, and perhaps even try to slow down the latter? The link you shared seems to fall into the latter category, aiming at general promotion of the idea and at accelerating research.
Feel free to just provide a link if the argument has been discussed before.
This should probably have been posted in the open thread (not meant as a reproach).
Those of us who have read MoR already knew!
The site is basically founded on the Sequences. If you reject them, then why bother with LW at all (your choice, of course, but you should expect them to be referred to)? And if you don’t reject them, then why complain about them being brought up?
To me it is immediately obvious that torture is preferable. Judging by the comments, I’m in the minority.