This made my trust in the community and my judgement of its average quality go down a LOT...
I expected almost everyone to agree with Eliezer on most important things...
Alicorn (top-poster) doesn’t agree with Eliezer about ethics. PhilGoetz (top-poster) doesn’t agree with Eliezer. Wei_Dai (top-poster) doesn’t agree with Eliezer on AI issues. wedrifid (top-poster) doesn’t agree with Eliezer on CEV and the interpretation of some game and decision theoretic thought experiments.
I am pretty sure Yvain doesn’t agree with Eliezer on quite a few things too (too lazy to look it up now).
Generally there are a lot of top-notch people who don’t agree with Eliezer. Robin Hanson, for example. But also others who have read all of the Sequences, like Holden Karnofsky from GiveWell, John Baez, or Katja Grace, who has been a visiting fellow.
But even Rolf Nelson (a major donor and well-read Bayesian) disagrees about the Amanda Knox trial. Or take Peter Thiel (SI’s top donor) who thinks that the Seasteading Institute deserves more money than the Singularity Institute.
I am extremely surprised by this, and very confused. This is strange because I technically knew each of those individual examples… I’m not sure what’s going on, but I’m sure that whatever it is it’s my fault and extremely unflattering to my ability as a rationalist.
How am I supposed to follow my consensus-trusting heuristics when no consensus exists? I’m too lazy to form my own opinions! :p
I just wait, especially considering that which interpretation of QM is correct doesn’t have urgent practical consequences.
We just learned that neutrinos might be accelerated faster than light in certain circumstances. While this result doesn’t give me too much pause, it certainly made me think about the possible practical consequences of successfully understanding quantum mechanics.
Fair enough. A deeper understanding of quantum mechanics would probably have huge practical consequences.
It isn’t obvious to me that figuring out whether the MWI is right is an especially good way to improve understanding of QM. My impression from LW is that MWI is important here for looking at ethical consequences.
I share that impression :) Plus it’s very fun to think about Everett branches and acausal trade when I pretend we would have a chance against a truly Strong AI in a box.
This is strange because I technically knew each of those individual examples… I’m not sure what’s going on,
Sounds like plain old accidental compartmentalization. You didn’t join the dots until someone else pointed out they made a line. (Admittedly this is just a description of your surprise and not an explanation, but hopefully slapping a familiar label on it makes it less opaque.)
Holden Karnofsky has read all of the Sequences?
I wrote him an email to make sure. Here is his reply:
I’ve read a lot of the sequences. Probably the bulk of them. Possibly all of them. I’ve also looked pretty actively for SIAI-related content directly addressing the concerns I’ve outlined (including speaking to different people connected with SIAI).
take Peter Thiel (SI’s top donor) who thinks that the Seasteading Institute deserves more money than the Singularity Institute.
IIRC Peter Thiel can’t give SIAI more than he currently does without causing some form of tax difficulties, and it has been implied that he would give significantly more if this were not the case.
Right. I remember the fundraising appeals about this: if Thiel donates too much, SIAI begins to fail the 501(c)(3) requirement that it “receives a substantial part of its income, directly or indirectly, from the general public or from the government. The public support must be fairly broad, not limited to a few individuals or families.”
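For anyone curious why one very large donor can actually hurt here, a rough sketch of the public-support arithmetic, assuming the commonly cited one-third threshold and the rule that any single donor’s gifts count toward “public support” only up to 2% of total support (the numbers are made up and this is not tax advice):

```python
# Sketch of the 501(c)(3) public support test (illustrative only).
# Assumed rules: each donor's gifts count toward "public support" only
# up to 2% of total support, and the fraction should stay above ~33.3%.

def public_support_fraction(donations):
    """donations: donor name -> total gift over the measuring period."""
    total = sum(donations.values())
    cap = 0.02 * total  # each donor counts only up to 2% of total support
    public = sum(min(gift, cap) for gift in donations.values())
    return public / total

# One huge gift barely moves "public support", so enlarging it
# further only drags the fraction down.
donors = {"big_donor": 500_000, **{f"small_{i}": 1_000 for i in range(200)}}
print(f"{public_support_fraction(donors):.1%}")  # ~30.6%, below one third
```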