Interesting! Different experiences.
I do want to make it clear that people who are X often acknowledge that they are X, but don’t intensely worry about it. E.g. a friend who knows he’s abrasive, knows his life would be better if he were less abrasive on the margin, but doesn’t have the emotional reaction “oh god, am I being abrasive?” in the middle of social interactions.
On the other hand, I had undiagnosed (and accordingly untreated) bipolar II at the time of that comment, so my results are not generalizable. My hypomanic self wrote checks that my depressive self couldn’t cash.
That’s a great point. [Getting more pundits to make predictions at all] is much more valuable than [more accurately comparing pundits who do make predictions] right now, to such an extent that I now doubt whether my idea was worthwhile.
Meanwhile, Biden continues to double down on underpromising to maximize the chances of being able to claim overdelivery on all fronts.
Besides the incentives (cf. the Scotty Factor), it’s an important safety valve against the Planning Fallacy.
I have to disagree with you there. Thanks to my friends’ knowledge, I stopped my parents from taking a cross-country flight in early March, before much of the media reported that there was any real danger in doing so. You can’t wave off the value of truly thinking through things.
But don’t confuse “my model is changing” with “the world is changing”, even when both are happening simultaneously. That’s my point.
One problem: a high price can put more stress on a person, and raising the price further won’t fix that!
For instance, say that you leave a fic half-finished, and someone offers a million dollars to MIRI iff you finish it. Would you actually feel cheerful and motivated, or might you feel stressed and avoidant and guilty about being slow, and have a painful experience in actually writing it?
(If you’ve personally mastered your relevant feelings, I think you’d still agree that many people haven’t.)
I don’t know what to do in that case.
If this post is selected, I’d like to see the follow-up made into an addendum; it adds a very important piece, and it should have been nominated itself.
I think this post and Evan’s summary of Chris Olah’s views are essential, both in their own right and as foils to MIRI’s research agenda. We see related concepts (mesa-optimization originally came out of Paul’s talk of daemons in Solomonoff induction, if I remember right) but very different strategies for achieving both inner and outer alignment. (The crux of the disagreement seems to be the probability of success from adapting current methods.)
Strongly recommended for inclusion.
It’s hard to know how to judge a post that deems itself superseded by a post from a later year, but I lean toward taking Daniel at his word and hoping we survive until the 2021 Review comes around.
I can’t think of a question on which this post narrows my probability distribution.
The content here is very valuable, even if the genre of “I talked a lot with X and here’s my articulation of X’s model” comes across to me as a weird sort of intellectual ghostwriting. I can’t think of a way around that, though.
That being said, I’m not very confident this piece (or any piece on the current state of AI) will still be timely a year from now, so maybe I shouldn’t recommend it for inclusion after all.
Ironically enough for Zack’s preferred modality, you’re asserting that even though this post is reasonable when decoupled from the rest of the sequence, it’s worrisome when contextualized.
I agree about the effects of deep learning hype on deep learning funding, though I think very little of it has been AGI hype; people at the top had been heavily conditioned to believe we were (and are) still in the AI winter of specialized ML algorithms that each solve an individual task. (The MIRI-sphere had to work very hard, before OpenAI and DeepMind started doing externally impressive things, to get serious discussion of within-lifetime timelines from anyone outside the Kurzweil camp.)
Maybe Demis was strategically overselling DeepMind, but I expect most people were genuinely over-optimistic (and funding-seeking) in the way everyone in ML always is.
This is a retroactively obvious concept that I’d never seen so clearly stated before, which makes it a fantastic contribution to our repertoire of ideas. I’ve even used it to sanity-check my statements on social media. Well, I’ve tried.
This reminds me of That Alien Message, but as a parable about mesa-alignment rather than outer alignment. It reads well, and helps make the concepts more salient. Recommended.
This makes a simple and valuable point. As discussed in and below Anna’s comment, it plays out very differently when applied to a person who can interact with you directly versus a person whose works you read. But its usefulness in the latter context, and the way I expect new readers to assume that context, lead me to recommend it.
I liked the comments on this post more than I liked the post itself. As Paul commented, there’s as much criticism of short AGI timelines as there is of long AGI timelines; and as Scott pointed out, this was an uncharitable take on AI proponents’ motives.
Without the context of those comments, I don’t recommend this post for inclusion.
I’ve referred to and linked this post in discussions outside the rationalist community; that’s how important the principle is. (Many people understand the idea in the domain of consent, but have never thought about it in the domain of epistemology.)