What’s the chance that AI doesn’t have that much of an impact on the world by 2100?
Honestly, that one belongs in the settled-questions bin next to theism. Making intellectual progress requires having spaces where the basics can be taken for granted, where “the basics” means whatever is basic for people trying to contribute at the intellectual frontier, rather than for the human population at large.
How well does the epistemic process on LW work? Are there any changes you would make to LW’s epistemic processes?
This is never going to be perfect, anywhere, and people should always be on the lookout for epistemic problems. But there’s a pretty strong outside-view reason to think LW’s epistemics will outperform those of the rest of the world: it’s full of people who invest heavily in improving their epistemics, and who have abstract discussions about how to do so.
What’s the chance that we do have massive impacts, but alignment is so easy that standard ML techniques work?
I think this is the core question, though the framing is slightly off. I also think it’s the core point of disagreement between the AGI Ruin perspective and the AI Accelerationist perspective.
How hard alignment is, is a continuous variable, not a boolean. The edges of the range are “it’s borderline impossible to solve before time runs out” and “it’s trivial and will solve itself”. The same applies to framing specific research as capabilities research or as alignment research: a lot of work lives in the border region, where it makes more sense to think of each piece of research as having some ratio between the two.
I don’t think the people leading and working in AGI research programs think alignment is easy. I do think they believe it’s easier than the AGI Ruin camp does, by a large enough amount to change their view of the cost-benefit of accelerating timelines. And because difficulty is a continuous variable with a lot of inputs, expanding it out doesn’t yield a single large crux that distinguishes the two camps, but rather a large number of smaller, unshared differences in belief and intuition.
(I lean more towards the “it’s hard” side, but am decidedly not on the edge of the scale; I think it’s likely to be close enough that individual insights into alignment, and modest changes to research timelines, could potentially be decisive. I also think that my difficulty-estimation could move substantially in either direction without changing my beliefs about the correct course of action, due to a “playing to outs” argument.)
Strong downvoted for tone on the settled-questions answer: the reason the question belongs in the settled bin is that it’s really easy to answer. Simply put, AI has already had an enormous impact, and more of the same would be pretty damn world-changing.
Agree voted.