Re: Tyler’s comment on “philosophical work” on x-risk reduction being largely a waste of time.
I’m not sure what he means by “philosophical work” here, but if he means “broad strategic work of the type sometimes done at FHI and MIRI,” well, the whole point of that work is to help answer questions exactly like the one you’re conflicted about here: whether OpenWorm represents good or bad differential technological development. It’s precisely because of such work at FHI and MIRI that we have the concept of “differential technological development” in the first place, and a collection of arguments for and against different kinds of differential technological development, even if the answers aren’t yet clear.
Before one country invades another, or cuts or supplies $1B in funding for some project, it would be nice to know whether doing so would be good or bad. That’s why Teller et al. studied the question of whether an atom bomb could ignite the atmosphere, and it’s why FHI and MIRI pursue much of the research we do.
I agree with Luke. It’s funny that Tyler, a pundit, says that pundits are useless for reducing existential risk.
Funding concrete projects is often relatively easy. People can see them and get excited about them. Asking people to fund higher-level research is harder.
Concrete work is what governments do. OpenWorm is not going to compete with Obama’s $3 billion BRAIN Initiative. Pundits make these issues political, raise awareness, and thereby lead to huge amounts of funding down the road.
As for the question of whether to favor WBE: I’d be nervous about it. It could also accelerate non-WBE AI through spillover technology, by increasing general interest in these topics, and so on. I don’t have a clear opinion, but the fact that the question is so hard suggests to me that this isn’t the most cost-effective place to push. There are many other donation targets where the benefits clearly outweigh the risks.
Before one country invades another or cuts or supplies $1B in funding for some project, it would be nice to know whether doing so would be good or bad.
Even leaving the definitions of “good” and “bad” aside, can you know that over a sufficiently long time horizon?
Five-year consequences we can get a reasonable forecast for. Hundred-year consequences, I submit, no one has the slightest clue about.
I like Brian Tomasik’s case for trying to increase reflectiveness in the general population. I would expect that increasing reflectiveness in the general population would, if anything, be self-reinforcing; I’d be a bit surprised to see a “reflectiveness backlash” where people decided they wanted to start being very unreflective as a result of reflectiveness being promoted too strongly. So increasing reflectiveness would seem to me to be an intervention that is pretty likely to steer humanity in a good direction, or at the very least, make it so that wherever we are in 10 years, we’ll have a better distribution of outcomes in front of us and factors to lean on.
My guess is there are other interventions that also fall into this category, e.g. improving the quality of political discourse and generally increasing people’s rationality. Basically, things that would prepare society to better deal with a broad range of tough situations we might face 100 years out.
trying to increase reflectiveness in the general population
I don’t know about that. First, I’m automatically suspicious of arguments which go “General population should be more like me!” and, truth be told, intellectuals tend to be rather fond of such arguments.
Second, reflectiveness is like narcissism: instead of focusing on your body, you focus on your mind. I am not convinced it falls into the “the more the better” category.
Third, the suggested ways of going about it are all very handwavy and wishy-washy.
improving the quality of political discourse and generally increasing people’s rationality.
This is a tautology: “we will improve things by improving things”. Try tabooing words like “better” or “improve”. What specific, concrete, practical changes would you make to the political discourse? Why do you think these changes will turn out to have positive consequences in a hundred years?
First, I’m automatically suspicious of arguments which go “General population should be more like me!” and, truth be told, intellectuals tend to be rather fond of such arguments.

Well, that would be my prior.
Second, reflectiveness is like narcissism: instead of focusing on your body, you focus on your mind. I am not convinced it falls into the “the more the better” category.
That sounds like an argument from analogy to me. You’re not describing any causal pathway by which reflectiveness makes the world worse. You’re saying “reflectiveness looks vaguely like this other thing [which is actually totally different], and people seem to think that thing is bad, therefore reflectiveness is bad”.
What specific, concrete, practical changes would you make to the political discourse?
I’d like to hear your answers to those questions before I answer them, if you don’t mind.
Your link leads to a (rather obvious) observation that smarter-on-average populations do better, economically, than dumber-on-average populations. Though, as the PRC example shows, sociopolitical structures do matter.
However, being reflective is not at all the same thing as being smart. I view the statement “people should be more reflective”, coming from someone who’s clearly more reflective than average, as epistemically suspect. I don’t see how your link addresses this issue.
You’re not describing any causal pathway by which reflectiveness makes the world worse.
That’s not difficult. For example, an increase in reflectiveness is generally accompanied by a decrease in decisiveness. Analysis paralysis is a common problem for highly reflective people. For another example, reflectiveness leads your focus inside, to yourself, and it’s not hard to come up with situations where you should be thinking about the outside world more and about yourself less. Navel gazing is a highly reflective but rarely productive activity.
I view the statement “people should be more reflective”, coming from someone who’s clearly more reflective than average, as epistemically suspect.
This looks like an ad hominem argument to me.
That’s not difficult. For example, an increase in reflectiveness is generally accompanied by a decrease in decisiveness. Analysis paralysis is a common problem for highly reflective people. For another example, reflectiveness leads your focus inside, to yourself, and it’s not hard to come up with situations where you should be thinking about the outside world more and about yourself less. Navel gazing is a highly reflective but rarely productive activity.
Thanks for making concrete arguments. Analysis paralysis is a problem I would like to see more of on a societal level :) But maybe things will get tricky if the least reflective people end up moving first and making society’s decisions. So maybe what we want to aim for is increasing the reflectiveness of the least reflective people who hold power.
I think it makes sense to disentangle the self-focus that you mention and treat it as an orthogonal vector. I’m in favor of people reflecting about important stuff but not unimportant stuff… I hope that clarifies my goals. Insofar as there’s a tradeoff where getting people to reflect about important stuff also means they will waste time reflecting about unimportant stuff, I’m not sure how best to manage that tradeoff.
I’m not finding this conversation especially productive, so I’ll let you have the last word if you want it.

If it’s done, it’s done.