Does anybody here have any strong reason to believe that the ML research community norm of “not taking AGI discussion seriously” stems from a different place than the oil industry’s norm of “not taking carbon dioxide emission discussion seriously”?
I’m genuinely split. I can think of one or two other reasons there’d be a consensus position of dismissiveness (preventing bikeshedding, for example), but at this point I’m not sure, and it affects how I talk to ML researchers.
“It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”
Upton Sinclair
I’m not sure the “ML Research Community” is cohesive enough (nor, in fact, well-defined enough) to have very strong norms about this. Further, it’s not clear that there needs to be a “consensus reasoning” even if there is a norm—different members could have different reasons for not bringing it up, and once it’s established, it can be self-propagating: people don’t bring it up because their peers don’t bring it up.
I think if you’re looking for ways to talk to ML researchers, start small: see what those particular researchers think and how they react to different approaches. If you find approaches that work, scale them up to talks with groups of researchers.
I don’t expect AI researchers to achieve AGI before they find one or more horrible uses for non-general AI tools, which may divert resources, or change priorities, or do something else which prevents true AGI from ever being developed.
Because of its low chance of either existential risk or a singularity utopia. Here’s the thing: technologies are adopted first at a low level by early adopters, then they become cheaper and better, then they more or less become very popular. No technology has ever shown the asymptotic growth or singularity that ML/AI advocates claim will happen (see the sketch after this comment), so we should be very skeptical about any such claims of existential risk.
On climate change, we both know that it will be serious and that it is not an existential risk or a civilization-collapse-level disaster.
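To make the adoption-curve contrast concrete, here is a minimal sketch (my framing; K, r, t_0, C, and t_f are illustrative parameters, not anything from the comment) of the two growth patterns being compared: ordinary technology adoption follows a logistic curve that saturates, whereas a "singularity" claim amounts to hyperbolic growth that diverges at a finite time.

$$N(t) = \frac{K}{1 + e^{-r(t - t_0)}} \quad\longrightarrow\quad N(t) \to K \ \text{as}\ t \to \infty \quad \text{(logistic adoption)}$$

$$x(t) = \frac{C}{t_f - t} \quad\longrightarrow\quad x(t) \to \infty \ \text{as}\ t \to t_f^{-} \quad \text{(hyperbolic "singularity" growth)}$$

The logistic curve looks roughly exponential early on and then flattens; the claim above is that historical technologies have followed the first pattern, never the second.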
I think the best way to look at it is like climate change well before it went mainstream.