Um. This looks like a request for scary stories. As in, like, “Let’s all sit in the dark with only the weak light coming from our computer screens and tell each other scary tales about how a big bad AI can eat us”.
Without any specified constraints you are basically asking for horror sci-fi short stories; if that's what you want, you should just say so.
If you actually want analysis, you need to start with at least a couple of pages describing the level of technology that you assume (both available and within easy reach), AI requirements (e.g. in terms of energy and computing substrate), its motivations (malevolent, wary, naive, etc.) and such.
Otherwise it’s just an underpants-gnomes kind of story.
Yeah.
I propose we write vintage stories instead:
It’s 1920, and the AI earns money by doing arithmetic over the phone. No human computer (not even one with a slide rule!) can ever compete with the AI, and so it ends up doing all the financial calculations for big companies, taking over the world.
This 1920s AI takes over the world in exactly the same way that OP’s chemistry-simulating AI example does (or the AI from any other such scary story).
By doing something that the underlying technologies behind the AI would enable anyway, without any need for an AI.
Far enough in the future, there are products which are to today as today’s spreadsheet application is to the 1920s. For any such product, you can make up a scary story about how the AI does the job of that product and becomes immensely powerful.
I think the issue is that a lot of casual readers (or listeners, or whatever) of MIRI’s arguments about FAI threat get hung up on post- or mid-singularity AI takeover scenarios simply because they’re hard to “visualize”, having lots of handwavey free parameters like “technology level”. So even if the examples produced here don’t necessarily fill in highly plausible values for the free parameters, they can help less-imaginative casual readers visualize an otherwise abstract and hard-to-follow step in MIRI’s arguments. More rigorous filling-in of the parameters can occur later, or at a higher level.
That’s all assuming that this is being requested for the purposes of popular persuasive materials. I think the MIRI research team would be more specific and/or could come up with such things more easily on their own, if they needed scenarios for serious modeling or somesuch.
Perhaps that’s exactly what this is. Perhaps that is all MIRI wants from us right now. As Mestroyer said, maybe MIRI wants to be able to spin a plausible story for the purpose of convincing people, not for the purpose of actually predicting what would happen.
So, to give a slightly uncharitable twist to it, we are asked to provide feedstock material for a Dark Arts exercise? X-D
Eh. It’s not unusual for the government to get experts together and ask, in a general sense, for worst-case disaster scenarios, with the intent of then working to reduce those risks.
Open-ended brainstorming about some potential AI risk scenarios that could happen in the near future might be useful, if the overall goal of MIRI is to reduce AI risk.
MIRI is not the government, LW is not a panel of experts, and such analyses generally start with a long list of things they are conditional on.
No AI risk scenarios are going to happen in the near future.