I think this is a worse question now? Like, I expect OpenAI leadership explicitly thinks of themselves as increasing x-risk a bit by choosing to attempt to speed up progress to AGI.
On net they expect it's probably the right call, but they also probably would say "Yes, our actions are intentionally increasing the chances of x-risk in some worlds, but on net we think it's improving things". And then, supposing they're wrong, and those worlds are the actual world, then they're intentionally increasing x-risk. And now the question tells me to ignore that possibility.
The initial question, which made no mention of intention, seemed better to me.
Like, I expect OpenAI leadership explicitly thinks of themselves as increasing x-risk a bit by choosing to attempt to speed up progress to AGI.
Do you think that they think they are increasing x-risk in expectation (where the expectation is according to their beliefs)? I’d find that extremely surprising (unless their reasoning is something like “yes, we raise it from 1 in a trillion to 2 in a trillion, this doesn’t matter”).
Hmm, my perspective is that in the example you describe, OpenAI isn't intentionally increasing the risks, in that they think it improves things overall. My line for "intentionally increasing x-risk" would be literally deciding to act while thinking/knowing that your actions are making things worse in general for x-risk, which doesn't sound like your example.
See my reply downthread, responding to where you asked Oli for an example.