Strong upvote, even though I think you’re wrong about some important claims here, because you’re being detailed enough for me to reply to.
… which I will do properly (ie, with citations into your post) tomorrow, if it still seems useful to be more specific. But the gist of what I’ll defend in more detail if needed is: while it’s quite possible that the predictions function as OpenAI propaganda, that’s separate from whether they do so because they are valuable. If someone had come up with these predictions in a box, isolated from OpenAI, they’d have similar effects. So the question splits into the upstream causality of why someone would say these things (credit and blame assignment), versus the downstream causality of what these things will do (and what to do about it now). The upstream causality seems like a distraction, except inasmuch as it’s relevant to downstream causality (eg, because properly assigned credit or blame might change the landscape of the present).

IMO the main concern here is that these predictions, which were already being made by many people around the tech but not so specifically or with such careful argumentation, seem to be somehow being used by OpenAI to further their purposes. If that’s because the predictions turn out correct, that maybe seems worse than if they were wrong, because they’re pretty scary predictions. But either way, it’s not good news that, in my view, there doesn’t seem to be such a thing as bad publicity for AGI, and I still don’t know for sure why that’s happening. And that seems like where most of the value is in figuring out this discussion, to me, at least.

That said, the view you initially appeared to be writing down, that the predictions themselves are functioning as a propaganda piece in an upstream-causality-intent sort of way, does seem to be a common one, so having a good and solid debate about it where we try to figure out and confirm the who-did-what-why a bit might well be worth the attention.
in my view, there doesn’t seem to be such a thing as bad publicity for AGI, and I still don’t know for sure why that’s happening. And that seems like where most of the value is in figuring out this discussion, to me, at least.
It’s an incentive problem.
There is no way to discuss something being dangerous without also making it look valuable. People are incentivized to seek out value; our entire economy is built on that incentive. It works beautifully at producing value, but it is terrible at mitigating externalities. We only dial back dangerous or bad activities after the disaster; so long as an activity is profitable, rational economic actors pursue it as far as they are permitted to, because they alone capture the profit while most of the risk falls on other people.
In my view Yudkowsky’s body of work has had two main effects, which run in opposite directions:
1. Convincing many people that AI is extremely valuable, which is a large part of why we currently are where we are.
2. Convincing many people that AI is dangerous, which shows no signs of paying off yet but which may be crucially important at some future juncture. I am willing to pronounce it a complete failure at actually causing any regulatory regime whatsoever to come into existence thus far.