Sorry for butting in, but I can’t help but notice that I agree with both you and So8res, yet I think you aren’t arguing about the same thing.
You seem to be talking about the dimension of “confidence,” “obviousness,” etc. and arguing that most proponents of AI concern seem to have enough of it, and shouldn’t increase it too much.
So8res seems to be talking about another dimension which is harder to name. “Frank futuristicness” maybe? Though not really.
If you adjust your “frank futuristicness” to an absurdly high setting, you’ll sound a bit crazy. You’ll tell lawmakers “I’m not confident, and this isn’t obvious, but I think that unless we pause AI right now, we risk a 50% chance of building a misaligned superintelligence. It might use nanobots to convert all the matter in the universe into paperclips, and the stars and galaxies will fade one by one.”
But if you adjust your “frank futuristicness” to an absurdly low setting, you’ll end up being ineffective. You’ll say “I am confident, and this is obvious: we should regulate AI companies more because they are less regulated than other companies. For example, companies which research vaccines have to jump through so many clinical trial hoops, yet AI models are just as untested as vaccines and, really, they have just as much potential to harm people. And on top of that, we can’t even prove that humanity is safe from AI, so we should be careful. I don’t want to give an example of how exactly humanity isn’t safe from AI because it might sound like sci-fi, so I’ll only talk about it abstractly. My point is, we should follow the precautionary principle and go slow because it doesn’t hurt.”
There is an optimum level of “frank futuristicness” between the absurdly high setting and the absurdly low setting, and most people are far below it.
Ok; what do you think of Soares’s claim that SB-1047 should have been made stronger and that the connections to existential risk should have been drawn more clearly? That seems probably false to me.
I think that, if it were to go ahead at all, it should have been made stronger and clearer. But that wouldn’t have been politically feasible, so if that were the standard being aimed for, it wouldn’t have gone ahead.
And I think that would have been better than the outcome that actually happened.
To be honest, I don’t know.
All I know is that a lot of organizations seem shy about discussing AI takeover risk, and the endorsements the book got surprised me with how receptive government officials turned out to be (considering how little cherry-picking was done).
My very uneducated guess is that Newsom vetoed the bill because he was more of a consequentialist/Longtermist than the cause-driven lawmakers who passed it, so one could argue the failure mode was a “lack of appeal to consequentialist interests”: it was passed by cause-driven lawmakers by a wide margin, but got blocked by the consequentialist. But the cause-driven vs. consequentialist motives are pure speculation; I know nothing about these people aside from Newsom’s explanation...
From my perspective, FWIW, the endorsements we got would have been surprising even if they had been maximally cherry-picked. You usually just can’t find cherries like those.