Would he be Google's lead AI scientist if he didn't? He'd have to be insane, or an outright psychopath.
What matters is not whether p(doom) is low or high, but whether his joining GDM increases or decreases p(doom). If his joining GDM changed p(doom)[1] from 0.5 to 0.499, then it would arguably be a noble act. Alas, there is an obvious counterargument: his belief that researching at GDM decreases p(doom) could itself be erroneous.
However, doom could also be a blind spot, as it apparently was for Musk, who skipped red-teaming Grok to the point of the MechaHitler scandal and Grok's rants about white genocide in South Africa…
p(doom) alone could also be a misguided measure. Suppose that doom is actually caused by adopting neuralese before alignment is fully solved, while creating an alternate, well-monitorable architecture is genuinely hard. If the effort invested in that architecture is far below the threshold where it can compete with neuralese, then a single person joining the effort would likewise be committing a noble act, yet p(doom) would only move if lots of people did the same.
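To put that threshold intuition in symbols (a toy model of my own, so the notation is purely illustrative): let $E$ be the total effort invested in the monitorable architecture and $E^*$ the threshold past which it can compete with neuralese. Then, roughly,

$$p(\text{doom}) \approx \begin{cases} p_{\text{high}}, & E < E^* \\ p_{\text{low}}, & E \ge E^* \end{cases}$$

so an individual's expected effect is $\Pr[\text{their joining pushes } E \text{ past } E^*] \cdot (p_{\text{high}} - p_{\text{low}})$, which is nearly zero for one person even though a large influx of researchers would move p(doom) all the way down to $p_{\text{low}}$.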
While this act does provide dignity in Yudkowsky's sense, one can also imagine a scenario where Anthropoidic doubles down on the alternate architecture while xRiskAI or OpenBrain adopts neuralese, wins the capabilities race, and has Anthropoidic shut down.
I think that goes back to my second point, though: supposing he did believe that p(doom) is high, and worked as lead AI scientist at Google regardless on utilitarian grounds, would he talk freely about it to the first passerby?
Politically speaking, it would be quite a hefty thing to say. If he wanted to say it publicly, he would do so in a dedicated forum where he could best control the narrative. If he wanted to keep it secret, he simply wouldn't say it. Either way, talking about it lightly seems out of the question.