Maybe it’s worth mentioning here that Carl’s p(doom) is only ~20%[1], compared to my ~80%. (I can’t find a figure for @cousin_it, but I’m guessing it’s closer to mine than Carl’s. BTW, every AI I asked either hallucinated a figure or found a quote online from someone else and attributed it to him.) It seems intuitive to me that at a higher p(doom), the voices in one’s moral parliament saying “let’s not touch this” become a lot louder.
“Depending on the day I might say one in four or one in five that we get an AI takeover that seizes control of the future, makes a much worse world than we otherwise would have had and with a big chance that we’re all killed in the process.”
I notice that this quote only covers “AI takeover,” whereas my “doom” includes a bunch of other scenarios; but if Carl were significantly worried about those other scenarios, he presumably would have given a higher overall p(doom) in this interview or elsewhere.