When Eliezer says we need to install off-switches for ASI by tracking all GPUs and being able to shut them down at any time, does he really mean only GPUs, or also other computational hardware such as ordinary CPUs?
I ask because even though today's LLMs are trained and run inference on GPUs, novel AI architectures may well achieve the same or even higher capabilities while running entirely on CPUs.
He also said on the recent Ezra Klein podcast, if I remember correctly, that it's fine for people to keep their ordinary gaming GPUs. I understand the reasoning: current consumer cards like the RTX series can't train LLMs at the scale of GPT-5. But as Steven Byrnes pointed out a while ago, a much more efficient AI architecture is probably possible, and it would most likely allow training models with GPT-5-level or higher capabilities on an ordinary RTX GPU.
As our own brains show, intelligence does not by its nature require a lot of energy: the human brain runs on roughly 20 watts.
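As a rough back-of-the-envelope comparison (the ~20 W brain figure is the commonly cited estimate; the GPU power draws below are approximate board-power numbers I'm assuming purely for illustration):

```python
# Rough back-of-the-envelope comparison of power budgets.
# All numbers are approximate / assumed for illustration only.

brain_watts = 20        # commonly cited estimate for the human brain
gaming_gpu_watts = 450  # approximate board power of a current consumer gaming GPU
datacenter_gpu_watts = 700  # approximate board power of a datacenter GPU

print(f"Gaming GPU: ~{gaming_gpu_watts / brain_watts:.0f}x the brain's power budget")
print(f"Datacenter GPU: ~{datacenter_gpu_watts / brain_watts:.0f}x the brain's power budget")

# The point: human-level intelligence already runs on a power budget far below
# that of a single consumer GPU, so raw energy use is not what bounds intelligence.
```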
If someone thinks they might have found a new potential x-risk concern, what is the appropriate responsible-disclosure path?
So how would someone in that position communicate these ideas carefully, without immediately increasing x-risk by prompting less careful people to try them out?