In The Age of Em, Hanson lays out how even brain emulations that are no smarter than the smartest humans could out-compete humanity very quickly.
I didn’t mean to imply that advanced computing (whether it is AI, IA or Ems) was not potentially dangerous. See my reply to Viliam for my current strategy.
If formal proof doesn’t work, that only indicates that everything is more dangerous, because you can’t use it to create safety.
Nothing is more dangerous; it is exactly as dangerous as it was. :) It is only more dangerous if formal proof doesn’t work and everyone who cares about AI safety is working on formal proofs.
You can’t prove everything you want to know about physics with formal proofs. That doesn’t mean it isn’t valuable that physicists prove theorems about abstract physical laws.
It’s also not the case that everyone working on FAI tries the same approach.
I think the physicist and the AI researcher are in different positions.
One can chop a small bit off the world and focus on that, at a certain scale or under certain conditions.
The other has to create something that can navigate the entire world and potentially come to know it in better or very different ways than we do. It is unbounded in what it can know and in how it might best be shaped.
It is this asymmetry that I think makes their jobs very different.
It’s also not the case that everyone working on FAI tries the same approach.
Thanks. I had almost written off LW, and by extension MIRI, as completely inimical to non-proof-based AI research. I’m a bit out of the loop; have you got any recommendations of people working along the lines I am thinking of?
If you look at CFAR’s theory of action, which focuses first on getting reasoning right in order to think more clearly about AI risk, that’s not a strategy based on mathematical proofs.