I’d ask whether things typically are aligned or not. There’s a good argument that many systems are not: ecosystems, societies, companies, families, and so on often contain very unaligned agents. AI alignment, as you pointed out, is a higher-stakes game.
Just out of interest, how exactly would you ask that question?
Certainly. This is a big issue in our time. Something needs to be done or things may really go off the rails.
Indeed. Is there anything that can be done?
It is a very high-stakes game. How might we proceed?