Thanks, I hope you’re right about IA (vs pure AI). I think it’s quite possible that won’t be the case, however: the more autonomous a system is and the more significant its decisions, the more valuable it will be. So there will be a large financial incentive for an increasing share of important decisions to be made in-silico. Also, the more autonomous a system is, the smaller the part we play in it by definition, and therefore the less it will be an extension of us. This is especially true since the size of the in-silico portion is not physically limited to a human’s cranial volume :). So the proportion of decision-making done by AI vs. humans is unbounded. Alignment may or may not result from IA; it’s hard to tell. That’s why I think we should deliberately build alignment mechanisms in-silico ahead of time, and seek to achieve something akin to C.E.V. at small scales now.
If AGI emerges from automation, how can we build alignment into that?
-