Well-written, and good points. I hope and pray that we get to this point in AI alignment. However, I think it might be wise to first make very certain that AI isn't going to kill everyone before progressing to improving its cognitive affordances.
Thank you! Survival risk matters, but I'm more focused on systems that don't need to be controlled in order to behave safely. I believe most failures stem not from malice but from misalignment under stress.