These are six sample titles I'm considering. Do any thoughts come to mind?
AI-like reward functioning in humans (a comprehensive model)
Agency in humans
Agency in humans | a comprehensive model of why humans do what they do
EA should focus less on AI alignment, more on human alignment
EA’s AI focus will be the end of us all.
EA’s AI alignment focus will be the end of us all. We should focus on human alignment instead.