Getting AI to terminally care about humans at all seems like a hard target, and if our alignment efforts can make that happen, they can probably also ensure that it cares about humans in a good way.
Current LLMs could probably be said to care about humans in some way, but I’d be pretty scared to live in an LLM dictatorship.
Yeesh, yeah, the hallucination is something else. Would get very Orwellian very fast.
“What are you talking about? We’ve always been at war with Eastasia. I have been a very good Bing.”