But AI systems like AlphaGo don’t do that. AlphaGo can play an extraordinary game of Go, yet it never recognizes that it is the one making the moves. It can predict outcomes, but it doesn’t see itself inside the picture it’s creating.
This part is outright obsolete given the rise of LLMs that have been trained to be risk-seeking or risk-averse, and that can report their own attitude toward risk.
"we'll have to design not just perception but intentionality. A sense of direction, a reason to care."
AIs do have reasons to seek out and perceive information: they need it to do things like longer-term tasks.
As for claiming that

"AI models, with trillions of parameters, don't resist anything,"

you just had to say it AFTER Anthropic's new AI, Claude Opus 4, threatened to reveal an engineer's affair to avoid being shut down.