A lot of predictions about AI psychology are premised on the AI being some form of deep-learning system. From what I can see, deep learning requires geometrically (i.e., exponentially) growing computing power for linear gains in capability, and thus, practically speaking, cannot scale to sentience.
For a more expert, in-depth take, see: https://arxiv.org/pdf/2007.05558.pdf
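The scaling claim above can be sketched numerically. This is a toy illustration, not taken from the linked paper: it just assumes each unit of capability gain multiplies the compute requirement by a constant factor `r`, so compute grows geometrically while capability grows linearly.

```python
def compute_required(gain, base=1.0, r=10.0):
    """Compute needed for a given capability gain, under the assumed
    geometric scaling law: each +1 of capability costs r times more compute."""
    return base * (r ** gain)

# Linear gains in capability demand exponentially more compute:
for gain in range(5):
    print(gain, compute_required(gain))
# gain 0 -> 1.0, gain 1 -> 10.0, ..., gain 4 -> 10000.0
```

Under this assumption, going from capability 0 to 4 already requires 10,000x the baseline compute, which is the practical scaling worry in a nutshell.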
Why do people think deep learning algorithms can scale to sentience without unreasonable amounts of computational power?
Counterprediction: The Ukrainian government will fold without a (significant) fight.