I agree, it still wouldn't be strong evidence for or against. No offence to any present or future sentient machines out there, but self-honesty isn't yet clearly defined for AIs.
My personal feeling is that LSTMs and transformers that attend to their own past states would, by definition, have an explicit form of self-awareness: part of each step of computation consumes a representation of the system's own prior computation. I then think this bears ethical significance in proportion to something like the compression ratio the system achieves over its inputs.
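To make "attention on past states" concrete, here is a minimal sketch in plain NumPy of a single attention read over a buffer of a model's own previous hidden states (untrained random weights; all names and sizes are illustrative, not taken from any particular architecture). The point is just that the computation explicitly consumes a representation of its own history:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative sizes only.
d, T = 8, 5                             # hidden size, number of retained past states
rng = np.random.default_rng(0)

past_states = rng.normal(size=(T, d))   # the system's own earlier hidden states
current = rng.normal(size=(d,))         # its current hidden state

# Untrained random projections; a real model would learn these.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

q = current @ Wq                        # query from the current state
K = past_states @ Wk                    # keys from the system's own history
V = past_states @ Wv                    # values from the same history

weights = softmax(q @ K.T / np.sqrt(d)) # how strongly it attends to each past state
summary = weights @ V                   # a read-out of its own past, conditioned on now
```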
As a side note, I enjoy Iain M. Banks's depiction of how AIs might one day communicate emotions alongside language: by shifting colour across a rich field of hues. This makes no direct analogy to our emotions, and in that sense frames the problem more clearly as, in effect, a clustering of internal states.
I've given a more thorough background to this idea in a presentation here https://docs.google.com/presentation/d/1VLUdV8ZFvS_GJdfQC-k7-kMhUrF0kzvm6y-HLEaHoCU and I am trying to work it through more rigorously. The essential point is to treat mutualistic agency as a desirable, perhaps even critical, feature of systems that could be considered 'friendly' to humans, and self-determination as an important form of agency that lends itself to mathematical analysis via conditional transfer entropy. This is very much an early-stage analysis. What I do think is that our capacity to affect the world is growing much faster than the precision with which we understand what we want. In some sense it's necessary to characterise very precisely the things that are already obviously important to all humans; otherwise, in our industrial exuberance, it seems quite likely we will engineer worlds that literally no-one wants.
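For readers who haven't met it, conditional transfer entropy has a standard definition (the notation below is the textbook one, not specific to my analysis, and I'm using history lengths of one step for simplicity):

$$
T_{X \to Y \mid Z} = I\left(Y_{t+1};\, X_t \mid Y_t,\, Z_t\right)
= \sum_{y_{t+1},\, y_t,\, x_t,\, z_t} p(y_{t+1}, y_t, x_t, z_t)\,
\log \frac{p(y_{t+1} \mid y_t, x_t, z_t)}{p(y_{t+1} \mid y_t, z_t)}
$$

i.e. how much X's present state improves prediction of Y's next state beyond what Y's own past and the conditioning process Z already provide. That asymmetry is what gives a handle on self-determination: it lets one ask how much of a system's future is driven by its own states rather than by its environment, with other influences conditioned out.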