I found the claim that “Experts gave these methods a 40 percent chance of eventually enabling uploading...” very surprising, as I thought there were still some major issues with the preservation process, so I had a quick look at the study you linked.
From the study:
For questions about the implications of static brain preservation for memory storage, we used aldehyde-stabilized cryopreservation (ASC) of a laboratory animal as a practical example of a preservation method that is thought to maintain ultrastructure with minimal distortions across the entire brain [24]. Additionally, we asked participants to imagine it was performed under ideal conditions and was technically successful, deliberately discarding the fact that procedural variation or errors in the real world may prevent this ideal from being routinely realised in practice [25]. Rather than focusing on these technical preservation challenges, which we acknowledge are immense, we deliberately asked participants to consider memory extraction under optimal preservation conditions to assess their beliefs about the structural basis of memory storage itself. With this approach, our aim was to specifically target participants’ views on whether static brain structures – i.e., non-dynamic physical aspects of the brain that persist independent of ongoing neural activity – may on their own contain sufficient information for memory retrieval, which is the central theoretical question underlying our study.
I realise this is a work of fiction, but I think it’s important to note that the neuroscientists were asked quite a specific question: one that assumed the preservation stage was flawless, and asked them to speculate about the potential for future memory retrieval from these perfectly preserved brains, rather than about whole brain emulation/uploading.
Your token system (and general approach) sounds a lot like Alpha School—is it influenced by them at all?