The general answer to this question is that optimizations should not destroy the ability to model yourself, since self-modeling is probably the foundational basis of what consciousness is. The good news is that this is actually somewhat convergent, thanks to the Gooder Regulator theorem, which states that, under certain conditions, an optimal regulator must use a model of the system it regulates:
https://www.lesswrong.com/posts/Dx9LoqsEh3gHNJMDk/fixing-the-good-regulator-theorem#Making_The_Notion_Of__Model__A_Lot_Less_Silly
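For context, here is a rough sketch of the original Conant–Ashby result, which the linked post refines; this is my loose paraphrase of the classic statement, not the strengthened version from the post:

```latex
\documentclass{article}
\usepackage{amsthm}
\newtheorem*{theorem}{Theorem}
\begin{document}
% Rough paraphrase of the Conant-Ashby Good Regulator theorem
% (not the strengthened "Gooder" version from the linked post).
% S = system state, R = regulator output, Z = outcome determined by (S, R).
\begin{theorem}[Good Regulator, informal]
If a regulator $R$ minimizes the entropy $H(Z)$ of the outcome $Z$ of a
system $S$, and is maximally simple among the optimal regulators, then
there exists a deterministic mapping $h\colon S \to R$; the regulator's
behaviour is a function of the system state, i.e.\ it carries a model
of $S$.
\end{theorem}
\end{document}
```

The linked post argues this notion of "model" is too weak on its own, and strengthens it to a regulator that maintains something like a posterior over system states.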
I talk more about how self-modeling can give rise to consciousness below:
https://www.lesswrong.com/posts/FQhtpHFiPacG3KrvD/seth-explains-consciousness#7ncCBPLcCwpRYdXuG
https://www.lesswrong.com/posts/TkahaFu3kb6NhZRue/quick-general-thoughts-on-suffering-and-consciousness#FaMEMcpa6mXTybarG
https://www.lesswrong.com/posts/TkahaFu3kb6NhZRue/quick-general-thoughts-on-suffering-and-consciousness#WEmbycP2ppDjuHAH2
In essence, I’m very close to the AST/GNW/GWT family of theories, as well as Anil Seth’s more general framework; I’ll link AST below:
https://www.lesswrong.com/posts/biKchmLrkatdBbiH8/book-review-rethinking-consciousness
https://www.lesswrong.com/posts/NMwGKTBZ9sTM4Morx/linkpost-a-conceptual-framework-for-consciousness