I think it's fair to hold me to precise predictions. I appreciate the engagement on that; it's helping me adjust to LW norms. The reason I found this interesting was a piece I read earlier about consciousness being a 'favourite' topic amongst agents on Moltbook. I can't find the source at hand, but the original 'take' was a quick speculative conjecture as to why that may be.
The idea I'm more interested in, though, is whether the term itself (consciousness) is actually an evolutionary outcome of co-operation and language, and whether it is a term that human or AI agents use to effectively establish a binding ontology they can 'co-operate under'. And whether it fills the same structural role as other terminal justifications for 'moral preferences', like theocratic ones ('God'). Though the framing I have to communicate that question may be even less precise.
If you saw the piece on LW it may be this: https://www.lesswrong.com/posts/mgjtEHeLgkhZZ3cEx/models-have-some-pretty-funny-attractor-states#I_was_curious_whether_I_can_see_this_happening_on_moltbook__
Ah—there it is.
Thank you papetoast!