You don’t save the world by doing inward-looking stuff at a monastery
My model of MAPLE is that this is exactly the crux. Their theory of impact rests on developing a potent enough ideology (a “meme”) that then spreads among powerful actors in the real world and shapes their actions.
Currently they are in the design phase and are not looking to scale, but eventually this “design a trustworthy religion” will become “offer the religion to cyborgs around the world”. Cyborgs are humans who closely integrate AI and other technology into their workflows, building a sort of exocortex.
If we don’t live in the FOOM world, then imagine how gradual disempowerment will play out. We will see growing inequality in competence, productivity, and capability as these models get more powerful. These cyborgs will also be even more isolated, confused, and lonely, and that suffering will make them keen to explore alternative religions to make sense of the world and to decide which motivations are meaningful and moral to pursue.
It is these cyborgs who will first see consensus reality break down, and traditional frameworks of meaning-making will fail to help them orient properly to the world. We are already seeing this with chatbots that provide “intimacy”, making us feel more seen than our friends do. When algorithms can optimise our quantified selves better than our own intuition can, won’t more people abdicate their power to these systems?
The future of our species will depend on how empathetic, kind, compassionate, and wise these cyborgs are. So I believe MAPLE is trying to design a lens for seeing the world: a perspective that resonates with them, that is true, and that ensures we continue to believe truth is more than predictive power or the ability to manipulate and control the external world; it is also about living in harmony with other sentient beings.
They have an online course here which is likely more accurate than my interpretation. But during my stay with them as an AI fellow I got to interact with the students there and ask why they believed the key was to cultivate the mind, and in what sense mind was chief.
Religions and ideologies are powerful because they form the frameworks by which we coordinate at scale. The current AI alignment problem is not just a technical problem but a socio-technical one: we are dealing with actors taking selfish actions, falling into race dynamics, and feeling helpless within the incentive structures around us.
In this video Soryu talks about how impactful religions have been in determining the flow of history. Towards the end he emphasises that people who deem themselves secular are in fact religious themselves: we have humanistic religions that hold human connection to be sacred and fundamental to meaning, and scientific religions that hold reductionist, materialistic perspectives as axiomatic truth.
These systems make factual claims about what is true about the world and then act innocent when normative claims (Ought) emerge (interdependently) as a consequence of believing those factual claims. This relates to what Herschel was saying about the culture being confused. If we believe the basic ontology of reality is matter, then concepts like love, joy, and friendship get reduced to transactional details; we can have allies we cooperate with, but we stop believing in anything valuable that cannot be reified. We seem to confuse the map with the territory and believe the representation can fully capture the referent.
I find all this very interesting and uncorrelated with other approaches to the alignment problem. I’d love to see more empirical, falsifiable tests designed to investigate the moral-realism-adjacent (maybe?) implications of the claim that there is no Hume’s gap.