An independent researcher, blogger, and philosopher working on intelligence and agency (especially Active Inference), alignment, ethics, the interaction of the AI transition with sociotechnical risks (epistemics, economics, human psychology), collective mind architecture, and research strategy and methodology.
Twitter: https://twitter.com/leventov. E-mail: leventov.ru@gmail.com (the preferred mode of communication). I’m open to collaborations and work.
Presentations at meetups, workshops, and conferences; some recorded videos.
I’m a founding member of the Gaia Consortium, on a mission to create a global, decentralised system for collective sense-making and decision-making, i.e., civilisational intelligence. Drop me a line if you want to learn more about it and/or join the consortium.
You can help boost my sense of accountability and show me that my work is valued by becoming a paid subscriber to my Substack (though I don’t post anything paywalled; in fact, on this blog, I just syndicate my LessWrong writing).
For Russian speakers: a Russian-language AI safety network, Telegram group.
This looks like a very good moment for the AI safety community to push for a much more deliberate approach to R&D at AGI labs. We must not squander this moment. This is not quite a “sinister stumble”; it is an event of another kind, but one with comparable positive optionality for AI safety.
By “a more deliberate approach to R&D”, I mean investigating much more deeply, from both theoretical (scientific) and interpretability standpoints, what’s going on with (self-)awareness, agency, and feeling in these networks, and publishing the results academically.