This is a long shot, but I’m deeply moved by resonance and gotta shoot it:
If you have any intuition about who might have written this essay, I humbly ask you to connect me with the author. Goes without saying but:
Do not dox. DM who you think wrote it + ask for permission.
It is everything I’m working on as: 1) a Russian technologist with 2) theological commitments and literary leanings who’s 3) building a very different alignment bet with 4) the courage to treat some truths as axiomatic rather than derived and 5) pursuing safety as a data science that systematically studies and simulates the conditions that engender an awareness of those truths in our users.
We cannot think our way out of cognitive atrophy and context collapse without reconstructing relational meaning. That’s why all incumbent alignment bets are failure modes:
No one in AI safety seems willing to admit that materialistic conceptions of morality and virtue are incomplete and circular. And that feels like capitulating to superstition, BUT IT IS NOT.
I meet very few people willing to articulate this—or who can even frame the problem this way—but the author here is an exception.
And if you are the author and reading this, please get in touch. I can meet you at the level of Dostoevsky but *want* to meet you at the level of: please read our technical position paper and help me think through possible externalities my team and I are not seeing.