Hi everyone,
I’m Vladimir, 25 years old, originally from Russia and currently living in Dublin. I studied mathematics, but life took me into product management in IT, where I work today.
I’ve been loosely aware of rationality for years, but something shifted for me after 2023. The rapid progress in AI chatbots made the need for clear thinking feel much more immediate and personal. Since then, I’ve been slowly but deliberately trying to get better at reasoning, noticing biases, and making sense of the world in a more structured way.
As part of that, I recently started working on a small passion project: a non-profit website that teaches people about cognitive biases in an interactive way. It’s still in its early stages, and I’m figuring a lot out as I go, but I’d love to hear any thoughts if you take a look (I hope it’s okay to share the link here; please let me know if it’s not).
I’m excited to be here. LessWrong feels like one of the rare places on the internet where people are open-minded and genuinely seek truth and knowledge. I also hope to join in some of the AI discussions: I find myself both fascinated by where things are going and deeply uncertain about how to navigate it all.
Thanks for reading, and I’m looking forward to learning from all of you.
- Vladimir
https://www.cognitivebiaslab.com
This is a very neatly executed and polished resource. I’m a little leery of the premise: the real world doesn’t announce “this is a Sunk Cost Fallacy problem” before putting you in a Sunk Cost Fallacy situation, and the “learn to identify biases” approach has been tried by plenty of others (CFAR and https://yourbias.is/ are the ones that immediately jump to mind). But given what you set out to do, I think you’ve done it about as well as it could plausibly be done, especially w.r.t. actually getting people to relive the canonical experiments. Strong-upvoted.