By 2028, will I think MIRI has been net-good for the world?
Resolves according to my subjective judgement, but I’ll take opinions of those I respect at the time into account. As of market creation, people whose opinions I value highly include Eliezer Yudkowsky and Scott Alexander.
As of market creation, I believe AI safety is important: making progress on it is good, and making progress on AI capabilities is bad. If I change my mind by 2028, I’ll resolve according to my beliefs at the time.
I will take into account MIRI’s outputs (e.g. papers, blog posts, people who’ve trained there) but also its inputs (e.g. money and time). I consider counterfactual arguments valid, like “okay, MIRI did X, but maybe someone else would have done X anyway”; but currently I think those considerations tend to be weak and hard to evaluate.
If I’m unconfident I may resolve the market PROB.
If MIRI rebrands, the question will pass to them. If MIRI stops existing I’ll leave the market open.
I don’t currently intend to bet on this market until at least a week has passed, and I intend to stop betting in 2027.
Resolution criteria subject to change; my current plan is to figure out what I’m doing with this market and then make similar ones for other orgs. Feel free to ask about edge cases. Feel free to ask for details about my opinions. If you think markets like this are a bad idea feel free to convince me to delete it.
(Sharing here because I’m interested in more eyes on the market and also in ways to make it better.)
https://manifold.markets/PhilipHazelden/by-2028-will-i-think-miri-has-been
Oh, I guess I can embed the market even. Let’s try it:

<iframe src="https://manifold.markets/embed/PhilipHazelden/by-2028-will-i-think-miri-has-been" title="By 2028, will I think MIRI has been net-good for the world?" frameborder="0"></iframe>

According to this it should have just worked when I included the link? idk
It wasn’t working for me either; it worked after switching from “Markdown” to “LessWrong Docs”.