Have enough people at MetaMed been influenced sufficiently by (meatspace) LessWrong/think ‘similarly enough’ to LW rationality that we should precommit to updating by prespecified amounts on the effectiveness of LW rationality in response to its successes and failures?
At first glance, I’m not sure humans can update by prespecified amounts, much less prespecified amounts of the right quantity in this case: something like >95% of all startups fail for various reasons, so even if LW-think could double the standard odds (let’s not dicker around with merely increasing effectiveness by 50% or something, let’s go all the way to +100%!), you’re trying to see the difference between… a 5% success rate and a 10% success rate. One observation just isn’t going to count for much here.
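To make the "one observation isn’t much" point concrete, here is a minimal sketch of the Bayes factors involved, assuming the hypothetical rates above (5% baseline success, 10% if LW-think doubles it):

```python
from math import log2

# Hypothetical rates from the argument above, not measured values
p_success_baseline = 0.05   # ~95% of startups fail
p_success_lw = 0.10         # doubled success rate under the LW hypothesis

# Likelihood ratios (Bayes factors) for each possible single observation
lr_if_succeeds = p_success_lw / p_success_baseline            # 2.0
lr_if_fails = (1 - p_success_lw) / (1 - p_success_baseline)   # ~0.947

print(f"Success: {log2(lr_if_succeeds):+.2f} bits of evidence")  # +1.00
print(f"Failure: {log2(lr_if_fails):+.2f} bits of evidence")     # -0.08
```

So under these assumed numbers, a success is worth one bit in favor of the LW hypothesis, while a failure is worth less than a tenth of a bit against it: the asymmetry of base rates means a single failure tells you almost nothing either way.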
Interesting question! Since it’s an especially interesting question for those not fully in the in-crowd I thought it might be worth rephrasing in less technical language:
Is MetaMed composed of LessWrong folks or significantly influenced by LessWrong folks, or that style of thinking? If so, this sounds like a great test of the real-world efficacy of LessWrong ideas. In other words, if MetaMed succeeds that’s some powerful evidence that this rationality shit works! (And to be intellectually honest we have to also precommit to admitting that—should MetaMed fail—it’s evidence that it doesn’t.)
PS: Since Michael Vassar is involved it’s safe to say the answer to the first part is yes!
Definitely, though others must decide the update size.
But, either way, not much evidence at all.