I think playing around with ideas like this in detail is underrated. Note that I’m only criticizing the strong version because I like the idea overall. Both ‘study’ and your named variables here are serving as aether variables: if you have a flexible enough representation, then you can use it to represent anything; unfortunately, you’ve also gutted it of predictive power (post hoc explanation rather than prediction).
Secondly, and more constructively: I’m reminded of Donella Meadows’ Leverage Points.
If you have a flexible enough representation, then you can use it to represent anything; unfortunately, you’ve also gutted it of predictive power (post hoc explanation rather than prediction).
I think this can be wrong:
“Y” and “D” are not empty symbols; they come with an objective enough metric (the metric of “general importance”). So it’s like saying that “A” and “B” in Bayes’ theorem are empty symbols without predictive power. And I believe the analogy with Bayes’ theorem is not accidental, by the way, because I think you could turn my idea into a probabilistic inference rule.
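To make the Bayes’ theorem analogy concrete, here is one hypothetical way such an inference rule could be sketched (the variable names and all numbers below are my own illustrative assumptions, not anything specified above): treat “this idea is generally important” as the hypothesis and an observed property of D as the evidence, then update with Bayes’ theorem.

```python
# Hypothetical sketch of "importance" as a Bayesian update.
# All probabilities are made-up placeholders for illustration only.

def posterior_importance(prior, p_evidence_if_important, p_evidence_if_unimportant):
    """P(important | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_important * prior
    denominator = numerator + p_evidence_if_unimportant * (1 - prior)
    return numerator / denominator

# Assumed prior: 10% of candidate ideas are "generally important".
# Assumed evidence: the idea connects to a domain D already known to be
# important; suppose this holds for 60% of important ideas but only 20%
# of unimportant ones.
p = posterior_importance(0.10, 0.60, 0.20)
print(round(p, 3))  # 0.25
```

The point is only that “general importance” can serve as a measurable quantity that gets updated by evidence, rather than an empty symbol.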
Even if my method can’t help predict good ideas, it can still have predictive power if it evaluates good ideas correctly (before they become universally recognized as good). Not every important idea is immediately recognized as important.
Can you expand on the connection with Leverage Points? The 12 Leverage Points seem like an extremely specific and complicated idea (which doesn’t mean it can’t be good in its own field).
I see the 12 points as possible trailheads for analyzing D when a person is new to this type of analysis and needs examples to chain off of.