I agree that ranking the weights from 1 to N is idiotic because it doesn’t respect the relative importance of each characteristic. However, changing the ratings to run from 101-110 on every scale will just add a constant to each option’s total:
Option A, strength 103, mass 106, total score 2(103) + 106 = 312
Option B, strength 105, mass 103, total score 2(105) + 103 = 313
(I changed ‘weight’ to ‘mass’ to avoid confusion with the other meaning of ‘weight’.)
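To see the cancellation concretely, here is a quick sketch using the weights and ratings from the example above:

```python
# Weighted-sum score: shifting every rating by the same constant
# adds the same amount to every option, so the ranking is unchanged.

weights = {"strength": 2, "mass": 1}

def score(ratings, weights):
    return sum(weights[k] * ratings[k] for k in weights)

# Ratings on a 101-110 scale...
a = {"strength": 103, "mass": 106}
b = {"strength": 105, "mass": 103}

# ...and the same ratings shifted down to a 1-10 scale.
shift = 100
a0 = {k: v - shift for k, v in a.items()}
b0 = {k: v - shift for k, v in b.items()}

print(score(a, weights), score(b, weights))    # 312 313
print(score(a0, weights), score(b0, weights))  # 12 13
# The gap between the options is 1 either way: the offset
# contributes shift * sum(weights) = 300 to both totals.
```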
Using something approximating a real-valued rating (a rating from 1-10) instead of rank indices reduces the problem to mere nonlinearity.
I assume you mean using values for the weights that correspond to importance, which isn’t necessarily 1-10. For instance, if strength is 100 times more important than mass, we’d need to have weights of 100 and 1.
You’re right that this assumes that the final quality is a linear function of the component attributes: we could have a situation where strength becomes less important when mass passes a certain threshold, for instance. But using a linear approximation is often a good first step at the very least.
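As an illustration of the kind of nonlinearity described above, here is a sketch of a quality function where strength counts for less once mass passes a threshold; the threshold and the discount factor are invented for the example:

```python
# Hypothetical nonlinear quality function: strength is discounted
# once mass exceeds a threshold. All numbers are illustrative only.

MASS_THRESHOLD = 5.0
STRENGTH_DISCOUNT = 0.5  # strength weight is halved past the threshold

def quality(strength, mass, w_strength=2.0, w_mass=1.0):
    if mass > MASS_THRESHOLD:
        w_strength *= STRENGTH_DISCOUNT
    return w_strength * strength + w_mass * mass

print(quality(3, 6))  # mass over threshold: 0.5*2*3 + 6 = 9.0
print(quality(5, 3))  # under threshold:     2*5 + 3   = 13.0
```

A linear model would score the first option as 2(3) + 6 = 12; the threshold rule knocks it down to 9, which is exactly the kind of structure a linear first approximation ignores.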
Option A, strength 103, mass 106, total score 2(103) + 106 = 312
Option B, strength 105, mass 103, total score 2(105) + 103 = 313
Oops, I’ll have to look at that more closely. I think you are right: the shared offset cancels out when comparing options.
I assume you mean using values for the weights that correspond to importance, which isn’t necessarily 1-10. For instance, if strength is 100 times more important than mass, we’d need to have weights of 100 and 1.
Using 100 and 1 for something that is 100 times more important is correct, assuming you are able to estimate the weights (a clean 100x is awfully suspicious). The idiot procedures were using rank indices, not real-valued weights.
But using a linear approximation is often a good first step at the very least.
Agreed; linearity is a valid assumption.
The error is using uncalibrated ratings from 0-10, or worse, rank indices. Linearly valued ratings from 0-10 have the potential to carry the information properly, but that does not mean people can produce calibrated estimates there.
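To make the rank-index failure concrete, here is a sketch with invented numbers where collapsing real-valued ratings to rank indices flips the winner, because ranks discard the margins between options:

```python
# Rank indices discard magnitude. With three options and two attributes,
# the winner under true ratings can lose once ratings are replaced by
# per-attribute ranks. Numbers are invented for illustration.

weights = {"strength": 2, "mass": 1}
options = {
    "A": {"strength": 9.0, "mass": 1.0},
    "B": {"strength": 8.9, "mass": 9.0},
    "C": {"strength": 1.0, "mass": 0.5},
}

def score(ratings):
    return sum(weights[k] * ratings[k] for k in weights)

def rank_scores(options):
    # Replace each attribute's ratings with rank indices 1..N
    # (1 = worst, N = best), then apply the same weights.
    ranked = {name: {} for name in options}
    for attr in weights:
        order = sorted(options, key=lambda n: options[n][attr])
        for i, name in enumerate(order, start=1):
            ranked[name][attr] = i
    return {name: score(r) for name, r in ranked.items()}

true_scores = {name: score(r) for name, r in options.items()}
print(true_scores)         # A: 19.0, B: 26.8, C: 2.5 -> B wins
print(rank_scores(options))  # A: 8, B: 7, C: 3 -> A wins under ranks
```

B’s large advantage on mass and tiny deficit on strength both collapse to a one-step rank difference, which is exactly the information a rank index cannot carry.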
Remember that whenever you want a * for multiplying numbers together, you need to write \*.