Assorted followup thoughts:
There are nonzero transaction costs to specifying your price in the first place.
This is probably too complicated to explain to the general population.
In practice the survey-giver doesn’t have an unbounded bankroll, so they’ll have to cap payouts at some value and give up on survey-takers who quote prices above the cap. I think it’s fine if they set this cap dynamically based on how much they’ve had to spend so far?
You can tweak the mapping from stated price to payment amount and selection probability here. For example, you can collect data with probability proportional to $Y^{-k}$ for $k > 1$ and pay $kY/(k-1)$ to whoever you select. I haven’t thought a lot about which functions have especially good properties here; it might be possible to improve substantially on the formula I gave above.
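As a quick sanity check on the $Y^{-k}$ variant, here is a small Python sketch. The function names, the quote grid, the $100 example cost, and the choice of $k$ values are all made up for illustration, and it assumes the $kY/(k-1)$ payment rule as reconstructed above: it grid-searches the quote that maximizes a respondent’s expected utility and confirms that quoting your true cost is (approximately) optimal.

```python
import numpy as np

def selection_prob(quote, k=2.0, scale=1.0):
    # Chance (up to the survey-giver's normalizing constant `scale`)
    # of surveying someone who quotes `quote`, under the Y^(-k) rule.
    return scale * quote ** (-k)

def payment(quote, k=2.0):
    # Payment to a selected respondent, assuming the kY/(k-1) rule above.
    return k * quote / (k - 1)

def expected_utility(quote, true_cost, k=2.0, scale=1.0):
    # Respondent's expected gain from quoting `quote` when taking the
    # survey actually costs them `true_cost`.
    return selection_prob(quote, k, scale) * (payment(quote, k) - true_cost)

# A respondent whose true cost is $100 should do best by quoting $100,
# for any k > 1 (the normalizing constant doesn't affect the argmax).
true_cost = 100.0
quotes = np.linspace(50, 200, 1501)
for k in (1.5, 2.0, 3.0):
    utils = [expected_utility(q, true_cost, k) for q in quotes]
    best_quote = quotes[int(np.argmax(utils))]
    print(f"k={k}: best quote ≈ {best_quote:.1f}")
```

With $k = 2$ this reduces to selecting with probability proportional to $1/Y^2$ and paying $2Y$.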
You might, as a survey-giver, end up in a situation where you already have more than enough datapoints from $10-value-of-time respondents and what you mostly care about is getting high-value-of-time datapoints: you’d happily pay $1000 to get a random $100-value-of-time respondent, but you might need to pay out $5000 in total rewards before enough people have gone through the mechanism that a $100-value-of-time respondent actually gets selected.
Intuitively it feels like there should be some way to target just these high-value-of-time respondents, but I think this is fundamentally kind of impossible? Suppose 5% of respondents hate surveys and disvalue taking this one at $100, while the rest don’t mind surveys and would do it for free. Any strategy which ends up surveying a survey-hater has to pay them at least $100, which means that in order to distinguish the two groups you have to make it worth a survey-liker’s time not to pretend to be a survey-hater. So every survey-liker needs to be paid at least $100 * p(someone who quotes $100 ends up being surveyed) in expectation. There are 19 survey-likers for every survey-hater, and that probability has to be high enough that you reach one survey-hater in expectation, so you end up doling out at least 19 * $100 = $1900 to survey-likers before you can get your survey-hater datapoint.
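To make the arithmetic behind that $1900 explicit, here is a back-of-the-envelope sketch in Python (the pool size is a made-up number and cancels out of the bound):

```python
n_respondents = 10_000        # hypothetical pool size; the bound doesn't depend on it
hater_fraction = 0.05         # 5% disvalue the survey at $100
hater_cost = 100.0
n_haters = hater_fraction * n_respondents
n_likers = (1 - hater_fraction) * n_respondents

# To survey one hater in expectation, the probability q that a $100 quote
# gets surveyed must be at least 1 / n_haters.
q = 1 / n_haters

# Each liker must be paid at least q * $100 in expectation so that truthful
# quoting beats pretending to be a hater.
min_payout_to_likers = n_likers * q * hater_cost
print(min_payout_to_likers)   # 1900.0, i.e. 19 likers per hater * $100
```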
Having said the above, though, I think it might be fair game to look for observed patterns in value of time in data you already have (eg maybe you know which zip codes have more high-value-of-time respondents) and disproportionately target those types of respondents? But now you’re introducing a new source of bias into your data, which you could again correct with stochastic sampling and inverse-probability weighting, and it becomes a question of which source of bias or noise you’re more worried about in your survey.
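Here is a minimal sketch of the stochastic-sampling-plus-inverse-weighting correction I have in mind (the population, the lognormal value-of-time distribution, and every constant are invented purely for illustration): oversample high-value-of-time people, then weight each collected response by the reciprocal of its inclusion probability, Horvitz-Thompson style, to recover an approximately unbiased population estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented population: value of time drives both the quantity the survey
# measures and how aggressively we target someone.
n = 200_000
value_of_time = rng.lognormal(mean=3.0, sigma=0.8, size=n)       # $/hour, made up
response = 2.0 + 0.05 * value_of_time + rng.normal(0.0, 1.0, n)  # whatever the survey asks about

# Deliberately oversample high-value-of-time people (eg by zip code),
# but record the inclusion probability used for each person.
incl_prob = np.clip(value_of_time / 500.0, 0.01, 0.5)
sampled = rng.random(n) < incl_prob

naive_mean = response[sampled].mean()       # biased upward by the targeting
weights = 1.0 / incl_prob[sampled]          # inverse-probability weights
weighted_mean = np.average(response[sampled], weights=weights)

print(f"true mean     {response.mean():.2f}")
print(f"naive mean    {naive_mean:.2f}")     # overestimates
print(f"weighted mean {weighted_mean:.2f}")  # close to the true mean
```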