Do you say this because of the overhead of filling out the application?
I’m interested in hearing about it. Doesn’t have to be that polished, just enough to get the idea.
(For context, I work on the S-Process)
Awesome! I’ve got a pretty full couple of days, but should have a sketch sometime over the weekend.
And I say it because of a mix of filling in the application (which is heavy duty in a few ways that kinda make sense for orgs, but not really for an individual), and the way s-process evaluations don’t neatly fit checking dozens of additional applications which require lots of technical reading to assess. You kinda want a way to scalably use existing takes on research value from people you trust somewhat but who aren’t full recommenders, like many other funders use, rather than assessment by recommenders who have scarce time.
(you have much more visibility into the s-process than me; I’ve been keen to get a better sense of it for a couple of years, and if there are sharable docs/screenshots of the software I’d be happy to become better informed and less likely to have my suggestions miss)
> which is heavy duty in a few ways that kinda make sense for orgs, but not really for an individual
This makes sense, though it’s certainly possible to get funded as an individual. Based on my quick count, ~4 individuals were funded this round.
> You kinda want a way to scalably use existing takes on research value from people you trust somewhat but who aren’t full recommenders
Speculation grants basically match this description. One possible difference is that there’s an incentive for speculation granters to predict what recommenders in the round will favor (though they make speculation grants without knowing who’s going to participate as a recommender). I’m curious for your take.
> I’ve been keen to get a better sense of it
It’s hard to get a good sense without seeing it populated with data, but I can’t share real data (and I haven’t yet created good fake data). I’ll try my best to give an overview though.
Recommenders start by inputting two pieces of data: (a) how interested they are in investigating each proposal, and (b) disclosures of any possible conflicts of interest, so that other recommenders can vote on whether they should be recused.
They spend most of the round using this interface, where they can input marginal value function curves for the different orgs. They can also click on an org to see info about it (all of the info from the application form, which in my example is empty) and notes (both their notes and other recommenders’).
The MVF graph shows how much they believe each successive dollar is worth. We force curves to be non-increasing, so marginal value never goes up. On my graph you can see the shaded area visualizing how much money is allocated to the different proposals as we sweep down the graph from the highest marginal value.
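If it helps make the sweep concrete, here’s a toy sketch of that allocation idea. This is not the actual S-process code: it assumes hypothetical piecewise-constant MVF curves, and `allocate`, the org names, and the numbers are all made up for illustration.

```python
# Toy sketch of "sweeping down" a set of marginal value function (MVF)
# curves to allocate a budget. Each curve is a list of non-increasing
# (marginal value per dollar, dollar width) segments.

def allocate(mvfs, budget):
    """Fund the highest-marginal-value segments first, i.e. sweep a
    value threshold down across all curves until the budget runs out."""
    # Flatten every curve into (value, width, proposal) segments.
    segments = []
    for name, curve in mvfs.items():
        for value, width in curve:
            segments.append((value, width, name))
    # Highest marginal value first.
    segments.sort(key=lambda s: s[0], reverse=True)

    grants = {name: 0.0 for name in mvfs}
    for value, width, name in segments:
        if budget <= 0:
            break
        spend = min(width, budget)
        grants[name] += spend
        budget -= spend
    return grants

# Hypothetical example: two proposals, $150k budget.
mvfs = {
    "org_a": [(5.0, 50_000), (2.0, 100_000)],  # first $50k valued at 5x, next $100k at 2x
    "org_b": [(3.0, 80_000), (1.0, 100_000)],
}
print(allocate(mvfs, 150_000))
```

With this budget the sweep fully funds org_a’s 5x segment and org_b’s 3x segment, then the remaining $20k goes into org_a’s 2x segment before org_b’s 1x segment is reached.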
There are also various comparison tools, the most popular of which is the Sankey chart, which shows how much money flows from the funder, through the different recommenders, to the different orgs. The disagreements matrix is one of the more useful tools: it shows where recommenders disagree the most, which helps them figure out what to talk about in their meetings.
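I don’t know the actual metric behind the disagreements matrix, but one simple version, just to illustrate the idea, would compare recommenders’ allocations pairwise; the names and numbers below are made up:

```python
# Hypothetical disagreement metric: for each pair of recommenders, sum
# the absolute differences in how they would allocate to each org.
# Larger totals = bigger disagreement = more worth discussing.

def disagreement_matrix(allocations):
    """allocations: {recommender: {org: dollars}}.
    Returns {(rec_a, rec_b): total absolute allocation difference}."""
    recs = sorted(allocations)
    matrix = {}
    for i, a in enumerate(recs):
        for b in recs[i + 1:]:
            orgs = set(allocations[a]) | set(allocations[b])
            diff = sum(abs(allocations[a].get(o, 0) - allocations[b].get(o, 0))
                       for o in orgs)
            matrix[(a, b)] = diff
    return matrix

allocations = {
    "rec_1": {"org_a": 70_000, "org_b": 80_000},
    "rec_2": {"org_a": 150_000, "org_b": 0},
}
print(disagreement_matrix(allocations))
```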
If you’re interested in the algorithm more than the app, I have a draft attempting to explain it.
>> You kinda want a way to scalably use existing takes on research value from people you trust somewhat but who aren’t full recommenders
> Speculation grants basically match this description. One possible difference is that there’s an incentive for speculation granters to predict what recommenders in the round will favor (though they make speculation grants without knowing who’s going to participate as a recommender). I’m curious for your take.
Speculation grants were a great addition! However, applying for a speculation grant still commits the S-process to doing a full evaluation, along with the heavy application process on the applicant’s side. I think this can be streamlined a fair amount without losing evaluation quality; draft proposal started :)
Thanks for all the extra info on the s-process, this helps clarify my thinking!