An update on this: sadly I underestimated how busy I would be after posting this bounty. I spent 2h reading this and Thomas' post the other day, but didn't manage to get into the headspace of evaluating the bounty (i.e. forming my own interpretation of John's post, and then deciding whether Thomas' distillation captured that). So I will not be evaluating this. (Still happy to pay if someone else I trust claims Thomas' distillation was sufficient.) My apologies to John and Thomas about that.
Sorry for the late reply: no, it does not.
Cool, I’ll add $500 to the distillation bounty then, to be paid out to anyone you think did a fine job of distilling the thing :) (Note: this should not be read as my monetary valuation for a day of John’s work!)
(Also, a cooler pay-out would be basis points, or less, of Wentworth impact equity)
Thanks for being open to suggestions :) Here’s one: you could award half the prize pool to compelling arguments against AI safety. That addresses one of John’s points.
For example, stuff like “We need to focus on problems AI is already causing right now, like algorithmic fairness” would not win a prize, but “There’s some chance we’ll be able to think about these issues much better in the future once we have more capable models that can aid our thinking, making effort right now less valuable” might.
How long would it have taken you to do the distillation step yourself for this one? I’d be happy to post a bounty, but price depends a bit on that.
Bought the hat. I will wear it with pride.
Curious if you have suggestions for a replacement term for “grabby” that you’d feel better about?
“likely evolved via natural selection”
My default expectation would be that it’s a civilization descended from an unaligned AGI, so I’m confused why you believe this is likely.
A guess: you said you’re optimistic about alignment by default—so do you expect aligned AGI acting in accordance with the interests of a natural-selection-evolved species?
That’s awesome, I didn’t know this.
I found a low six figure donor who filled 25% of their funding gap.
The rest has not been filled.

EDIT: They also got an ACX grant, but not enough to fill the whole funding gap I believe. I can intro donors who want to fill it to the radvac team. Email me at jacob at lesswrong.com

Original thread, and credit to ChristianKl, who picked up on and alerted me to rumours of the funding gap: https://www.lesswrong.com/posts/fBGzge5i4hfbaQZWy/usd1000-bounty-how-effective-are-marginal-vaccine-doses?commentId=XwA8mtvK8YCob2pLK#comments
This will be a bit of a disappointing answer (sorry in advance), but I indeed think UI-space is pretty high-dimensional and that there are many things you can do that aren’t just “remove options for all users”. Sadly, the best way I know of to implement this is to just do it myself and show the result, and I cannot find the time for that this week.
I also tried and failed to get my family to use it :( Among other things, I think they bounced off particularly hard on the massive drop-down of 10 different risk categories of ppl and various levels of being in a bubble.
I don’t think the blocker here was fundamentally quantitative—they think a bunch about personal finance and budgeting, so that metaphor made sense to them (and I actually expect this to be true for a lot of non-STEM ppl). Instead, I think UX improvements could go a long way.
I voted disagree, because at this point there have been plenty of COVID forecasting tournaments hosted by Good Judgement, Metaculus and several 3rd parties. Metaculus alone has 400 questions in the COVID category, a lot of which have 100+ predictions. I personally would find it quite easy to put together a group of forecasters with legibly good track record on COVID, but from working in this space I also do have a sense of where to start looking and who to ask.
This has now resolved false.
Do you mean that if one would like to go to such a bootcamp but thinks they won’t be able to get a visa in time, they should apply now to get invited to a future cohort?
Apply now—there’s a question at the end asking if you’re interested in future cohorts. And you can say in the final textbox that you’re not available for Round 1.
You could imagine an experimental design where you train worms to accomplish a particular task (e.g. learn which scent means food. If indeed they use scent. I don’t know.) You then upload both trained and untrained worms. If trained uploads perform better from the get-go than untrained ones, it’s some evidence it’s the same worm. To make it more granular, there are a lot of learning tasks from behavioural neuroscience you could adapt.
You could also do single neuron studies: train the worm on some task, find a neuron that seems to correspond to a particular abstraction. Upload worm; check that the neuron still corresponds to the abstraction.
Or ablation studies: you selectively impair certain neurons in the live trained worm, uploaded worm, and control worms, in a way that causes the same behaviour change only in the target individual.
Or you can get causal evidence via optogenetics.
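To make the first design above a bit more concrete, here’s a rough sketch of how the analysis could go. Everything in it is a made-up placeholder (the task, the scores, the sample sizes), assuming the behavioural task yields a per-worm success rate:

```python
# Hypothetical analysis sketch for the trained-vs-untrained upload comparison in
# the first design above. All numbers are simulated placeholders; in the real
# experiment each score would be an uploaded worm's success rate on the task it
# was (or wasn't) trained on, e.g. fraction of trials it moved toward the food scent.
import random
from statistics import mean
from scipy.stats import mannwhitneyu

random.seed(0)

def fake_score(mu):
    """Placeholder per-worm success rate, clamped to [0, 1]."""
    return min(1.0, max(0.0, random.gauss(mu, 0.10)))

trained_uploads = [fake_score(0.70) for _ in range(20)]    # uploads of trained worms
untrained_uploads = [fake_score(0.50) for _ in range(20)]  # uploads of untrained worms

# One-sided test: do uploads of trained worms outperform uploads of untrained
# worms from the get-go, i.e. without any re-training after upload?
stat, p_value = mannwhitneyu(trained_uploads, untrained_uploads, alternative="greater")

print(f"mean score, trained uploads:   {mean(trained_uploads):.2f}")
print(f"mean score, untrained uploads: {mean(untrained_uploads):.2f}")
print(f"Mann-Whitney U p-value: {p_value:.3f}")
# A small p-value would be some evidence that the upload preserved what the worm
# learned, which is the sense in which it's "the same worm".
```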
Jaan/Holden convo link is broken :(
Following up: as a result of this thread, radvac will likely get a $100k donation (from a donor who was not considering them before). This does not fill their funding needs however, and they’re looking to raise another $300k this year.
For any interested funders, PM me and I can share detailed call notes.
I’ll pay $425 for this answer, will PM you for payment details.
This argument does not seem to me like it captures the reason a rock is not an optimiser?
I would hand wave and say something like:
“If you place a human into a messy room, you’ll sometimes find that the room is cleaner afterwards. If you place a kid in front of a bowl of sweets, you’ll soon find the sweets gone. These and other examples are pretty surprising state transitions that would be highly unlikely in the absence of those humans you added. And when we say that something is an optimiser, we mean that it is such that, when it interfaces with other systems, it tends to make a certain narrow slice of state space much more likely for those systems to end up in.”
The rock seems to me to have very few such effects. The probability of state transitions of my room is roughly the same with or without a rock in a corner of it. And that’s why I don’t think of it as an optimiser.
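If I were to put very rough notation on that (my own hand-wavy formalization, not anything John has endorsed): for some narrow target set of states $S$ (tidy rooms, empty sweet bowls),

$$P\big(\text{system ends up in } S \mid \text{coupled to the candidate optimiser}\big) \;\gg\; P\big(\text{system ends up in } S \mid \text{left alone}\big).$$

For the rock, the two sides are roughly equal for pretty much any $S$ you care to name, which is why it doesn’t register as an optimiser on this picture.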