Updated, thanks!
Fixed, thanks.
Maybe just use odds ratios. That’s what I use when I’m trying to make updates on the spot.
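For instance (my own illustration, not part of the original exchange): in odds form, Bayes' rule is just

$$\text{posterior odds} = \text{prior odds} \times \frac{P(E \mid H)}{P(E \mid \neg H)}$$

So prior odds of 1:4 (20%) combined with evidence that is 8 times as likely under H as under ¬H give posterior odds of 8:4 = 2:1, i.e. about 67%. That multiplication is easy to do on the spot.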
Working on MIRI’s current technical agenda mostly requires a background in computer science with an unusually strong focus on logic: see details here. That said, the scope of MIRI’s research program should be expanding over time. E.g. see Patrick’s recent proposal to model goal stability challenges in a machine learning system, which would require more typical AI knowledge than has usually been the case for MIRI’s work so far.
MIRI’s research isn’t really what a mathematician would typically think of as “math research” — it’s more like theory-heavy computer science research with an unusually significant math/logic component, as is the case with a few other areas of computer science research, e.g. program analysis.
Also see the “Our recommended path for becoming a MIRI research fellow” section on our research fellow job posting.
Fixed, thanks!
I tried this earlier, with Great Explanations.
I can’t email that address; I get a failure message from Google.
Oops. Should be fixed now.
Thanks! BTW, I’d prefer to have 0.1%, 1%, 99%, and 99.9% as options, rather than skipping over the 1% and 99% options as you have it now.
Fair enough. I’ve edited my original comment.
(For posterity: the text for my original comment’s first hyperlink originally read “0 and 1 are not probabilities”.)
Which is roughly the length of War and Peace or Atlas Shrugged.
0% probability is my most common answer as well, but I’m using it less often than I was choosing 50% on the CFAR calibration app (which forces a binary answer choice rather than an open-ended answer choice). The CFAR app has lots of questions like “Which of these two teams won the Super Bowl in 1978?” where I just have no idea. The trivia database Nanashi is using has, for me, a greater proportion of questions on which my credence is something more interesting than an ignorance prior.
I’d prefer not to allow 0 and 1 as available credences. But if 0 remained as an option I would just interpret it as “very close to 0” and then keep using the app, though if a future version of the app showed me my Bayes score then the difference between what the app allows me to choose (0%) and what I’m interpreting 0 to mean (“very close to 0”) could matter.
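To spell out why that difference could matter (my own illustration, assuming the app scored answers with a logarithmic scoring rule): under a log score of $\log p(\text{actual outcome})$, assigning exactly 0% to something that turns out true scores

$$\log(0) = -\infty \qquad \text{vs.} \qquad \log(0.001) \approx -6.9 \text{ nats for a } 0.1\% \text{ credence}$$

so a literal 0 wrecks the total score in a way that “very close to 0” does not.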
Awesome!
I’ve been dying for something like this after I zoomed through all the questions in the CFAR calibration app.
Notes so far:
The highest-available confidence is 99%, so the lowest-available confidence should be 1% rather than 0%. Or even better, you could add 99.9% and 0.1% as additional options.
So far I’ve come across one question that was blank. It just said “Category: jewelry” and had no other text. Somehow the answer was Ernest Hemingway.
Would be great to be able to sign up for an account so I could track my calibration across multiple sessions.
Huge kudos to you for putting this together, Habryka!
Besides Superintelligence, the latest “major” publication on the subject is Yudkowsky’s Intelligence Explosion Microeconomics. There are also a few articles related to the topic at AI Impacts.
In descending order of importance:
Instrumental rationality.
How to actually practice becoming more rational.
Rationality via cluster thinking and intelligent imitation.
I think Peter Singer wrote a paper arguing “no,” but I can’t find it at the moment.
Marblestone et al., “Physical principles for scalable neural recording.”
It was so bright out! The photo has my eyes completely closed, unfortunately. :)
People have complained about Sumatra not working with MIRI’s PDF ebooks, too. It was hard enough already to get our process to output the links we want on most readers, so we decided not to make the extra effort to additionally support Sumatra. I’m not sure what it would take.