Today I encountered a real-life account of the chain story — involving a cow rather than an elephant — around 24:10 into the “Best of BackStory, Vol. 1” episode of the podcast BackStory.
“Accuracy-boosting” or “raising accuracy”?
Source. But the non-cached page says “The details of this job cannot be viewed at this time,” so maybe the job opening is no longer available.
FWIW, I’m a bit familiar with Dafoe’s thinking on the issues, and I think it would be a good use of time for the right person to work with him.
Hi Rick, any updates on the Audible version?
Just donated!
Hurray!
Any chance you’ll eventually get this up on Audible? I suspect that in the long run, it can find a wider audience there.
Another attempt to do something like this thread: Viva la Books.
I guess subjective logic is also trying to handle this kind of thing. From Jøsang’s book draft:
Subjective logic is a type of probabilistic logic that allows probability values to be expressed with degrees of uncertainty. The idea of probabilistic logic is to combine the strengths of logic and probability calculus, meaning that it has binary logic’s capacity to express structured argument models, and it has the power of probabilities to express degrees of truth of those arguments. The idea of subjective logic is to extend probabilistic logic by also expressing uncertainty about the probability values themselves, meaning that it is possible to reason with argument models in presence of uncertain or incomplete evidence.
Though maybe this particular formal system has really undesirable properties, I don’t know.
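To make the “uncertainty about the probability values themselves” idea concrete, here’s a minimal sketch (my own illustration, not code from Jøsang’s draft; the class and function names are mine) of the binomial opinion that subjective logic is built on, if I’m reading the formalism right. Belief, disbelief, and an explicit uncertainty mass sum to 1, and evidence counts map onto an opinion via the standard Beta-distribution correspondence:

```python
from dataclasses import dataclass

@dataclass
class BinomialOpinion:
    """A binomial opinion (b, d, u, a) with b + d + u = 1."""
    belief: float           # b: mass supporting the proposition
    disbelief: float        # d: mass against the proposition
    uncertainty: float      # u: mass not committed either way
    base_rate: float = 0.5  # a: prior probability absent evidence

    def __post_init__(self) -> None:
        total = self.belief + self.disbelief + self.uncertainty
        assert abs(total - 1.0) < 1e-9, "b + d + u must equal 1"

    @classmethod
    def from_evidence(cls, positive: float, negative: float,
                      base_rate: float = 0.5) -> "BinomialOpinion":
        """Map positive/negative observation counts to an opinion,
        using W = 2 as the non-informative prior weight."""
        W = 2.0
        denom = positive + negative + W
        return cls(positive / denom, negative / denom, W / denom, base_rate)

    def projected_probability(self) -> float:
        """Collapse the opinion to a single probability: P = b + a * u."""
        return self.belief + self.base_rate * self.uncertainty

# Two observations and two hundred give the same projected probability
# (0.5) but carry very different uncertainty mass:
few = BinomialOpinion.from_evidence(1, 1)
many = BinomialOpinion.from_evidence(100, 100)
print(few.projected_probability(), few.uncertainty)    # 0.5  0.5
print(many.projected_probability(), many.uncertainty)  # 0.5  ~0.0099
```

The point the quoted passage is making falls out directly: ordinary probabilistic logic would report 0.5 in both cases, while the opinion representation keeps track of how much of that 0.5 is actual evidence versus ignorance.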
Donated $300.
Never heard of him.
For those who haven’t been around as long as Wei Dai…
Eliezer tells the story of coming around to a more Bostromian view, circa 2003, in his coming of age sequence.
Just FYI, I plan to be there.
Any idea when the book is coming out?
Just FYI to readers: the source of the first image is here.
I don’t know if this is commercially feasible, but I do like this idea from the perspective of building civilizational competence at getting things right on the first try.
Might you be able to slightly retrain so as to become an expert on medium-term and long-term biosecurity risks? Biological engineering poses serious global catastrophic risk (GCR) over the next 50 years (and of course after that as well), and very few people are trying to think through the issues on more than a 10-year time horizon. FHI, CSER, GiveWell, and perhaps others each have a decent chance of wanting to hire people into such research positions over the next few years. (GiveWell is looking to hire a biosecurity program manager right now, but I assume you can’t acquire the requisite training and background immediately.)
I think it’s partly not doing enough far-advance planning, but also partly just a greater-than-usual willingness to Try Things that seem like good ideas even if the timeline is a bit rushed. That’s how the original minicamp happened; it ended up going so well that it inspired us to develop and launch CFAR.
Fixed, thanks.