Why write more: improve your epistemics, self-care, & 28 other reasons
Announcing Nonlinear Emergency Funding
Thanks for writing this! Some other useful lists of resources:
AI Safety Support’s giant list of useful links. It’s got a lot of good stuff in there and stays pretty up to date.
“AI safety resources and materials” tag on the EA Forum
“Collections and resources” tag on the EA Forum
“Collections and resources” tag on LessWrong
“List of links” tag on LessWrong
Whoops. Thanks for pointing that out. Updated it.
Should probably be merged with the Lists of Links tag.
If you want to learn technical AI safety, here’s a list of AI safety courses, reading lists, and resources
Two reasons we might be closer to solving alignment than it seems
Love this!
If the offer is still open, you might want to add it to EA Houses so more people see it.
The Parable of the Boy Who Cried 5% Chance of Wolf
I’m not super into that, but I’ve heard good things from people about Otter.ai.
How and why to turn everything into audio
Meditation course claims 65% enlightenment rate: my review
Four reasons I find AI safety emotionally compelling
Love this! Added it to our list of AI safety curricula, reading lists, and courses.
Thanks for sharing this.
Good catch! Yeah, I’m switching from .co to .org, and the redirect link is currently not working for some obscure reason I’m still looking into. In the meantime, I’ve updated the link; the new one is here: http://www.katwoods.org/home/june-14th-2019
New: use The Nonlinear Library to listen to the top LessWrong posts of all time
I also wonder about this. If I’m understanding the post and comment right, the idea is that if you don’t formulate it mathematically, it doesn’t generalize robustly enough? And that to formulate something mathematically you need to be ridiculously precise/pedantic?
Although this is probably wrong and I’m mostly invoking Cunningham’s Law.
Thank you! This clarifies a lot. The dialogue was the perfect blend of entertaining and informative.
I’d see if you can either include it in the original post or post it as a separate one, because it really helps fill in the rationale.
Regardless of the exact starting point, seekers of “True Names” quickly find themselves recursing into a search for “True Names” of lower-level components of agency, like:
Optimization
Goals
World models
Abstraction
This is the big missing piece for me. Could you elaborate on how you go from trying to find the True Names of human values to things like agency, abstraction, and embeddedness?
Goals makes sense, but it’s not obvious why the rest would be important or relevant. I feel like this reasoning would lead you to thinking about meta-ethics or something, not embeddedness and optimization.
I suspect I’m missing a connecting piece here that would make it all click.
Cool setup! To keep it light, I always have a baggy that contains one of each of the following: painkiller, earbud rubber tip (sucks to lose one), tissue, Pepto-Bismol tablet, caffeine pill, melatonin, earplugs, bandaid, meds, hair elastic, tampon, travel toothbrush, bit of floss for when something super annoying is stuck between my teeth, stain remover wipe.
Can’t tell you how many times I felt like a hero for having a Pepto-Bismol tablet or bandaid available.
The key is to set an alarm to refill it whenever you take something out; otherwise it stops being as useful.