Good catch! Yeah, I’m switching to .org instead of .co, and the redirect link is currently not working for some obscure reason I’m still working out. In the meantime, I’ve updated the link, and the new one is here: http://www.katwoods.org/home/june-14th-2019
New: use The Nonlinear Library to listen to the top LessWrong posts of all time
I also wonder about this. If I’m understanding the post and comment right, the claim is that if you don’t formulate it mathematically, it doesn’t generalize robustly enough? And that to formulate something mathematically, you need to be ridiculously precise/pedantic?
Although this is probably wrong and I’m mostly invoking Cunningham’s Law.
Thank you! This clarifies a lot. The dialogue was the perfect blend of entertaining and informative.
You might see if you can either include it in the original post or post it as a separate one, because it really helps fill in the rationale.
Regardless of the exact starting point, seekers of “True Names” quickly find themselves recursing into a search for “True Names” of lower-level components of agency, like:
Optimization
Goals
World models
Abstraction
This is the big missing piece for me. Could you elaborate on how you go from trying to find the True Names of human values to things like what is an agent, abstraction, and embeddedness?
Goals makes sense, but it’s not obvious why the rest would be important or relevant. I feel like this reasoning would lead you to thinking about meta-ethics or something, not embeddedness and optimization.
I suspect I’m missing a connecting piece here that would make it all click.
Thank you so much for writing this! The community definitely needed this. This clarifies the motivation for agent foundations so much better than anything else I’ve read on it.
It should definitely be the go-to introduction to the topic and be included in the AGI Safety Fundamentals course.
So glad to hear it! I don’t use any particular resource, just the general principle: generate the feeling of loving-kindness towards something that’s easy for you, then maintain that emotion while thinking of something that’s slightly harder to feel loving-kindness towards, then slowly level up until you’re working on people who are really hard for you.
Good luck! Would love to hear how it goes :)
Thanks for the suggestions!
We’re working on setting up forum-specific channels. There’s just a technical issue to resolve first; once that’s fixed, it’ll happen.
Check out Tayl! It does a little of what you’re looking for. You can also use @VoiceReader on Android and NaturalReader on Chrome.
Interesting idea! We want this to stay automated so we can focus on other projects and this seems likely to be hard to automate. Will look into it though!
This is probably the most requested feature! We’re working on it, but it’s proving more difficult than we’d expected. Stay tuned.
In part inspired by this post, I did one hour a day of loving-kindness meditation for ten days and the results were phenomenal. It’s too soon to tell if it’ll stick, but I think it’s fixed about 80% of my impostor syndrome and anxiety around impact, which have been a major source of stress for me for years.
I’ve tried everything before, like CBT, ACT, concentration practice, IFS, exercise, therapy, etc etc. Nothing had worked. And this has been by far the most successful thing I’ve tried.
Will be writing about it in more detail on LessWrong when I write the review about the Finder’s Course in a few weeks. Thank you so much for writing this article that gave me the extra push and framework I needed.
So agree with this, and I have another option to add to the list: evaluating the net value of an animal’s life individually, because some animals seem clearly net negative (e.g. caged egg-laying hens) while others seem very plausibly net positive (beef cattle).
Because of this reasoning, I myself am no longer vegetarian/vegan, despite being vegan since I was 17 and vegetarian since I was 12 (so around 18 years veg). Instead, I eat cow products, because my investigations into their lives make me feel decently confident that they’re net positive. I still never eat anything from factory-farmed chickens, fish, or pigs, including eggs, because my research shows them to be very probably net negative. I’ll also eat wild-caught fish, because their lives up until death were unaffected by my actions, and their death, while bad, would have happened anyways; most ways of dying are bad, and I don’t think that a human-caused death is much worse for a fish, on average, than the counterfactual*.
Also, you mention coordination and what’s a good thing to promote in the community. I think that for the public as a whole, my approach is too complicated. However, in the EA / rationalist sphere, I think this method of making an internal model of animals could actually be far more popular and effective. Being a vegan except for cow products is a loooot easier of a change to make than giving up cheese and all meat.
*Of note, I’m the least confident about wild-caught fish and dairy. I think the thing that’s most likely to destroy my reasoning there is that I’m secretly sneaking in speciesism. Like, would I do the same for humans, controlling for flow-through effects?
Thank you!
You can listen to this and all the other Yudkowsky & Ngo/Christiano conversations in podcast form on the Nonlinear Library now.
You can also listen to them on any podcast player. Just look up Nonlinear Library.
I’ve listened to them as is and I find it pretty easy to follow, but if you’re interested in making it even easier for people to follow, these fine gentlemen have put up a ~$230 RFP/bounty for anybody who turns it into audio where each person has a different voice.
It would probably be easiest to just do it on our platform, since there’s a relatively easy way to change the voices; it would just be a tedious ~1-4 hours of work. My main bottleneck is management time, so I don’t have time to manage the process or to vet somebody I’d trust to do it without hurting the quality.
It does seem a shame, though, to have something so close to being even better and not let people do what’s clearly desired, just because I worry about accidentally messing up the quality of the audio. I think the main requirements are just being conscientious enough to do 1-4 hours of repetitive work and having attention to detail.
After a couple minutes of thinking on it, I think a potential solution would be a super quick and dirty way to delegate trust. I’ll give you access to our platform to change the voices if you either a) are getting or have a degree from an elite school (thus demonstrating a legible minimal amount of conscientiousness and ability to do boring tasks) or b) have at least 75 mutual EA friends with me on Facebook and can provide an EA reference vouching for your diligence.
Just DM me.
I’ll do it on a first-come, first-served basis.
If you do it with human voices, we’d also be happy to add that to the Library.
Finally, sorry for the delay. There was a comedy of errors: there was a bug in the system, I came down with a human bug at the same time (a cold, not covid :) ), and the articles were so long that our regular system wasn’t working, so things weren’t automatic like usual.
To be fair, there was indeed something wrong with our version! It was so long it messed up our system. We’ve only now fixed it, and it’s been released in three parts, along with the other Eliezer, Richard, and Paul conversations.
Interesting. Didn’t know it had that feature.
That’s the opposite of what Boomerang is offering, though. It’s only if there’s no reply that it’ll bump the message back to your inbox. So if you ask for something and they don’t get back to you, you get notified so you can bump them again.
For example, I might say to somebody, “Can you send me that spreadsheet we were talking about?” and tell Boomerang to send the message back to my inbox if they don’t get back to me by Monday. Then I can bump them and say, “Hey, just checking in on this spreadsheet.”
With Boomerang you can make it so that it will only return the message to your inbox if the person hasn’t replied to it yet. So if you send an email saying “Hey, can you do X?” and they don’t respond within a week, it’ll come back to your inbox. If they do respond though, it won’t remind you later.
I use it for the usual shortcut keys I use all the time, so mostly: alt-tab, alt-q (from the Chrome extension), ctrl+t, and ctrl+l (L as in llama).
Love this! Added it to our list of AI safety curricula, reading lists, and courses.
Thanks for sharing this.