Karma: 70

[Question] I want to donate some money (not much, just what I can afford) to AGI Alignment research, to whatever organization has the best chance of making sure that AGI goes well and doesn’t kill us all. What are my best options, where can I make the most difference per dollar?

2 Aug 2022 12:08 UTC
14 points

What are these “outside of the Overton window” approaches to preventing AI apocalypse that Eliezer was talking about in his post?

14 Jun 2022 21:18 UTC
2 points

[Question] How would you explain Bayesian thinking to a ten year old?

5 Jan 2022 17:25 UTC
7 points
• Looks amazing!

I’d love to buy an ebook version though. Or even better—an audiobook.

• I don’t understand—its factors are 101 and 109, and both are more than 100.

• While I’d rather not test this empirically, I think I’m feeling pretty motivated to do this, and yet I can’t. I’d really like to solve this issue without resorting to hiring a professional assassin on myself.

[Question] How do you write original rationalist essays?

1 Dec 2021 8:08 UTC
22 points
• That poem was amazing.

How does a person factorize 11,009 in their head?
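One mental approach (an assumption on my part; the thread doesn’t say which method was used) is Fermat’s difference-of-squares trick: since 105² = 11,025 and 11,025 − 11,009 = 16 = 4², we get 11,009 = (105 − 4)(105 + 4) = 101 × 109, matching the factors mentioned above. A minimal sketch of that method:

```python
import math

def fermat_factor(n):
    """Fermat's difference-of-squares factorization for an odd composite n:
    search for a with a*a - n a perfect square b*b, so n = (a - b) * (a + b)."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1  # start at the ceiling of sqrt(n)
    while True:
        b_squared = a * a - n
        b = math.isqrt(b_squared)
        if b * b == b_squared:
            return a - b, a + b  # n = (a - b)(a + b)
        a += 1

print(fermat_factor(11009))  # (101, 109): 11009 = 105**2 - 4**2
```

The method is fast by hand exactly when the two factors are close together, as 101 and 109 are here.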

• You guys will probably find this Slate Star Codex post interesting:

https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/

Scott summarizes the Predictive Processing theory, explains it in a very accessible way (no math required), and uses it to explain a whole bunch of mental phenomena (attention, imagination, motor behavior, autism, schizophrenia, etc.)

Can someone ELI5/TLDR this paper for me, and explain it in a way more accessible to a non-technical person?

- How does backprop work if the information can’t flow backwards?
- In Scott’s post, he says that when lower-level sense data contradicts high-level predictions, high-level layers can override lower-level predictions without you noticing it. But if the low-level sense data has high confidence/precision, the higher levels notice it and you experience “surprise”. Which one of those is equivalent to the backprop error? Is it low-level predictions being overridden, or high-level layers noticing the surprise, or something else, like changing the connections between neurons to train the network and learn from the error somehow?