LW/CFAR should develop a rationality curriculum for Elementary School Students. While the Sequences are a great start for adults and precocious teens with existing sympathies to the ideas presented therein, there’s very little in the way of rationality training accessible to (let alone intended for) children.
AABoyles
Please Help Metaculus Forecast COVID-19
[Question] Are long-form dating profiles productive?
The Bentham Prize at Metaculus
I just increased my Altruistic Effectiveness and you should too
I have converted Rationality Abridged to EPUB and MOBI formats. The code to accomplish this is stored in this repository.
In July I started a caloric-restriction diet, fasting for an entire (calendar) day twice weekly. I did this out of a desire for the potential longevity benefits, but it has since had a rather happy (albeit utterly predictable) side effect: I’ve lost 10 pounds!
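The arithmetic roughly checks out. Here's a back-of-the-envelope sketch; the ~2,000 kcal per skipped day and the ~3,500 kcal-per-pound rule of thumb are my assumptions, not figures from the post:

```python
# Back-of-the-envelope: weeks to lose 10 lb by fasting two days per week.
# Assumptions (mine, not the post's): ~2000 kcal skipped per fast day,
# and the common ~3500 kcal-per-pound heuristic.
KCAL_PER_FAST_DAY = 2000
FAST_DAYS_PER_WEEK = 2
KCAL_PER_POUND = 3500

def weeks_to_lose(pounds: float) -> float:
    """Estimated weeks of twice-weekly fasting to lose the given weight."""
    weekly_deficit = KCAL_PER_FAST_DAY * FAST_DAYS_PER_WEEK
    return pounds * KCAL_PER_POUND / weekly_deficit

print(weeks_to_lose(10))  # → 8.75 weeks, i.e. about two months
```

Roughly two months for ten pounds, which is consistent with "since July."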
Impact concerns notwithstanding, there are some practical constraints: Elon Musk and Sergey Brin are naturalized (rather than natural-born) US citizens, which makes them ineligible to serve as US President.
We might not want to draw that tick mark just yet. Our other “Global Eradication Target”, Polio, has dropped into the 10^3 range of annual cases several times. The New York Times likened beating those last few cases to “Trying to squeeze Jell-O to death.” Not that humanity doesn’t deserve a collective pat on the back, but let’s not call the job done until the job is done.
The universe we perceive is probably a simulation of a more complex Universe. In a break with the standard simulation hypothesis, however, the simulation was not originated by humans (or our descendants). Instead, our existence is simply an emergent property of the physics (and stochasticity) of the simulation.
There’s an Awesome AI Ethics List and it’s a little thin
Anything sufficiently far away from you is causally isolated from you: because of the fundamental constraints of physics, information from there can never reach here, and vice versa. You may as well be in separate universes.
The performance of AlphaGo got me thinking about algorithms we can’t access. In the case of AlphaGo, we implemented an algorithm which discovered strategies we could never have created ourselves. (Go master Ke Jie famously said, “I would go as far as to say not a single human has touched the edge of the truth of Go.”)
Perhaps we can imagine a sort of “logical causal isolation.” An algorithm is logically causally isolated (LCI) from us if we cannot discover it directly (as with the Go strategies that AlphaGo used) and we cannot specify an algorithm to discover it (except by random accident) given finite computation over a finite time horizon (i.e. within the lifetime of the observable universe).
Importantly, we can devise algorithms which search the entire space of algorithms, e.g.:

generate all possible strings of bits of length less than n, as n approaches infinity

But there’s little reason to expect such a strategy to result in any useful output: there are only enough atoms in the universe to represent all possible algorithms up to some quite modest length.

There’s one important weakness in LCI (that doesn’t exist in Physical Causal Isolation): we can randomly jump to algorithms of arbitrary lengths. This stipulation gives us the weird ability to pull stuff from outside our LCI-cone into it. Unfortunately, we cannot do so with any expectation of arriving at a useful algorithm. (There’s an interesting question, about which I haven’t yet thought, regarding the distribution of useful algorithms of a given length.) Hence we must add the caveat to our definition of LCI: “except by random accident.”
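The brute-force enumeration described above can be sketched in a few lines (a toy illustration; the enumeration itself is trivial, but the space doubles with each additional bit, which is the point):

```python
from itertools import product

def all_bitstrings(max_len: int):
    """Yield every bit string of length 1..max_len, shortest first.

    The count grows as 2^(n+1) - 2, so exhaustive search over even
    modestly long "algorithms" is hopeless in practice.
    """
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

strings = list(all_bitstrings(3))
print(len(strings))  # → 14 (that is, 2 + 4 + 8)
```

Nothing here helps with LCI, of course: the enumeration reaches every finite string eventually, but with no expectation of hitting a useful one before the universe runs out of time.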
We aren’t LCI’d from the strategies AlphaGo used, because we created AlphaGo and AlphaGo discovered those strategies (even if human Go masters may never have discovered them independently). I wonder what algorithms exist beyond not just our horizons, but the horizons of all the algorithms which descend from everything we are able to compute.
Nobody wants to hear that you will try your best. It is the wrong thing to say. It is like saying “I probably won’t hit you with a shovel.” Suddenly everyone is afraid you will do the opposite.
--Lemony Snicket, All the Wrong Questions
This research doesn’t imply the non-existence of a Great Filter (contra this post’s title). If we take the paper’s own estimates, approximately 10^20 terrestrial planets will form over the Universe’s history. Given that they estimate the Earth preceded 92% of these, there currently exist approximately 10^19 terrestrial planets, any one of which might have evolved intelligent life. And yet we remain unvisited, saturated in the Great Silence. Thus, there is almost certainly a Great Filter.
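A quick check of that arithmetic (the 10^20 total and the 92% figure are the paper’s estimates as quoted above):

```python
# If ~10^20 terrestrial planets will ever form, and Earth preceded 92% of
# them, then roughly 8% have already formed.
total_planets = 1e20
fraction_after_earth = 0.92

existing_now = total_planets * (1 - fraction_after_earth)
print(f"{existing_now:.0e}")  # ≈ 8e+18, i.e. on the order of 10^19
```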
“So, I threw a party for time travelers last week. It was awful. Nobody showed up.”
“Well, did you invite anyone?”
“Not yet.”
Backed. Thanks Rick! I can’t wait to hear these things if they actually get created.
Another Case Study of Inadequate Equilibrium in Medicine
See also: Simulate and defer to more rational selves.
I have taken the survey, and can’t wait to see the results on the calibration questions. Post-hoc self-assessment suggests I have a long way to go...