LW/CFAR should develop a rationality curriculum for Elementary School Students. While the Sequences are a great start for adults and precocious teens with existing sympathies to the ideas presented therein, there’s very little in the way of rationality training accessible to (let alone intended for) children.
AABoyles
I have converted Rationality Abridged to EPUB and MOBI formats. The code to accomplish this is stored in this repository.
In July I started a Caloric Restriction Diet, fasting for an entire (calendar) day twice weekly. I did this out of a desire for the potential longevity benefits, but since then it’s had a rather happy (albeit utterly predictable) side-effect: I lost 10 pounds!
Impact concerns notwithstanding, there are some practical constraints: Elon Musk and Sergey Brin are naturalized U.S. citizens, which makes them ineligible to serve as U.S. President (the Constitution requires a natural-born citizen).
We might not want to draw that tick mark just yet. Our other “Global Eradication Target”, Polio, has dropped into the 10^3 range of annual cases several times. The New York Times likened beating those last few cases to “Trying to squeeze Jell-O to death.” Not that humanity doesn’t deserve a collective pat on the back, but let’s not call the job done until the job is done.
The universe we perceive is probably a simulation of a more complex Universe. In a break with the simulation hypothesis, however, the simulation is not of human origin. Instead, our existence is simply an emergent property of the physics (and stochasticity) of the simulation.
Anything sufficiently far away from you is causally isolated from you. Because of the fundamental constraints of physics, information from there can never reach here, and vice versa. You may as well be in separate universes.
The performance of AlphaGo got me thinking about algorithms we can’t access. In the case of AlphaGo, we implemented the algorithm (AlphaGo) which discovered some strategies we could never have created. (Go Master Ke Jie famously said “I would go as far as to say not a single human has touched the edge of the truth of Go.”)
Perhaps we can imagine a sort of “logical causal isolation.” An algorithm is logically causally isolated from us if we cannot discover it (e.g. in the case of the Go strategies that AlphaGo used) and we cannot specify an algorithm to discover it (except by random accident) given finite computation over a finite time horizon (i.e. in the lifetime of the observable universe).
Importantly, we can devise algorithms which search the entire space of algorithms (e.g. generate all possible strings of bits of length less than n, as n approaches infinity), but there's little reason to expect that such a strategy will result in any useful outputs of some finite length (there appear to be enough atoms in the universe () to represent all possible algorithms of length ).

There's one important weakness in LCI that doesn't exist in Physical Causal Isolation: we can randomly jump to algorithms of arbitrary lengths. This gives us the weird ability to pull stuff from outside our LCI-cone into it. Unfortunately, we cannot do so with any expectation of arriving at a useful algorithm. (There's an interesting question, about which I haven't yet thought much: what is the distribution of useful algorithms of a given length?) Hence we must add the caveat "except by random accident" to our definition of LCI.
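The exhaustive enumeration mentioned above can be sketched in a few lines. This is a minimal illustration, not a serious search procedure; it assumes algorithms are encoded as bitstrings, and it makes the combinatorial explosion obvious.

```python
from itertools import product

def all_bitstrings(max_len):
    """Yield every bitstring of length 1..max_len, shortest first.

    As max_len grows without bound, this enumeration covers the
    entire space of finite bitstrings (and so, under some fixed
    encoding, every possible algorithm).
    """
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

# The space doubles with each added bit: there are 2^(n+1) - 2
# bitstrings of length at most n, so exhaustive search becomes
# physically hopeless long before the lengths of interesting programs.
print(len(list(all_bitstrings(4))))  # 2 + 4 + 8 + 16 = 30
```

The point of the sketch is that enumerability in principle buys us nothing in practice: the search space outruns any finite computation almost immediately.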
We aren’t LCI’d from the strategies AlphaGo used, because we created AlphaGo and AlphaGo discovered those strategies (even if human Go masters may never have discovered them independently). I wonder what algorithms exist beyond not just our horizons, but the horizons of all the algorithms which descend from everything we are able to compute.
Nobody wants to hear that you will try your best. It is the wrong thing to say. It is like saying “I probably won’t hit you with a shovel.” Suddenly everyone is afraid you will do the opposite.
--Lemony Snicket, All the Wrong Questions
This research doesn’t imply the non-existence of a Great Filter (contra this post’s title). If we take the paper’s own estimates, there will be approximately 10^20 terrestrial planets in the Universe’s history. Given that they estimate the Earth preceded 92% of these, there currently exist approximately 10^19 terrestrial planets, any one of which might have evolved intelligent life. And yet, we remain unvisited and saturated in the Great Silence. Thus, there is almost certainly a Great Filter.
“So, I threw a party for time travelers last week. It was awful. Nobody showed up.”
“Well, did you invite anyone?”
“Not yet.”
Backed. Thanks Rick! I can’t wait to hear these things if they actually get created.
See also: Simulate and defer to more rational selves.
This is actually a fairly healthy field of study. See, for example, Nonphotosynthetic Pigments as Potential Biosignatures.
It may have discovered some property of physics which enabled it to expand more efficiently across alternate universes, rather than across space in any given universe. Thus it would be unlikely to colonize much of any universe (specifically, ours).
Excellent, thank you!
Done. Looking forward to seeing your results!
If you haven’t visited the page, don’t. It isn’t worth your time.
Hi Everyone! I’m AABoyles (that’s true most places on the internet besides LW).
I first found LW when a colleague mentioned That Alien Message over lunch. I said something to the effect of “That sounds like an Arthur C. Clarke short story. Who is the author?” “Eliezer Yudkowsky,” he said, and sent me the link. I read it, and promptly forgot about it. Fast forward a year, and another friend posts the link to HPMOR on Facebook. The author’s name sounded very familiar. I read it voraciously, subscribed to the Main RSS feed, and lurked for a year.
I joined the community last month because I wanted to respond to a specific discussion, but I’ve been having a lot of fun since I got here. I’m interested in finding ways to achieve the greatest good (read: reducing the number of lost Disability Adjusted Life Years), including Effective Altruism and Global Catastrophic Risk Reduction.
It occurs to me that the world could benefit from a more affirmative fact checker. Existing fact checkers are appropriately rude to people who publicly make false claims, but there’s not much in the way of celebration of people who make difficult true claims. For example, Politifact awards “Pants on Fire” for bald lies, but only “True” for bald truths. I think there should be an even higher-status classification for true claims that run counter to the interests of the speaker. For example, we could award “Bayesian Stars” to figures who publicly update on new evidence, or “Bullets Bitten” to public figures who promulgate true evidence that weakens their own arguments.
I have taken the survey, and can’t wait to see the results on the calibration questions. Post-hoc self-assessment suggests I have a long way to go...