Interesting. I think of heuristics as being almost the same as cognitive biases. If it helps System 1, it’s a heuristic. If it gets in the way of System 2, it’s a cognitive bias.
Not a disagreement, just an observation that we are using language differently.
Regarding the first enigma, the expectation that what has worked in the past will work in the future is not a feature of the world; it’s a feature of our brains. That’s just how neural networks work: they predict the future based on past data.
Regarding the third enigma, ethical principles are not features of the world, they are parameters of our neural networks, however those parameters have been acquired.
Regarding the second enigma, I am less confident, but I think something similar is going on. Here my metaphor is not the ML branch of AI, but the symbolic processing branch of AI. Or System 2 rather than System 1, to use a different metaphor. Logic and math are not features of the world, but features of our brains.
Right, and if doing computer-generated sudokus is a kata for developing the heuristics for doing sudokus, then perhaps solving computer-generated logic problems could be a kata for developing the heuristics for rationality.
I do sudokus. These are computer-generated and of consistent difficulty, so I can’t solve them from memory. Perhaps something similar could be done for math or logic problems, or story problems where cognitive biases work against the solutions.
Is gradient hacking a useful metaphor for human psychology? For example, peer pressure is a real thing. If I choose to spend time with certain people because I expect them to reinforce my behavior in certain ways, is that gradient hacking?
I have taken a few MOOCs and I agree with your assessment.
MOOCs are what they are. I see them as starting points, as building blocks. In the end, I’d rather take a free, dumbed-down intro MOOC from Andrew Ng at Stanford, than pay for an in-person, dumbed-down intro class from some clown at my local community college. At least there’s no sunk cost, so it’s easy to walk away if I lose interest.
An Einstein runs on pretty much the same hardware as the rest of us. If genetic engineering can get us to a planet full of Einsteins without running into hardware limitations, that may not qualify as an “intelligence explosion”, but it’s still a singularity in that we can’t extrapolate to the future on the other side.
Another thought… genetic engineering may be what will make us smart enough to build a safe AGI.
OK, good points. There is a spectrum here… if you live in a place where there’s a civil war every few years, then prepping for civil war makes a lot of sense. If you live in a place where the last civil war was 150 years ago, not so much.
CHAZ took place in a context where the most likely outcome was the failure of CHAZ, not the collapse of the larger society. CHAZ failed to prep for the obvious, if not the almost inevitable.
For things like hurricanes, one can look at the historical record, make a reasonable estimate, and do a prudent amount of prepping. For a societal collapse, there’s no data, so the estimate is based on a narrative. The narrative may be socially constructed, for example, a religious narrative about the End Times. Or it may be that prepping has become a hobby, and preppers talk to each other about their preps, and the guy who has 6 months of water and stored food gets more respect than the guy who has a week’s supply of water under his bed and whatever canned food is in his pantry. The difference is not really the utility functions, but the narratives and probability estimates that feed into the utility functions. The doomsday preppers are prepping more because they think doomsday is much more likely.
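The point about probabilities rather than utilities driving the difference can be made concrete with a toy expected-utility calculation. Everything here is hypothetical: the utility numbers, the cost model, and the two probability estimates are made up purely to illustrate that the same utility function recommends very different amounts of prepping once the probability estimate changes.

```python
# Toy sketch (all numbers hypothetical): two preppers share the SAME utility
# function but hold different probability estimates for a collapse.

def expected_utility(p_collapse, months_of_supplies):
    # Supplies help a lot in a collapse, with diminishing returns past 6 months;
    # storing them carries a small ongoing cost either way.
    utility_if_collapse = min(months_of_supplies, 6) * 10
    cost_of_prepping = months_of_supplies * 1
    return p_collapse * utility_if_collapse - cost_of_prepping

for p in (0.001, 0.3):  # hurricane-style skeptic vs. doomsday believer
    best = max(range(0, 13), key=lambda m: expected_utility(p, m))
    print(f"p(collapse)={p}: best months of supplies = {best}")
```

With the low estimate the optimum is to store essentially nothing; with the high estimate the same utility function recommends stocking up to the point of diminishing returns.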
(I completely agree with your advice to store some water. I do the same. Over-prepping runs into diminishing returns, and not prepping at all is irresponsible, but a modest amount of prepping is a no-brainer.)
How do you distinguish between your having a good day, and your opponent having a bad day?
If you read a Wikipedia article and think it’s very problematic, take five minutes and write about why it’s problematic on the talk page of the article.
FYI, I did exactly that a couple of weeks ago, and nothing happened (yet, at least). No politically charged issues, just a simple conflation of two place names with similar spelling. I thought about splitting the one page into two and figuring out what other pages should link to them… and decided that there was probably someone much more qualified than I was, who would actually enjoy cleaning this up, and who just needed a little nudge on the Talk page.
I was thinking of #1. #2 applies both to genetic selection and cultural selection.
ADA is definitely a contender, but my concern is that they may be too slow. I’d rather own a few coins, and rebalance as things develop.
(I own some ADA, and added more on the recent dip, but I have more ETH than ADA.)
A modest suggestion: first, learn how to shoot. Something simple, like a .22 target pistol. Find someone who knows what they’re doing and ask them to teach you. Learn how to load it, how to stand, how to hold it, how to aim, how to pull the trigger. Feel the recoil. Practice at a target range. None of this is particularly complicated, but “gun” will no longer be an abstraction, it will be something tied to body memory.
Now, think about whether you want to own a gun.
Thank you for writing this up! This is also something I want to learn about. FYI, there is a book coming out in a couple of months:
Even cultural heritage may be seen as a set of especially effective compression heuristics that is being passed down through generations.
“Especially effective” does not imply “beneficial to you as an individual”.
I like it. By all means, as long as we’re thinking about thinking, let’s think about how we label ourselves.
When I solve a sudoku, I typically make quick, incremental progress, then I get “stuck” for a while, then there is an insight, then I make quick, incremental progress until I finish. Not that there is anything profound about sudokus, but something like this might provide a controlled environment for studying insights. http://websudoku.com/ provides an endless supply of classic sudokus in 4 levels of difficulty. My experience is that the “Evil” level is consistently difficult. I have noticed that my being tired or distracted is enough to make one of these unsolvable.
You also discussed cross-discipline insights. There are sudoku variants, such as sudokus with knight’s-move constraints. Here my experience is that having recently worked on a sudoku variant tends to interfere with solving a classic sudoku. I also solve the occasional chess problem, but have not noticed any interaction with sudokus.
Instead of an either/or decision based on first principles, you might frame this as a “when” decision based on evidence. We’ve had about 4 months of real-world experience with the mRNA vaccines… if you wait another 4 months, that’s double the track record, and it’s always possible that new options will open up (say, a more traditional vaccine that’s more effective than J&J).
I would like to know which other ethical thought experiments have this pattern...
Isn’t the answer just “all of them”? An implication and its contrapositive are logically equivalent.
(if X then Y) is equivalent to (if ~Y then ~X). Any intuitive dissonance between X and Y is preserved by negating them into ~X and ~Y.
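The equivalence is easy to verify mechanically by enumerating all truth assignments. A minimal sketch, using the standard definition of material implication (the helper function `implies` is just an illustrative name, not anything from the original discussion):

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# Check that X -> Y and ~Y -> ~X agree on every truth assignment.
for x, y in product([False, True], repeat=2):
    assert implies(x, y) == implies(not y, not x)

print("X -> Y and ~Y -> ~X agree on all four truth assignments")
```

Since the two formulas have identical truth tables, any ethical thought experiment built on the implication carries over unchanged to its contrapositive.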