If you think it’s a hard bet for you to win, you are saying you agree that nothing bad will happen. So why worry?
Wanna bet some money that nothing bad will come of any of this on the timescales you are worried about?
Big fan of Galeev.
Some reading on this:
https://csss.uw.edu/files/working-papers/2013/wp128.pdf
http://proceedings.mlr.press/v89/malinsky19b/malinsky19b.pdf
https://arxiv.org/pdf/2008.06017.pdf
—
From my experience it pays to learn how to think about causal inference like Pearl (graphs, structural equations), and also how to think about causal inference like Rubin (random variables, missing data). Some insights only arise from a synthesis of those two views.
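To make the synthesis point concrete (a minimal sketch in my own notation, with a single treatment A, outcome Y, and observed confounders C): the backdoor adjustment in Pearl’s notation and the adjustment formula in Rubin’s notation are the same object, derived from different primitives.

$$p(y \mid \text{do}(a)) = \sum_c p(y \mid a, c)\, p(c)$$

$$E[Y(a)] = E\big[E[Y \mid A = a, C]\big], \quad \text{assuming } Y(a) \perp A \mid C \text{ and consistency.}$$

Seeing why these coincide, and when each formalism makes an argument easier to state, is the kind of insight I mean.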
Pearl is a giant in the field, but it is worth remembering that he’s unusual in another way (compared to a typical causal inference researcher) -- he generally doesn’t worry about actually analyzing data.
---
By the way, while trying to track down Ceres’ orbit, Gauss not only worked out the normal distribution, he developed the least squares method, too! So arguably the entire loss minimization framework in machine learning came about from thinking about celestial bodies.
Classical RL isn’t causal, because there’s no confounding (although I think it is very useful to think about classical RL causally, for doing inference more efficiently).
Various extensions of classical RL are causal, of course.
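To spell out the no-confounding point (a sketch under my simplifying assumptions, not a claim about any particular paper): in a standard MDP the agent’s action depends only on the observed state, so the state is the only common cause of the action and the next state, and it is observed. The interventional and observational transition distributions then coincide:

$$p(s' \mid \text{do}(a), s) = p(s' \mid a, s).$$

Confounding only enters in extensions, e.g. off-policy data generated by a behavior policy that saw context the learner does not.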
A lot of interesting work in algorithmic fairness isn’t really causal. Classical prediction problems aren’t causal.
However, I think domain adaptation, covariate shift, and semi-supervised learning are all causal problems.
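For instance, covariate shift is causal in exactly this sense: you posit a stable mechanism p(y | x) while p(x) moves, and you care about risk under a target distribution you have no labels from. Below is a minimal, hypothetical sketch of the standard importance-weighting fix; all variable names and numbers are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical covariate-shift setup (all names/numbers are illustrative):
# p(y | x) is shared between source and target, but p(x) differs.
rng = np.random.default_rng(0)

def sample(n, x_mean):
    x = rng.normal(x_mean, 1.0, size=(n, 1))
    y = (x[:, 0] + rng.normal(0.0, 1.0, size=n) > 0).astype(int)  # same p(y|x) everywhere
    return x, y

x_src, y_src = sample(2000, x_mean=-1.0)  # labeled source data from p(x)
x_tgt, _ = sample(2000, x_mean=+1.0)      # unlabeled target data from q(x)

# Estimate the density ratio q(x)/p(x) by training a probabilistic
# classifier to distinguish target points (label 1) from source points (label 0).
disc = LogisticRegression().fit(
    np.vstack([x_src, x_tgt]),
    np.concatenate([np.zeros(len(x_src)), np.ones(len(x_tgt))]),
)
p_is_tgt = disc.predict_proba(x_src)[:, 1]
weights = p_is_tgt / (1.0 - p_is_tgt)  # proportional to q(x)/p(x) here (equal sample sizes)

# Reweighting makes the source empirical risk target the risk under q(x).
model = LogisticRegression().fit(x_src, y_src, sample_weight=weights)
```

The reweighted fit is only as good as the stability assumption on p(y | x) and the overlap between the two covariate distributions; where overlap is poor, the weights blow up.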
---
I think predicting things you have no data on (“what if the AI does something we didn’t foresee”) is sort of an impossible problem via tools in “data science.” You have no data!
A few comments:
(a) I think “causal representation learning” is too vague; this overview (https://arxiv.org/pdf/2102.11107.pdf) covers a lot of different problems I would consider fairly unrelated under the same heading.
(b) I would try to read “classical causal inference” stuff. There is a lot of reinventing of the wheel (often, badly) happening in the causal ML space.
(c) What makes a thing “causal” is a distinction between a “larger” distribution we are interested in, and a “smaller” distribution we have data on. Lots of problems might look “causal” but really aren’t (in an interesting way) if formalized properly.
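To make (c) concrete (my toy contrast, nothing deep): in classical prediction you want p(y | x) and you have samples from p(x, y), so the distribution you want lives inside the distribution you have. In a causal problem the target sits outside the data:

$$\text{have: } p(y, a, c); \qquad \text{want: } p(y \mid \text{do}(a)), \text{ i.e. } p(Y(a) = y).$$

The target is not a functional of the observed joint without extra assumptions, and that gap is what makes a problem causal in an interesting way.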
Please tell Victor I said hi, if you get a chance :).
I gave a talk at FHI ages ago on how to use causal graphs to solve Newcomb type problems. It wasn’t even an original idea: Spohn had something similar in 2012.
I don’t think any of this stuff is interesting or relevant for AI safety. There’s a pretty big literature on model robustness and algorithmic fairness that uses causal ideas.
If you want to worry about the end of the world, we have climate change, pandemics, and the rise of fascism.
Counterfactuals (in the potential outcome sense used in statistics) and Pearl’s structural equation causality semantics are equivalent.
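A sketch of the correspondence (standard material, stated in my notation): given a structural equation for the outcome, the potential outcome is just that equation solved with the treatment set to a constant:

$$Y = f_Y(A, U_Y), \qquad Y(a) \equiv f_Y(a, U_Y).$$

Conversely, a collection of potential outcomes with suitable independence restrictions determines a structural model; that translation back and forth is the sense in which the two semantics are equivalent.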
Could you do readers an enormous favor and put references in when you say stuff like this:
“Vitamin D and Zinc, and if possible Fluvoxamine, are worth it if you get infected, also Vitamin D is worth taking now anyway (I take 5k IUs/day).”
“MIRI/CFAR is not a cult.”
What does being a cult space monkey feel like from the inside?
This entire depressing thread is reminding me a little of how long it took folks who watch Rick and Morty to realize Rick is an awful abusive person, because he’s the show’s main character, and isn’t “coded” as a villain.
+1 to all this.
I am not going to waste my time arguing against formalism. When it comes to things like formalism, I am going to follow in my grandfather’s footsteps, if it comes time to “have an argument” about it.
What Cummings is proposing is formalism with a thin veneer of silicon valley jargon, like “startups” or whatever, designed to be palatable to people like the ones who frequent this website.
He couldn’t be clearer about where his influences come from; he cites them at the end. It’s Moldbug and Siskind (Siskind’s email leaks show what his real opinions are; he’s just being a bit coy).
The proposed system is not going to be more democratic, it is going to be more formalist.
Fascism is bad, Christian.
My response is we have fancy computers and lots of storage—there’s no need to do psychometric models of the brain with one parameter anymore; we can leave that to the poor folks in the early 1900s.
How many parameters does a good model of the game of Go have, again? The human brain is a lot more complicated, still.
There are lots of ways to show single-parameter models are silly; for example, the discussions of whether Trump is “stupid” or not that keep going around in circles.
“Well, suppose that factor analysis was a perfect model. Would that mean that we’re all born with some single number g that determines how good we are at thinking?”
“Determines” is a causal word. Factor analysis will not determine causality for you.
I agree with your conclusion, though: g is not a real thing that exists.
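To see why factor analysis cannot settle this (a sketch in my notation): the one-factor model says

$$X_i = \lambda_i g + \epsilon_i, \qquad \Sigma = \lambda \lambda^\top + \Psi,$$

which is just a constraint on the observed covariance matrix Σ. Structures with no single common cause (e.g. many mutually reinforcing skills) can generate the same Σ, so a good fit tells you nothing about what intervening on anything would do.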
You should be doing stuff like this if you want to understand the effects of masks:
https://arxiv.org/pdf/2103.04472.pdf
https://auai.org/uai2021/pdf/uai2021.89.preliminary.pdf (this really is preliminary, e.g. they have not yet uploaded a newer version that incorporates peer review suggestions).
---
You can’t do the stuff in the second paper without worrying about the stuff in the first (unless your model is very simple).
For the benefit of other readers: this post is confused.
Specifically on this (although possibly also on other stuff): (a) causal and statistical DAGs are fundamentally not the same kind of object, and (b) no practical decision theory used by anyone includes the agent inside the DAG in the way this post describes.
---
“So if the EDT agent can find a causal structure that reflects their (statistical) beliefs about the world, then they will end up making the same decision as a CDT agent who believes in the same causal structure.”
A → B → C and A ← B ← C reflect the same statistical beliefs about the world.
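For the record, the one-line reason (a standard fact about Markov equivalence, nothing specific to this post): both chains encode exactly the single constraint A ⊥ C | B, and their factorizations define the same family of joint distributions,

$$p(a)\,p(b \mid a)\,p(c \mid b) \;=\; p(c)\,p(b \mid c)\,p(a \mid b),$$

so no amount of data drawn from that joint can distinguish the two causal directions.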