I gave a talk at FHI ages ago on how to use causal graphs to solve Newcomb-type problems. It wasn't even an original idea: Spohn had something similar in 2012.
I don't think any of this stuff is interesting or relevant for AI safety. There's already a pretty big literature on model robustness and algorithmic fairness that uses causal ideas.
If you want to worry about the end of the world, we have climate change, pandemics, and the rise of fascism.
Why did you give a talk on causal graphs if you didn't think this kind of work was interesting or relevant? Maybe I'm misunderstanding which part you're saying isn't interesting or relevant.