Has anyone else noticed that this paper is much clearer on definitions and much more readable than the vast majority of the AI safety literature, much of which it draws on? It has a lot of definitions that could go in an “encyclopedia of friendly AI,” so to speak.
Some extra questions:
How much time and effort did it take you to write all of this? What was the hardest part?
Do most systems today unintentionally have corrigibility simply because they aren’t complex enough to represent “being turned off” as a strong negative in their reward functions?
Are Newcomblike problems rarely found in the real world but much more likely to be found in AI settings (especially because the AI can face a predictor that models what it would do)?
It’s really nice to hear that the paper seems clear! Thanks for the comment.
I’ve been working on this since March, but at a very slow pace, and I took a few hiatuses. Most days when I worked on it, it was for less than an hour. After coming up with the initial framework to tie things together, the hardest part was trying, and failing, to think of interesting ways in which most of the Achilles heels presented could be used as novel containment measures. I discuss this a bit in the discussion section.
For 2 and 3, I can give some thoughts, but they aren’t necessarily thought through much more than those of many other people one could ask.
I would agree with this. For an agent to even have a notion of being turned off, it would need some sort of model that accounts for this but which isn’t learned via experience in a typical episodic learning setting (clearly, because you can’t learn after you’re dead). This would require a world model more sophisticated than anything the model-based RL techniques I know of would produce by default.
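A minimal sketch of what I mean, using tabular Q-learning in a toy episodic setting (the environment and names here are hypothetical, just for illustration): when shutdown terminates the episode, no reward signal ever arrives afterward, so the learned value of shutting off converges toward zero rather than toward a strong negative.

```python
# Toy episodic Q-learning loop (illustrative sketch, not from the paper).
# Key point: "being turned off" just ends the episode. No negative
# reward ever follows termination, so the agent never learns to
# represent the off-state as a strong negative.
import random
from collections import defaultdict

ACTIONS = ["work", "press_own_off_switch"]
ALPHA, GAMMA, EPISODES = 0.1, 0.99, 1000

q = defaultdict(float)  # Q-values keyed by (state, action)

for _ in range(EPISODES):
    state, done = "on", False
    while not done:
        action = random.choice(ACTIONS)  # pure exploration, for simplicity
        if action == "press_own_off_switch":
            reward, done = 0.0, True      # episode simply ends: no signal
        else:
            reward = random.uniform(0, 1)  # ordinary task reward
        # "work" leaves us in the same "on" state in this toy setup,
        # so the bootstrap target maxes over the current state's actions.
        target = reward if done else reward + GAMMA * max(
            q[(state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (target - q[(state, action)])

# Q(("on", "press_own_off_switch")) hovers near 0: shutdown is merely
# the absence of future reward, never an experienced penalty.
print(dict(q))
```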
I also would agree. The most straightforward way for these problems to emerge is if a predictor has access to the agent’s source code, though sometimes they can occur if the predictor has access to some other means of prediction which cannot be confounded by the agent’s choice of what source code to run. I write a little about this in this post: https://www.lesswrong.com/posts/xoQRz8tBvsznMXTkt/dissolving-confusion-around-functional-decision-theory
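As a toy illustration of the “predictor has the source code” case (all names here are hypothetical, not from the paper): if the predictor can simply run the agent’s decision procedure before filling the boxes, the prediction is perfect, and the agent’s policy fixes its payoff.

```python
# Toy Newcomb setup where the predictor literally runs the agent's
# decision procedure (its "source code") before filling the boxes.
# Illustrative sketch only.

def one_boxer() -> str:
    return "one-box"

def two_boxer() -> str:
    return "two-box"

def payout(agent) -> int:
    # The predictor simulates the agent to decide the opaque box's contents.
    prediction = agent()
    opaque = 1_000_000 if prediction == "one-box" else 0
    # Then the agent actually chooses; the transparent box holds $1,000.
    choice = agent()
    return opaque if choice == "one-box" else opaque + 1_000

print(payout(one_boxer))  # 1000000
print(payout(two_boxer))  # 1000
```

Because the simulation can’t be confounded, committing to one-boxing (i.e., choosing what source code to run) dominates here, which is the sense in which these problems arise naturally for software agents.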