I strongly endorse this line of thinking. The current moment is confirming my position that we should invest more dedicated effort in finding the simplest, easiest-to-digest ways to communicate all these details.
I don’t agree with “simplest”; I think simplicity is often instrumental to “easy to digest” but isn’t helpful in and of itself. You can’t, and shouldn’t, treat writing like computer code, compressing it to as few words as possible while keeping the logic theoretically intact.
When you look at Yudkowsky’s writing, you often see him explaining something 2-4 different ways instead of just once. Giving many distinct examples of a concept is extremely helpful: it sets the reader up to integrate the concept into their mind in multiple different ways, which prepares them to actually operationalize it in the real world.
It appears we are working with different intuitions about simplicity. I claim that simplicity is helpful in and of itself, but is also orthogonal to correctness, so we do run the risk of simplifying into uselessness.
I don’t instinctively associate simple with short. While short is also a goal, I claim that an explanation that relies on multiple concrete examples is much simpler than a dense, abstract explanation, even if the latter is shorter.
What I want from simplicity is things like:
Direct: as few inferential steps as possible, with the ideal being 0.
Atomic: I don’t want the simple explanation to rely on other pre-existing concepts.
Plain: avoid stuff like memes from the rationalsphere (even useful ones, like names for concepts).
The reason I want simplicity, by which I mean things like the above, is that there is an urgent need to lower the attention and effort thresholds for understanding that there is even an issue to be considered. This is especially true if we do not reject Eliezer’s AI ban treaty proposal out of hand, because that proposal requires congresspeople, their staffers, and mid-level functionaries in the State Department to be able to wedge the AI doom arguments into their heads despite lacking any prior motivation to know about them.
Another way to say what I want is a distillation of the AI doom perspective. Well-distilled ideas are simpler than undistilled ones, and while we don’t have enough causal information to match well-distilled scientific theories, I think we have enough to produce simpler correct arguments for why doom specifically is an issue.
I agree wholeheartedly. The current situation with AI safety seems very unusual: AI risk is a simple and easy thing to understand, and yet so few people succeed at explaining it properly.
People even end up being afraid to try explaining it to someone for the first time, for fear of giving a bad first impression. If there were a clear-cut path to doing it right, everything could be different.