Oh, and the second block I talked about is relevant because it means you can’t use an algorithm like the one I discussed in the first block. You can try to combine your statements with statements that have exceptionally low probability for the regions you want to “delete”, but those statements may have odd properties elsewhere.
For example, you might issue a clarification B which scores the bad regions at 1/1,000,000. However, B also incidentally scores a completely unrelated region at 1,000. Now you have a different problem.
You run into a few different risks here.
The people I called “multipliers” in my above note on combining layers might multiply these scores together and find the incidental region more likely than your intended selection.
People who add layers, or who just go with the most recent clarification, might end up selecting only the incidental region.
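To make the multiplier risk concrete, here is a hypothetical toy sketch (region names and all numbers are invented for illustration) of a listener who combines layers by multiplying per-region scores:

```python
# Hypothetical sketch of the "multiplier" listener; all values invented.

# Statement A: the speaker's original scores over meaning-regions.
A = {"intended": 10.0, "bad": 10.0, "incidental": 0.1}

# Clarification B: crushes the bad region, but incidentally
# assigns a huge score to an unrelated region.
B = {"intended": 1.0, "bad": 1e-6, "incidental": 1000.0}

# A "multiplier" combines layers by multiplying scores per region.
combined = {region: A[region] * B[region] for region in A}

print(max(combined, key=combined.get))  # the incidental region wins

# A "most recent" listener looks only at the latest clarification,
# and lands on the incidental region too.
print(max(B, key=B.get))
```

Even though B was only meant to delete the bad region, its incidental peak (1000 × 0.1 = 100) now outweighs the intended region (10 × 1 = 10) under multiplication.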
If you issue a large number of clarifications, you might end up in one of two bad attractors:
A smooth probability space, in which every outcome looks roughly equally likely. Here, no one has any clue what you are saying.
A very bumpy probability space, in which listeners can’t tell which of many mutually exclusive ideas you’re pointing at.
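To make the two attractors concrete, here is a hypothetical sketch (all numbers invented) contrasting a smooth, near-uniform distribution with a bumpy, multi-peaked one:

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Smooth attractor: every region looks basically as likely.
smooth = [0.25, 0.25, 0.25, 0.25]

# Bumpy attractor: several sharp, mutually exclusive peaks.
bumpy = [0.34, 0.33, 0.32, 0.01]

print(entropy(smooth))  # 2.0 bits: maximal uncertainty, no clue at all

# In the bumpy case the top candidates are nearly tied, so a listener
# still can't commit to one idea.
print(max(bumpy) - sorted(bumpy)[-2])  # tiny margin between the top two peaks
```

In both attractors the listener fails, but for different reasons: the smooth case carries no information at all, while the bumpy case offers several sharp candidates with no way to choose among them.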