Paul: Sounds like you’re just describing the “thought faster” part of the CEV process, i.e., “What would you decide if you could search a larger argument space for reasons?” However, it seems to me that you’re idealizing this process very highly, and overlooking such questions as “What if different orderings of the arguments would end up convincing us of different things?” which a CEV has to handle somehow — e.g. by weighting the possibilities by length, combining them into a common superposition, and acting only where strong coherence exists… but now we’re heading into strictly Friendly AI territory.
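(To make the handwave concrete: here is a minimal toy sketch of that aggregation scheme. Everything in it is an assumption for illustration — verdicts per argument ordering are given as input, length-weighting is modeled as a simple 2**-length factor, and the coherence threshold is arbitrary.)

```python
from collections import defaultdict

def aggregate_verdicts(orderings, coherence_threshold=0.8):
    """Combine the verdicts reached by many argument orderings into one
    weighted 'superposition', acting only where coherence is strong.

    `orderings` is a list of (verdict, length) pairs: the conclusion a
    given ordering of arguments would yield, and that ordering's length.
    Shorter orderings get exponentially more weight (2 ** -length), one
    simple stand-in for the length-weighting mentioned above.
    """
    weights = defaultdict(float)
    for verdict, length in orderings:
        weights[verdict] += 2.0 ** -length
    total = sum(weights.values())
    top_verdict, top_weight = max(weights.items(), key=lambda kv: kv[1])
    coherence = top_weight / total
    # Act only where the weighted orderings strongly agree;
    # otherwise output nothing (no strong coherence exists).
    return top_verdict if coherence >= coherence_threshold else None

# Three orderings agree; one longer dissenting ordering is outweighed.
print(aggregate_verdicts([("A", 1), ("A", 2), ("A", 2), ("B", 3)]))  # → A
# An even split falls below the threshold, so no action is taken.
print(aggregate_verdicts([("A", 1), ("B", 1)]))  # → None
```

This sketch dodges the hard part — computing what verdict an ordering of arguments would actually produce — which is exactly the part the reply below says must be specified.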
If you say “reasons” or “reasons for reasons”, that’s philosophy written by humans for humans; if you want to put the weight of your theory on “reasons” you have to tell me how to compute a “reason”, or how to make a superintelligence compute something that computes a reason.