On accepting arguments when you have limited computational power.

It would seem rational to accept any argument in which you can find no fallacy; but that policy leads straight into problems such as Pascal’s mugging and other exploits.

I’ve realized something I had been doing subconsciously, trivial as it sounds: for me to accept an argument as true, it is not enough that I find no error in it. The argument must also be structured such that I would expect to have found an error if it were invalid (or I must first restructure it into such a form myself). That is how mathematical proofs work: they are structured so that finding an error requires little computational power, only knowledge of the rules and reliable application of them; in the extreme case an entirely unintelligent machine can check a proof.
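
To make that concrete: such a checker needs nothing beyond pattern matching. The sketch below is a toy illustration under assumptions of my own choosing: propositions are plain strings, an implication is a ("->", antecedent, consequent) tuple, and the only inference rule is modus ponens. A real checker handles more rules, but the point stands that no intelligence is required to verify each step.

```python
# A toy sketch of mechanical proof checking (illustrative assumptions:
# propositions are strings, implications are ("->", antecedent, consequent)
# tuples, the only rule is modus ponens). The checker knows nothing about
# the subject matter; it only pattern-matches.

def check_proof(premises, steps):
    """Accept a step only if it is already derived or follows from two
    derived lines by modus ponens; report the first step that fails."""
    derived = list(premises)
    for i, step in enumerate(steps):
        follows = step in derived or any(
            ("->", p, step) in derived for p in derived
        )
        if not follows:
            return False, f"step {i}: {step!r} does not follow"
        derived.append(step)
    return True, "proof checks out"

# Premises: A, A -> B, B -> C.  Deriving B and then C checks out;
# jumping straight to C does not, because B has not been derived yet.
premises = ["A", ("->", "A", "B"), ("->", "B", "C")]
print(check_proof(premises, ["B", "C"]))
print(check_proof(premises, ["C"]))
```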

In light of this I propose that those who want to make a persuasive argument should try to structure it so that flaws, if there are any, would be easy to find. The same goes for thought experiments and hypothetical situations. Those seem rather often to be constructed with the entirely opposite goal in mind: to obstruct verification, or to discourage the reader from even looking for flaws.

Something else, tangentially related to arguments: faulty models are a prime cause of decision errors; yet faulty models are the staple of thought experiments, and nobody raises an eyebrow, since all models are ultimately imperfect.

However, to accept an argument based on an imperfect model, one must be able to propagate the model’s error correctly and estimate the error in the final conclusion, because a faulty model may be constructed so that it differs only insubstantially from reality, yet in a way that makes the difference diverge massively along the chain of reasoning. My example of this is the trolley problems. The faults of the original model are nothing out of the ordinary: simplifying assumptions about the real world, perfect information, and so on. Normally a model can have those faults and still yield a reasonably close outcome. Here the end result is the throwing of fat people onto tracks, the cutting up of travellers for organs, and similar behaviours which we intuitively know we could live a fair lot better without. How does that happen? In the real world, strongly asymmetrical relations of the form ‘the death of 1 person saves 10 people’ are very rare (an emergent property of the complexity of the real world that is lacking in the imaginary worlds of trolley problems), while decision errors are not nearly so rare, so most of the people killed to save others would end up killed in vain.
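
A back-of-the-envelope calculation shows how that plays out. The numbers below are purely illustrative assumptions of mine, not data; the point is only that when genuinely asymmetric situations are much rarer than misjudgements, most sacrifices turn out to be in vain.

```python
# Illustrative assumptions (not data): genuine "1 death saves 10" situations
# are rare, while misreading an ordinary situation as one of them is
# comparatively common.
p_genuine = 0.001   # assumed base rate of truly asymmetric situations
p_mistake = 0.02    # assumed rate of wrongly "detecting" such a situation

# Among all cases where the agent decides to sacrifice someone, the share
# that were genuine (Bayes' rule, assuming genuine cases are always acted on):
justified = p_genuine / (p_genuine + (1 - p_genuine) * p_mistake)
print(f"sacrifices that actually save anyone: {justified:.1%}")   # ~4.8%
```

Under these made-up numbers, roughly 95% of the sacrifices save nobody, even though each individual decision looked like a clear ten-for-one trade from the inside.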

I don’t know how models should be structured so as to facilitate the propagation of the model’s error, but it seems necessary for arguments based on models to be convincing.
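
One generic possibility, sketched below with illustrative assumptions rather than as a worked-out proposal: replace each point assumption of the model with a distribution and push samples through the whole chain of reasoning, so that the conclusion comes with an error estimate instead of a single confident number.

```python
# Monte Carlo propagation of model error: a generic sketch, with all
# parameters being illustrative assumptions.
import random

def net_lives_saved(lives_saved, p_success):
    """Toy chain of reasoning: net effect of sacrificing one person."""
    return lives_saved * p_success - 1

samples = []
for _ in range(100_000):
    lives_saved = random.choice([0, 1, 10])   # how asymmetric the situation really is
    p_success = random.uniform(0.1, 0.9)      # how reliably the intervention works
    samples.append(net_lives_saved(lives_saved, p_success))

mean = sum(samples) / len(samples)
harmful = sum(s < 0 for s in samples) / len(samples)
print(f"mean net lives saved: {mean:.2f}")
print(f"share of sampled worlds where the sacrifice is net harmful: {harmful:.1%}")
```

The point of such a structure is that the model’s error becomes something the reader can inspect and vary, rather than something silently absorbed into a confident conclusion.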