it needs to plug into the mathematical formalizations one would use to do the social science form of this.
Could you clarify what you mean by a “social science form” of a mathematical formalisation? I’m not familiar with this.
they’re right to look at people funny even if they have the systems programming experience or what have you.
It was expected and understandable that people looked askance at the writings of a multi-skilled researcher whose new ideas they were not yet familiar with. Let’s move on from first impressions.
simulation
If by “simulation” we mean a model that is computed to estimate a factor on which further logical deduction steps are based, that would connect with Forrest’s work (though it’s not really about multi-agent simulation).
Based on what I learned from Forrest, we need to distinguish the ‘estimation’ factors from the ‘logical entailment’ factors. The notion of “proof” applies only to that which can be logically entailed; everything else is assessment. In each case, we need to be sure we are doing the modelling correctly.
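The estimation/entailment split can be made concrete in code. The sketch below is purely illustrative (the types, names, and toy deduction rule are my assumptions, not Forrest’s formalism): an `Estimate` carries empirical uncertainty and can never be “proved”, while an `Entailment` is produced only by a deduction step from stated premises.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Estimate:
    """A quantity obtained by simulation or measurement: carries uncertainty."""
    value: float
    confidence: float  # in [0, 1]; never exactly 1 for empirical results

@dataclass(frozen=True)
class Entailment:
    """A conclusion derived purely by a logical rule from stated premises."""
    claim: str
    premises: tuple

def deduce(premises):
    # Toy deduction: modus ponens over strings, "P" plus "P -> Q" yields "Q".
    facts = {p for p in premises if "->" not in p}
    derived = []
    for p in premises:
        if "->" in p:
            ante, cons = (s.strip() for s in p.split("->"))
            if ante in facts:
                derived.append(Entailment(cons, tuple(premises)))
    return derived

# An estimate can inform which premises we accept, but "proof" status
# attaches only to the entailment step, never to the estimate itself.
growth = Estimate(value=1.7, confidence=0.8)
premises = ["system persists", "system persists -> convergent behaviors manifest"]
conclusions = deduce(premises)
```

The point of the type split is that no function here can turn an `Estimate` into an `Entailment`: the empirical and the deductive are kept apart by construction.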
For example, it could be argued that step ‘b’ below is about logical entailment, though according to Forrest most would argue that it is an assessment. Given that it depends on both physics and logic (via comp-sci modelling), it depends on how one regards the notion of ‘observation’, and whether that is empirical or analytic observation.
- b; If AGI/APS is permitted to continue to exist, then it will inevitably, inexorably, implement and manifest certain convergent behaviors.
- c; that among these inherent convergent behaviors will be at least all of:
− 1; to/towards self existence continuance promotion.
− 2; to/towards capability-building capability, an increase-seeking capability, a capability of seeking increase, capability/power/influence increase, etc.
− 3; to/towards shifting ambient environmental conditions/context to/towards favoring the production of (variants of, increases of) its artificial substrate matrix.
Note again: the above is not formal reasoning. It is a super-short description of what two formal reasoning steps would cover.
But if we can take the type signature from a simulation, then we can attempt formal reasoning about its possibility space given the concrete example. If we don’t have precise types, we can’t reason through these systems. Step ‘b’ seems to me to be a falsifiable claim that cannot be determined true or false by pure rational computation; it requires active investigation. We have evidence for it, but that evidence needs to be cited.
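What “taking the type signature from a simulation” could buy us might be sketched as follows. This is a minimal toy, assuming nothing about any real AGI model: the state names, actions, and transition table are invented for illustration. Once the state and action types are pinned down as finite sets, the possibility space becomes something we can enumerate and check exhaustively rather than argue about informally.

```python
# Toy typed simulation: finite state and action sets stand in for a
# precisely-typed simulation interface. All names are illustrative.
States = ("dormant", "active", "expanding")
Actions = ("wait", "act")

def step(state: str, action: str) -> str:
    # Toy transition function standing in for one simulation step.
    table = {
        ("dormant", "wait"): "dormant",
        ("dormant", "act"): "active",
        ("active", "wait"): "active",
        ("active", "act"): "expanding",
        ("expanding", "wait"): "expanding",
        ("expanding", "act"): "expanding",
    }
    return table[(state, action)]

def reachable(start: str, horizon: int) -> set:
    """All states reachable within `horizon` steps: the possibility space."""
    frontier = {start}
    seen = set(frontier)
    for _ in range(horizon):
        frontier = {step(s, a) for s in frontier for a in Actions}
        seen |= frontier
    return seen
```

With the types fixed, a claim like “the expanding state is reachable from dormancy” becomes checkable by exhaustive search; without precise types there is nothing to enumerate over, which is the gap the comment above points at.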
How does your approach compare with https://www.metaethical.ai/?