Joshua,
Thank you for the feedback.
I do need to increase the emphasis on the focus, which is the first premise you mentioned. I left that out of this draft intentionally, to elicit feedback on the viability of, and interest in, the model concept.
I will use formal techniques, though I have not yet settled on which one(s). At the moment, I am leaning toward the processes around use case development to decompose current AI models into their componentry. For the weighting and gap calculations, some statistical methods should help.
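To give a rough sense of the kind of weighting and gap calculation I have in mind, here is a minimal sketch. The component names, weights, and maturity scores are placeholders for illustration only, not actual model data:

    # Hypothetical sketch: weighted capability-gap score for a decomposed model.
    # Component names, weights, and maturity estimates are illustrative only.

    components = {
        # name: (weight, estimated maturity in [0, 1])
        "pattern_recognition": (0.30, 0.70),
        "planning":            (0.25, 0.40),
        "natural_language":    (0.20, 0.55),
        "transfer_learning":   (0.25, 0.25),
    }

    def weighted_gap(components):
        """Weighted average of remaining gaps (1 - maturity) across components."""
        total_weight = sum(w for w, _ in components.values())
        return sum(w * (1.0 - m) for w, m in components.values()) / total_weight

    print(f"Overall weighted gap: {weighted_gap(components):.2f}")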
I am mulling over Bill Hibbard’s 2012 AGI papers, “Avoiding Unintended AI Behaviors” and “Decision Support for Safe AI Design” (http://www.ssec.wisc.edu/~billh/g/mi.html), as well as some PIBEA findings (e.g., http://www.cs.umb.edu/~jxs/pub/cec11-prospect.pdf), to use as a framework for the component model. The Pareto front element is particularly interesting when considered alongside graph theory.
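On the Pareto front element, here is a minimal sketch of filtering non-dominated component configurations on two objectives (say, capability coverage and tractability). The configurations and scores are made up and this is not PIBEA itself, just the basic non-domination test:

    # Hypothetical sketch: extracting a Pareto front (non-dominated set) from
    # candidate configurations scored on two objectives to be maximized.
    # Names and values are illustrative only.

    candidates = {
        "config_A": (0.8, 0.3),
        "config_B": (0.6, 0.6),
        "config_C": (0.4, 0.7),
        "config_D": (0.5, 0.5),   # dominated by config_B
    }

    def dominates(a, b):
        """a dominates b if a >= b on every objective and > on at least one."""
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    pareto_front = [
        name for name, score in candidates.items()
        if not any(dominates(other, score)
                   for other_name, other in candidates.items() if other_name != name)
    ]
    print(pareto_front)  # ['config_A', 'config_B', 'config_C']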
I am considering how the rate modifiers can be incorporated into the predictive model. This will help to identify which events the community should look for, and how a rate modifier occurrence in one area, e.g., pattern recognition, impacts other aspects of the model. We clearly do not know all of the components, but we do know the major disciplines that will contribute. As noted, the model will be extensible to allow discoveries to be incorporated, increasing its accuracy.
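To make the rate-modifier idea slightly more concrete, one hypothetical way to represent how an event in one area ripples through related components is a small dependency graph with coupling factors. Everything here (components, couplings, the triggering modifier) is a placeholder, not the actual model:

    # Hypothetical sketch: propagating a rate modifier through a dependency graph.
    # Component names and coupling factors are illustrative only.

    # edges: source -> {dependent: coupling factor in [0, 1]}
    dependencies = {
        "pattern_recognition": {"planning": 0.5, "natural_language": 0.3},
        "planning":            {"transfer_learning": 0.4},
    }

    def propagate(rates, source, modifier, deps):
        """Apply a rate modifier to one component, then a damped share to its dependents."""
        rates = dict(rates)
        rates[source] *= modifier
        for dependent, coupling in deps.get(source, {}).items():
            # Dependents see only a fraction of the change, scaled by the coupling.
            rates[dependent] *= 1.0 + (modifier - 1.0) * coupling
        return rates

    baseline = {"pattern_recognition": 1.0, "planning": 1.0,
                "natural_language": 1.0, "transfer_learning": 1.0}
    print(propagate(baseline, "pattern_recognition", 1.2, dependencies))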
The general idea is to establish a predictive model with assumed margins of error and functionality: a formalized “stick in the ground” from which improvements are made. If the model is maintained and enhanced with new discoveries, the margin of error will continue to decline and confidence levels will increase. Such a model also provides context for research and identifies potential areas of study.
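As a simple illustration of the declining margin of error, here is how the margin around an estimate narrows as new observations are folded in. The observation values themselves are made up:

    # Hypothetical sketch: margin of error shrinking as observations accumulate.
    # The observation values are placeholders, not real data.

    import statistics

    def margin_of_error(observations, z=1.96):
        """Approximate 95% margin of error for the mean of the observations."""
        n = len(observations)
        return z * statistics.stdev(observations) / (n ** 0.5)

    observations = [4.8, 5.1, 5.3]
    for new_obs in [5.0, 4.9, 5.2, 5.1]:
        observations.append(new_obs)
        print(len(observations), round(margin_of_error(observations), 3))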
One potential use of the model is to identify areas of research that may be obviated. If a requirement is consistently satisfied through unexpected methods, it can be removed from consideration in the area where it was originally conceived. This also has the potential to provide insights into the original space.