There are some good ideas in this.
The paper needs focus. One possibility is the technique described in the abstract ("The concept of MSI is dissected… a model is constructed…"). Is there a specific formal technique that you are going to use?
Another possibility is a review of prediction techniques, with an attempt to apply each one to full AI, or references that do so. Sotala and Armstrong surveyed predicted dates for AI; you could survey the different techniques that could be used, or that have been used.
It seems that the section of the abstract that analyzes accelerated change ("Rate modifiers") could be omitted as off-topic for either of the two possibilities above. Given what appears to be the main topic, I would suggest keeping the review of AI risk short and not going into too much detail on specific technologies like AIXI or the Goedel machine. I am also unsure about the componentry section, given that we have no idea what components might be needed.
Joshua,
Thank you for the feedback.
I do need to increase the emphasis on the focus, which is the first point you raised. I deliberately left that out of this draft in order to elicit feedback on the viability of, and interest in, the model concept.
I will use formal techniques, though I have not yet settled on which one(s). At the moment, I am leaning toward the processes around use case development to decompose current AI models into their componentry. For the weighting and gap calculations, some statistical methods should help.
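To make the weighting and gap idea concrete, here is a minimal sketch of one way such a calculation could look. The component names, weights, and maturity estimates are purely illustrative assumptions, not figures from the paper:

```python
components = {
    # name: (importance weight, estimated maturity in [0, 1])
    # All names and numbers below are hypothetical placeholders.
    "pattern_recognition": (0.30, 0.70),
    "planning":            (0.25, 0.40),
    "natural_language":    (0.25, 0.55),
    "transfer_learning":   (0.20, 0.20),
}

def weighted_gap(components):
    """Weighted average of the remaining gaps (1 - maturity)."""
    total = sum(w for w, _ in components.values())
    return sum(w * (1.0 - m) for w, m in components.values()) / total

gap = weighted_gap(components)
```

A gap of 0 would mean every component is fully mature; as discoveries raise individual maturity estimates, the aggregate gap declines.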
I am mulling over Bill Hibbard's 2012 AGI papers, "Avoiding Unintended AI Behaviors" and "Decision Support for Safe AI Design" (http://www.ssec.wisc.edu/~billh/g/mi.html), as well as some PIBEA findings (e.g., http://www.cs.umb.edu/~jxs/pub/cec11-prospect.pdf), as a framework for the component model. The Pareto front element is particularly interesting when considered together with graph theory.
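For readers unfamiliar with the Pareto front idea mentioned above, a short sketch: given candidate points scored on two objectives (both maximized), the front is the set of points not dominated by any other. The sample points are invented for illustration, and the sketch assumes distinct points:

```python
def pareto_front(points):
    """Return the points not dominated by any other point, where a
    point is dominated if some distinct point is at least as good on
    both objectives (maximization). Assumes points are distinct."""
    return [p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                       for q in points)]

# Hypothetical candidates scored on two objectives.
candidates = [(1, 5), (2, 4), (3, 3), (2, 2), (4, 1)]
front = pareto_front(candidates)
```

Here (2, 2) is dominated by (3, 3) and drops out; the remaining four points form the front, which could then serve as the nodes of a graph-theoretic component analysis.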
I am considering how the rate modifiers can be incorporated into the predictive model. This will help identify which events the community should watch for, and how a rate modifier occurring in one area (e.g., pattern recognition) impacts other aspects of the model. We clearly do not know all of the components, but we do know the major disciplines that will contribute. As noted, the model will be extensible, so that discoveries can be incorporated to increase its accuracy.
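One way to sketch how a rate modifier in one area might ripple through the model is a damped propagation over a component dependency graph. The graph edges, component names, and damping factor below are all illustrative assumptions:

```python
from collections import deque

# component -> downstream components whose progress it accelerates
# (edges, names, and damping are hypothetical placeholders)
accelerates = {
    "pattern_recognition": ["planning", "natural_language"],
    "planning": ["transfer_learning"],
}

rates = {c: 1.0 for c in ("pattern_recognition", "planning",
                          "natural_language", "transfer_learning")}

def apply_rate_modifier(component, factor, damping=0.5):
    """Multiply the component's progress rate by `factor`, then
    propagate a damped boost to downstream components (BFS over an
    assumed acyclic dependency graph)."""
    queue = deque([(component, factor)])
    while queue:
        name, f = queue.popleft()
        rates[name] *= f
        for child in accelerates.get(name, []):
            queue.append((child, 1.0 + (f - 1.0) * damping))

# A hypothetical breakthrough doubles the pattern recognition rate.
apply_rate_modifier("pattern_recognition", 2.0)
```

A single event thus updates not only its own component but every downstream one, which is exactly the kind of cross-area impact the model would need to surface.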
The general idea is to establish a predictive model with assumed margins of error and functionality: a formalized "stick in the ground" from which improvements are made. If the model is maintained and enhanced with discoveries, the margin of error will continue to decline and confidence levels will increase. Such a model also provides context for research and identifies potential areas of study.
One potential use of the model is to identify areas of research that may be obviated. If a requirement is consistently satisfied through unexpected methods, it can be removed from consideration in the area where it was originally conceived. This also has the potential to provide insights into the original space.
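The obviation bookkeeping described above can be sketched as a small pruning routine. The requirement and area names are hypothetical, chosen only to show the mechanics:

```python
# requirement -> research area where it was originally conceived
# (all names below are hypothetical placeholders)
home_area = {
    "real_time_object_tracking": "computer_vision",
    "commonsense_inference": "knowledge_representation",
}

open_requirements = {
    "computer_vision": {"real_time_object_tracking"},
    "knowledge_representation": {"commonsense_inference"},
}

def record_satisfaction(requirement, satisfying_area):
    """Remove a satisfied requirement from its home area's open list,
    and flag it when it was met from an unexpected direction."""
    home = home_area[requirement]
    open_requirements[home].discard(requirement)
    kind = "cross_area" if satisfying_area != home else "in_area"
    return (requirement, home, satisfying_area, kind)

result = record_satisfaction("commonsense_inference", "machine_learning")
```

The "cross_area" flag is where the insight back into the original space would come from: the home area learns that its requirement was met by methods it did not anticipate.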