Using your senses, you interact with the world and obtain information about molds, and separately about hallucinations. Then, when you eat a potato contaminated with a mold that produces hallucinogenic toxins and experience a fake sensory stimulus, you can reason abstractly that this stimulus is not real but is instead a hallucination.
Or in other words, I believe it's logically impossible for any AI to self-improve without a large amount of input data and/or hardcoded mechanics. Operating on this "external" data, you can validate abstract reasoning to a limited degree, and with that validated abstract reasoning, fix the hardcoded data.
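To make that loop concrete, here is a minimal toy sketch of the idea in Python. Everything in it (the rule names, the fact tables, the percept labels) is a hypothetical illustration of the bootstrapping step described above, not a real architecture:

```python
# Hardcoded starting rule: trust every percept as real.
hardcoded_rules = {"trust_all_percepts": True}

# Knowledge acquired separately through the "senses" (external data):
# one fact about molds, one fact about hallucinations.
knowledge = {
    "moldy_potato": {"produces": "hallucinogenic_toxin"},
    "hallucinogenic_toxin": {"causes": "fake_percepts"},
}

def explain_percept(percept, recently_eaten):
    """Cross-reference independently learned facts to decide whether a
    percept is better explained as a hallucination than as reality."""
    for food in recently_eaten:
        toxin = knowledge.get(food, {}).get("produces")
        if toxin and knowledge.get(toxin, {}).get("causes") == "fake_percepts":
            return "hallucination"
    return "real"

# A strange stimulus arrives after eating the moldy potato.
verdict = explain_percept("pink_elephant", recently_eaten=["moldy_potato"])

# The reasoning validated against external data contradicts the
# hardcoded rule, so the hardcoded rule itself gets fixed.
if verdict == "hallucination" and hardcoded_rules["trust_all_percepts"]:
    hardcoded_rules["trust_all_percepts"] = False

print(verdict)          # -> hallucination
print(hardcoded_rules)  # -> {'trust_all_percepts': False}
```

The point of the sketch is only the order of operations: the abstract inference is checked against externally acquired facts first, and only then is it trusted enough to overwrite the hardcoded defaults.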
Instead of working against the problem, why not work with it?