This comment was rejected on Overwhelming Superintelligence, and I would like to know where to post it for feedback:
One thing has always struck me as strange about the idea of superintelligence (or even general intelligence) emerging from Gen AI: if the best of the training material may be largely wrong and wrongly understood (the tip of the iceberg being Ioannidis's 2005 paper "Why Most Published Research Findings Are False"), then how could any algorithm iterate over that material and make sense of a contextual situation, and decide well about it, as effectively as the ~true human experts, with their lived knowledge and wisdom of individual fields/sub-fields/sub-sub-fields/..., who may not necessarily be the most published individuals? Maybe it could work for purely logical, closed domains within mathematics and computer science, but it seems an impossibility in the nebulous real world of everything else.