then corrections are spaghetti-coded on top to prevent particular failures with data from real experiments
My guess would be that the failures would be quite systematic, and would reflect the absence of substantial algorithms. That would suggest that you either have to come up with more algorithms, and/or you have to learn them from data. But to learn them from data without coming up with the algorithms, or with algorithmic search spaces that sufficiently promote the relevant pieces, you need a lot of data; and brain algorithms that work on a time scale of an hour or a day have correspondingly 10^4 or 10^5 times less data feasibly available compared to ~second-long events.
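The 10^4-10^5 figure can be sanity-checked with back-of-the-envelope arithmetic: count how many non-overlapping events of each duration fit into a lifetime of experience. The lifetime figure below is an illustrative assumption, not something from the comment above.

```python
# Back-of-the-envelope: how many independent events of a given duration
# fit into a lifetime of experience? The 30-year figure is an assumption
# chosen purely for illustration.
SECONDS_PER_LIFETIME = 30 * 365 * 24 * 3600  # ~9.5e8 seconds

for label, duration_s in [("~1 second", 1), ("~1 hour", 3600), ("~1 day", 86400)]:
    n_events = SECONDS_PER_LIFETIME // duration_s
    print(f"{label:>10}: ~{n_events:.0e} events")
```

Second-scale events outnumber hour-scale ones by a factor of 3600 (~10^3.6) and day-scale ones by 86400 (~10^4.9), which is where the "10^4 or 10^5 times less data" estimate comes from.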