Question: what’s the relative amount of compute you are imagining SmartVault and the helper AI having? Both the same, or one having a lot more?
It will depend on how much high-quality data you need to train the reporter. Probably it's a small fraction of the data you need to train the predictor, so when generating each reporter datapoint you can afford to use many times more compute than the predictor usually uses. I often imagine the helpers having 10-100x more computation time.
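The budget argument above can be made concrete with some arithmetic. The numbers below are purely illustrative assumptions (the source gives only the 10-100x range): if the reporter needs a small fraction of the predictor's training data, spending many times more compute per reporter datapoint still keeps the total cost comparable.

```python
# Hypothetical budget arithmetic; all quantities are assumptions for illustration.
predictor_datapoints = 1_000_000   # assumed predictor training-set size
cost_per_point = 1.0               # normalized compute per predictor datapoint

reporter_datapoints = 10_000       # assumed: reporter needs 1% as much data
helper_multiplier = 100            # helpers get 10-100x compute; take the high end

predictor_total = predictor_datapoints * cost_per_point
reporter_total = reporter_datapoints * cost_per_point * helper_multiplier

# Even at 100x compute per reporter datapoint, the total reporter budget
# matches the predictor's, because only 1% as many datapoints are needed.
print(reporter_total / predictor_total)  # → 1.0
```

Under these assumptions the two budgets come out equal, which is why the answer treats a 10-100x per-datapoint multiplier as affordable.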