That’s fair. Depending on your stance on Moore’s Law or supercomputers, 400 trillion parameters might or might not be plausible (not really, IMO). But this assumes no advances in model architecture (maybe changes to the tokenizer?) that would drastically improve performance on multiplication and other kinds of math.
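For scale, a rough back-of-envelope (using GPT-3's published 175B parameter count as a baseline, and a hypothetical Moore's-Law-style 2-year doubling cadence, which is an assumption, not a prediction):

```python
import math

baseline = 175e9   # GPT-3 parameter count
target = 400e12    # the 400 trillion figure under discussion

doublings = math.log2(target / baseline)  # how many 2x jumps are needed
years = 2 * doublings                     # assuming one doubling every 2 years

print(f"{doublings:.1f} doublings, ~{years:.0f} years")  # 11.2 doublings, ~22 years
```

Roughly eleven doublings, so even under generous hardware-scaling assumptions it is a multi-decade gap, which is why architectural improvements matter more than raw growth here.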
Looking at it in a purely monetary way, a soul (being an eternal representation of you and all) would have an effectively infinite value, while $10 is $10. Even if there is only a 0.001% chance of a soul existing (e.g. God existing, or some sort of Matrix-like scenario), the expected value of keeping your soul would still be: infinite ⋅ 0.00001 = infinite. Unless you view the value of a soul as non-infinite, and/or the chance that it exists as exactly 0%, it doesn’t make sense to sell it.
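The expected-value comparison can be checked directly; IEEE floats even handle the infinity for us (the 0.001% figure is just the example probability from above):

```python
p_soul = 0.001 / 100          # the 0.001% chance that a soul exists
value_of_soul = float('inf')  # treating an eternal soul as infinitely valuable

ev_keep = p_soul * value_of_soul  # inf * any positive finite number = inf
ev_sell = 10.0

print(ev_keep > ev_sell)  # True: any nonzero probability dominates the $10
```

The comparison only flips if `p_soul` is exactly 0 or `value_of_soul` is finite, which is the point of the argument.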
IMO it would really depend on how many parameters the model has. If the jump from GPT-3 to GPT-4 is on the order of 10-100x, then we could potentially see similar gains for multiplication. GPT-3 (175B) can do 2-digit multiplication with ~50% accuracy, so 5-6 digits might be possible. It really depends on how well the GPT architecture scales in the future.
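As a quick back-of-envelope for that jump (175B is GPT-3's published count; the 10-100x multiplier is just the guess above):

```python
gpt3_params = 175e9  # GPT-3's parameter count

# Hypothetical 10x and 100x jumps for the next generation
low, high = 10 * gpt3_params, 100 * gpt3_params

print(f"{low / 1e12:.2f}T to {high / 1e12:.1f}T parameters")  # 1.75T to 17.5T
```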
They might have been talking about the total amount of people with the potential to become better than you at the specific thing rather than the pure percentage of people who would be if everyone tried.
I’ve always been interested in meditating, and quite like it, but have never really had it click. Would you say that the ritual you do for meditation helps signal to your brain that you’re in the “process” of meditating? Also, when you described the “opening of the hands” motion in your mind, is that a concrete thing (for lack of a better word) that you think, or more of a phase transition from the not-meditating state to the meditating state?
Some event D, caused by both attributes, has been conditioned on: new introductions have improbable attribute combinations because your friend seeks those combinations out.
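A minimal simulation of that selection effect (the attribute rates and the friend's selection probabilities are made-up numbers; only the structure matters):

```python
import random

random.seed(0)
n = 100_000
pop_both = intro_total = intro_both = 0

for _ in range(n):
    a = random.random() < 0.05  # rare attribute A, independent of B
    b = random.random() < 0.05  # rare attribute B, independent of A
    # D: the friend makes an introduction, strongly favoring the rare combination
    d = random.random() < (0.9 if (a and b) else 0.01)
    pop_both += a and b
    intro_total += d
    intro_both += d and a and b

print(f"P(A and B)         ~ {pop_both / n:.4f}")             # ~0.0025 in the population
print(f"P(A and B | intro) ~ {intro_both / intro_total:.2f}") # ~0.18 among introductions
```

Among the people you actually meet (those with D), the "improbable" combination shows up dozens of times more often than its base rate, even though A and B are statistically independent in the population.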
This reads quite a bit like a reverse Barnum effect: instead of assigning causality to vague descriptions and coincidences, people treat things that are actually linked (through their own doing) as coincidences.