That’s what I thought when reading this: the opposite of double (a 100% gain) is half (a 50% loss), so if you lose more than 50% on the negative coin toss result you will tend to lose money over time.
If the loss were less than half (say, 40%), you would gain money over time.
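You can check this with the geometric mean of the two outcomes: for a fair coin, long-run wealth grows only if sqrt((1 + gain) × (1 − loss)) exceeds 1. A quick sketch (the function name and parameters are just mine for illustration):

```python
import math

def per_toss_growth(gain, loss):
    """Long-run multiplicative growth per toss for a fair coin:
    the geometric mean of the winning and losing outcomes."""
    return math.sqrt((1 + gain) * (1 - loss))

# +100% gain vs 50% loss: each win/loss pair multiplies wealth by
# 2 * 0.5 = 1, so you break even in the long run.
print(per_toss_growth(1.0, 0.5))   # 1.0

# Losing 60% instead: growth factor below 1, so wealth decays.
print(per_toss_growth(1.0, 0.6))   # ~0.894

# Losing only 40%: growth factor above 1, so wealth grows.
print(per_toss_growth(1.0, 0.4))   # ~1.095
```

The point is that repeated bets compound multiplicatively, so the break-even loss against a 100% gain is exactly 50%, not whatever makes the arithmetic average zero.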
Speed of task execution is a separate development vector from Artificial Super Intelligence (ASI). Using a calculator as an example, being able to compute something a million times faster than a human doesn’t make it any smarter.
I thought that the risk of ASI is that it would outsmart us (humans) by doing things that we can’t comprehend or, if nefariously incentivised, finding vulnerabilities in our systems that we are not smart enough to predict.
Simply doing things that a human can do, but faster, is not ASI, unless I’m missing something?
I’m personally not convinced that the recent AI boom, which has mostly centred around LLMs (ChatGPT etc), has had much impact on the development of ASI. Are LLMs able to formulate more intelligent insights than the data on which they were trained? That is, within the text format, this is data that has all already been filtered through a human brain.
I would expect that a super intelligence would require direct access to the real world, not information that has been passed through a human filter. This may be achievable by training models on video and audio data, which is a more direct feed of the real world, but I would guess that giving an AI arms, legs, etc. that allow it to interact with the real world and experiment with things would make it learn much quicker.