Lower the learning rate when the simulator is less confident that its estimate of the real world is correct.
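One way to picture this is to scale the update size by the simulator's self-reported confidence. This is only an illustrative sketch, not a standard algorithm: the `sim_confidence` signal, the linear scaling rule, and the `min_scale` floor are all assumptions.

```python
def confidence_scaled_lr(base_lr, sim_confidence, min_scale=0.1):
    """Shrink the learning rate when the sim is less sure its
    real-world estimate is correct.

    base_lr        -- nominal learning rate
    sim_confidence -- assumed confidence signal in [0, 1]
                      (hypothetical; how it is produced is unspecified)
    min_scale      -- floor so updates never vanish entirely
    """
    scale = max(min_scale, min(1.0, float(sim_confidence)))
    return base_lr * scale
```

With a confidence of 1.0 the full rate is used; at 0.5 the rate is halved; below the floor the rate bottoms out at `min_scale * base_lr`.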
Adversarial examples can make an image classifier confidently wrong.
Because that is what humans want AI for, and because of how these variables relate, it is possible we will never get uncontrollable superintelligence before first building a lot of robots and ICs, collecting revenue, and so on.
You are talking about robots, and a fairly specific narrow “take the screws out” AI.
Quite a few humans seem to want AI for generating anime waifus. And that is also a fairly narrow kind of AI.
Your “log(compute)” term came from a comparison that was just taking more samples. That doesn’t sound like an efficient way to use more compute.
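To see why naive resampling gives roughly logarithmic returns, consider a toy model (my assumption, not the comparison being discussed): draw n independent “scores” and keep the best. For i.i.d. Exp(1) scores the expected best is the harmonic number H_n ≈ ln(n) + 0.577, so doubling the sample budget adds only a constant ~ln(2) of score.

```python
import random

def expected_best_of_n(n, trials=20000, seed=0):
    """Monte Carlo estimate of E[max of n i.i.d. Exp(1) scores].

    Analytically this equals the harmonic number H_n, which grows
    like ln(n): performance improves only logarithmically in the
    number of samples, i.e. in compute spent on resampling.
    """
    rng = random.Random(seed)
    return sum(max(rng.expovariate(1.0) for _ in range(n))
               for _ in range(trials)) / trials
```

Going from 8 samples to 16 samples (twice the compute) raises the expected best score by only about 0.66 ≈ ln(2) + small terms, consistent with a log(compute) curve.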
Someone, using a pretty crude algorithmic approach, managed to get a little more performance for a lot more compute.