If the advantage is speed, then in one year an AI that thinks 10,000x faster could be as productive as a person who lives for 10,000 years, or as productive as one year each from 10,000 people. But a person could live to 10,000 and never be very productive. That's easy to imagine, right? They can get stuck, unhappy, bored, superstitious … all kinds of things can go wrong with their thinking. If AGI only has a speed advantage, that won't make it immune to dishonesty, wishful thinking, etc. Right?
Humans have fast access to facts via Google, databases, and other tools, so memorizing isn't crucial.
The entire point of the sequences is to list dozens of ways that the human brain reliably fails.
I thought they talked about things like biases. Couldn’t an AGI be biased, too?
For fun ways in which NN classifiers reliably fail, google up adversarial inputs :-)
Example
Rubbish in, rubbish out—right?
No, not quite. It's more like: poke around inside the NN and you can craft inputs that look like one thing to a human and a completely different thing to the NN, and the NN is very confident about it.
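The effect is easy to reproduce even without a deep network. Here's a minimal sketch of the fast gradient sign method (FGSM), one standard way to craft such inputs, run against a made-up logistic-regression "classifier" — the weights, dimension, and step size below are all hypothetical, chosen for illustration:

```python
# Toy illustration of an adversarial input via the fast gradient sign
# method (FGSM). The "classifier" is a made-up logistic-regression
# model, not a real NN, but the same trick works on deep networks.
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                      # hypothetical input dimension ("pixels")
w = rng.normal(size=d)          # hypothetical fixed classifier weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Class 1 if the model's confidence exceeds 0.5."""
    return int(sigmoid(w @ x) > 0.5)

# An input the model labels class 1 with ~99% confidence (w @ x == 5).
x = 5.0 * w / (w @ w)
print(predict(x), sigmoid(w @ x))

# FGSM: move every coordinate a tiny step eps in the direction that
# increases the loss, i.e. along -sign(w) for a class-1 input.
eps = 0.001
x_adv = x - eps * np.sign(w)

# Each coordinate changed by only 0.001, yet the prediction flips,
# and the model is now confidently wrong.
print(predict(x_adv), sigmoid(w @ x_adv))
print(np.max(np.abs(x_adv - x)))   # L-infinity size of the change
```

The reason a per-coordinate change of only `eps` is enough: stepping along `sign(w)` shifts the score by `eps * sum(|w|)`, and that sum grows with the input dimension, which is why high-dimensional classifiers can be fooled by perturbations too small for a human to notice.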