Imagine casting a “speed ×100” spell on a dumb person. Would that make them a smart person? No.
On the other hand, if we cast a “speed ×2” spell on a smart person, it would appear to make them smarter. They would be able to solve difficult problems in half the time, right?
So… there seems to be some connection, but also a difference. Speed can make you more productive, and productivity is a signal of intelligence. But if you make systematic mistakes in thinking, you will only be making them faster.
Smart people in the technology world no longer believe they can think their way to success.
Because they already are thinking. If you are already thinking at near 100% of your capacity, telling you “think more” is not going to help. The right advice in that situation could be “instead of thinking without experimenting, try thinking and experimenting”. But one should give that advice only to people who are already thinking.
Imagine casting a “speed ×100” spell on a dumb person. Would that make them a smart person? No.
There’s a line in the book WYRM that made this fact click for me many years ago. The paraphrase is “A dog that can think a hundred times as fast will take a hundredth of the time to decide that it wants to sniff your crotch.”
But if you make systematic mistakes in thinking, you will only be making them faster.
But you can get away with more mistakes if you can run your test-and-improve cycle fast enough to catch and fix them.
There was a demo that really brought this home to me: robotic fingers dribbling a ping-pong ball at blinding speed. Fast cameras, fast actuators, brute-force stupid feedback calculations. Stupid can be good enough if you’re fast enough.
For more human creative processes, speeding up the design/test/evaluate loop will often beat more genius. Many things aren’t so much to be reasoned out as tested out.
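A toy sketch of that loop (not from the original thread; the target string and alphabet are purely illustrative): a hill-climber whose every individual step is maximally dumb — mutate one random character, keep it if the result is no worse — still converges on the answer purely by cycling design/test/evaluate quickly enough.

```python
import random

TARGET = "speed beats genius"   # hypothetical goal, just for illustration
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(guess):
    # "Evaluate": count characters already matching the target.
    return sum(a == b for a, b in zip(guess, TARGET))

def hill_climb(seed=0):
    rng = random.Random(seed)
    guess = [rng.choice(ALPHABET) for _ in TARGET]
    steps = 0
    while score(guess) < len(TARGET):
        # "Design": a dumb random one-character mutation.
        candidate = list(guess)
        candidate[rng.randrange(len(TARGET))] = rng.choice(ALPHABET)
        # "Test": keep the mutation only if it doesn't make things worse.
        if score(candidate) >= score(guess):
            guess = candidate
        steps += 1
    return "".join(guess), steps
```

No single step contains any insight; only the sheer number of cheap iterations does the work.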
I have this intuition that higher intelligence “unlocks” some options, and how many points you get from the unlocked options then depends on speed. For example, a ping-pong-playing robot with insane speed could easily win any ping-pong tournament, but it still couldn’t conquer the world; its intelligence only unlocks the area of playing ping-pong. If the intelligence is not general, making it faster still doesn’t make it general.
For general intelligences, if we ignore the time and resources, the greatest obstacle to a mind is the mind itself, its own biases. If the mind is prone to do really stupid things, giving it more power will allow it to do stupid things with greater impact. For example, if someone chooses to ignore feedback, then having more design/test/evaluate cycles available will not help.
Now let’s assume that we have an intelligence which is (a) general, and (b) willing to experiment and learn from feedback. On this level, are time and resources all that matter? Would any mind on this level, given unlimited time (immortality) and resources, sooner or later become a god? Or is the path full of dangerous destructive attractors? Would the mind be able to successfully navigate higher and higher levels of meta-thinking, or could a mistake at some level prevent it from ever getting higher? In other words, is “don’t ignore the feedback” the only issue to overcome, or is it just the first of many increasingly abstract issues that an increasingly powerful mind will have to deal with, where a failure to deal with any of them could “lock” the path to godhood even given unlimited time and resources? For example, imagine a mind that is willing to consider feedback, but doesn’t care about developing a good theory of math and statistics. At some point, it would start drawing incorrect conclusions from the feedback.
I agree that for humans, lack of time and resources is a huge issue.
The Law of the Minimum seems metaphorically relevant. “Growth is controlled not by the total amount of resources available, but by the scarcest resource.”
Intelligence, speed, time, energy, charisma, money, able-bodiedness, a like-minded community, etc.: any of these may be someone’s limiting factor.
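The Law of the Minimum can be stated in one line; a minimal sketch, where the resource names and levels are purely hypothetical:

```python
def limiting_factor(resources):
    # Liebig's Law: growth is bounded by the scarcest resource, not the sum.
    name = min(resources, key=resources.get)
    return name, resources[name]

# Hypothetical resource levels on a 0..1 scale (illustrative only).
resources = {"intelligence": 0.9, "speed": 0.8, "time": 0.2,
             "energy": 0.6, "charisma": 0.5, "money": 0.7}
```

Here total "wealth" across resources is irrelevant; only the 0.2 on `time` constrains growth.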
On the other hand, sometimes one resource can trade off for another. Computational complexity offers many examples: you can use less memory if you are willing to accept a slower algorithm, or spend more memory to gain speed. These aren’t the only examples.
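A minimal sketch of such a time–memory tradeoff, using the standard Fibonacci example: the naive recursion uses almost no extra memory but takes exponential time, while spending O(n) memory on a cache makes it linear.

```python
from functools import lru_cache

def fib_slow(n):
    # Almost no extra memory (beyond the call stack), exponential time.
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    # Spends O(n) memory on cached results to get linear time.
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)
```

Both compute the same function; the only difference is which resource is spent.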
Speed ×100 would almost certainly make a person of normal intelligence very smart. Speed ×100 means one week for you is about two years for them. Maybe you still couldn’t beat Einstein. But consider common tests of intelligence such as the Putnam or an ordinary IQ test. People get 6 hours (in two blocks) to finish the Putnam; 600 hours is 25 days, and presumably you are not sleeping during those 25 days. If a normal person gets 3 minutes for a problem on a certain IQ test, you have 5 hours.
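The arithmetic above checks out; a quick sketch of the subjective-time conversions:

```python
SPEEDUP = 100

# One external week is roughly two subjective years.
subjective_years = 7 * 24 * SPEEDUP / (24 * 365)   # ~1.92

# The Putnam's 6 hours become 600 subjective hours, i.e. 25 days.
putnam_days = 6 * SPEEDUP / 24                     # 25.0

# A 3-minute IQ-test problem becomes 5 subjective hours.
iq_hours = 3 * 60 * SPEEDUP / 3600                 # 5.0
```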
And a very fast dumb machine can still kill reliably. And at least in some domains, being dumb very fast can find solutions that creativity alone wouldn’t find.
There are problems you can solve quickly, and problems you can solve at all. You want to find someone who can solve problems of a certain difficulty as fast as possible. If they can’t solve it, work on making it so they can. If they can, work on making them do what they already can do, but faster.
This is particularly clear with computers. You can write better algorithms that solve more problems and get better answers, at the cost of running slower. If a program can’t solve your problem, it’s worthless. If it can solve your problem, extra sophistication only makes things worse. For example, you can’t put formatting into a .txt file, but if you have no need for formatting, Notepad runs faster, takes less space, and is more reliable than Word.
The steelman here is a call for empiricism. Empiricism + thinking clearly are both needed. The secret is to do everything well :).
Though speed alone turning an otherwise ordinary mind into a formidable one is the idea behind the AI in Branches on the Tree of Time.