Metaphorically, the Biointelligence Explosion represents an “event horizon” beyond which humans cannot model or understand the future.
As usual, the idea that we cannot model or understand the future is bunk. Wolfram goes on about this too—with his “computational irreducibility”. Popper had much the same idea—in “The Poverty of Historicism”. What is it about the unknowable future that makes it seem so attractive?
What is it about the unknowable future that makes it seem so attractive?
There are a variety of different issues going on here. One is that there's a long history of very inaccurate predictions about the future, so people are reacting against that. Another is that predicting the future with any accuracy is really hard. If the thesis were restricted to "predicting the future is so difficult that the vast majority of it is a waste of time" then it would look more reasonable. I suspect that when some people make this sort of assertion they mean something closer to this.
If the thesis were restricted to “predicting the future is so difficult that the vast majority of it is a waste of time” then it would look more reasonable.
Well, the brain is constantly predicting the future. It has to understand the future consequences of its possible actions—so that it can choose between them. Prediction is the foundation of all decision making. Predicting the future seems rather fundamental and commonplace to me—and I would not normally call it “a waste of time”.
Ok. How about "predicting the future to any substantial level beyond the next few years is so difficult that the vast majority of it is a waste of time"?
(I disagree with both versions of this thesis, but this seems more reasonable. Therefore, it seems likely to me that people mean something much closer to this.)
Also note the conflation between two types of singularity even though only one type (intelligence explosion) is in the name! Isn’t the reason one would use the term “intelligence explosion” to distinguish your view from the event horizon one?
It is best not to use the term “intelligence explosion” for some hypothetical future event in the first place. That is severely messed up terminology.