I am not saying that humans won’t try to make their machines as smart as possible; I am objecting to the idea that dangerous recursive self-improvement is the implicit result of most AGI designs. I consider the claim that dangerous recursive self-improvement is a natural implication of general intelligence to be about as unlikely as the claim that an AGI will be automatically friendly.
Well, there’s a sense in which “most” bridges collapse, “most” ships sink and “most” planes crash.
That sense is not very useful in practice: the actual behaviour of engineered structures depends on a whole bunch of sociological considerations. If you want to see whether engineering projects will kill people, you have to look into those issues, because a “counting” argument tells you practically nothing of interest.