On the other hand, is there any way to reason about the odds of humans inventing a self-optimizing program that doesn't resemble a human mind?
I’m not sure.
I think if I had a better grasp of whether, and why, humans are (or aren't) capable of building self-optimizing systems at all, I'd have a better grasp of the odds that any such system would be of a particular type.