Thanks for this response, Luke.
I don’t want to argue about definitions either.
I believe I’m familiar with how you use the term rationality, and I believe it’s largely compatible with (indeed, mutually reinforcing with) communicative rationality, though there are some differences between Habermas’s and Yudkowsky’s epistemologies. I brought up communicative rationality because (a) I think it’s an important concept that is in some ways an advance in how to think about rationality, and (b) I wanted to disclose some of my own predispositions and values for the sake of establishing expectations.
Thanks for the link to the Hanson-Yudkowsky debate. From perusing the summary and a few of the posts by the debaters, I guess I’d say I find Hanson’s counterarguments largely compelling. I’d also respond with two other points (mostly hoping you will direct me to where they’ve already been discussed):
Since so many kinds of problems have been proven to lie within particular complexity classes, with hard lower bounds known in some cases, recursive improvement in algorithms alone is likely to hit asymptotic walls in a lot of interesting domains. So a self-modifying AI on its own, without taking resource acquisition into account, seems unlikely (maybe provably unable) to be a big threat.
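To make the "asymptotic wall" concrete with a standard example (my own illustration, not something from the debate): comparison sorting has a proven worst-case floor of ceil(log2(n!)) ≈ n log2 n comparisons, so no amount of algorithmic self-improvement can guarantee doing better, and merge sort is already close to that floor. A minimal sketch:

```python
import math
import random

def merge_sort_comparisons(xs):
    """Top-down merge sort that also counts element comparisons."""
    if len(xs) <= 1:
        return list(xs), 0
    mid = len(xs) // 2
    left, cl = merge_sort_comparisons(xs[:mid])
    right, cr = merge_sort_comparisons(xs[mid:])
    merged, comps, i, j = [], cl + cr, 0, 0
    while i < len(left) and j < len(right):
        comps += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, comps

def comparison_floor(n):
    """Worst-case lower bound for ANY comparison sort: ceil(log2(n!)),
    since the algorithm must distinguish all n! orderings."""
    return math.ceil(math.log2(math.factorial(n)))

for n in (16, 256, 4096):
    data = random.sample(range(10 * n), n)
    _, comps = merge_sort_comparisons(data)
    print(f"n={n}: merge sort used {comps} comparisons; "
          f"no comparison sort can guarantee fewer than {comparison_floor(n)}")
```

The gap between an existing algorithm and the proven floor is the entire room left for self-improvement in that domain; for sorting it is already a small constant factor.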
That said, since there already are self-modifying intelligent organizations that are taking over the world (or trying to, facing competition from each other), what’s gone into Singularity research definitely isn’t useless. Rather, it’s directly applicable to what’s happening right now.
I agree very strongly with the thrust of what IlyaShpitser’s been saying.
If it is provably impossible, I would feel much better with a proof. This seems like a reasonable goal for SingInst: use proofs of computational complexity and upper limits on computing power to derive an upper bound on the optimization power of an AI (perhaps a few estimates, conditional on certain problems falling into different complexity classes or on new best algorithms being found), and then come up with some reasonable way of measuring lower and upper bounds on the optimization power of various organizations (at least a generous upper bound for all existing organizations and a lower bound for some big ones, like the US government).
I would be EXTREMELY surprised to find that the lower bound for organizations was higher than the upper bound for AI, but if so, that would be good to know as early as possible; and if not, the research would probably be worth doing anyway, as a good showcase of the actual extent of the problem.
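The decision rule I have in mind reduces to a simple interval comparison. The sketch below uses hypothetical placeholder numbers on an unspecified common scale of "optimization power" — the hard research problem is producing defensible estimates, not this comparison:

```python
def bounds_verdict(ai_upper, org_lower):
    """Compare a proven/estimated UPPER bound on AI optimization power
    against a LOWER bound on organizational optimization power,
    both on the same (hypothetical) common scale."""
    if org_lower > ai_upper:
        # Even the most generous AI estimate falls below the most
        # conservative organization estimate: the comparison is settled.
        return "organizations provably exceed any possible AI"
    # Intervals overlap (or AI may dominate): the bounds alone decide nothing,
    # but they still show the extent of the problem.
    return "inconclusive"

# Placeholder estimates, arbitrary units -- NOT real figures.
print(bounds_verdict(ai_upper=1e6, org_lower=1e4))
```

The point of the exercise is that even the expected "inconclusive" result would be informative, since tightening either bound narrows the overlap.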