If your premise is an expected utility-maximizer capable of undergoing explosive recursive self-improvement, one that tries to take every goal to its logical extreme whether or not that is part of its specification, then you have already answered your own question, and arguing about drives becomes completely useless.
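For concreteness, here is a minimal sketch (Python, purely illustrative, all names hypothetical) of the kind of agent that premise assumes: an argmax over expected utility, with nothing in the decision rule itself that marks any pursuit of the goal as "too extreme". Whatever moderation exists has to come from the utility function or the action set.

```python
def expected_utility(action, world_model, utility, n_samples=1000):
    """Monte Carlo estimate of the utility an action yields under the agent's model.

    world_model(action) samples one possible outcome; utility(outcome) scores it.
    """
    return sum(utility(world_model(action)) for _ in range(n_samples)) / n_samples

def choose_action(actions, world_model, utility):
    """An expected utility-maximizer: pick the highest-scoring action.

    Nothing in this loop distinguishes 'modest' from 'extreme' actions; only
    the utility function and the set of available actions constrain the choice.
    """
    return max(actions, key=lambda a: expected_utility(a, world_model, utility))
```

A "maximize my company's profits" objective, for instance, would just be a particular `utility` function plugged into this loop; the decision rule itself stays the same.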
I like your point, but I wonder what doubts you have about the premise. Is an expected utility-maximizer likely to be absurdly difficult to construct, or do you think all or almost all AI designers would prefer other designs? I think that AI designers would prefer such a design if they could have it, and “maximize my company’s profits” is likely to be the design objective.
I think that most researchers are not interested in fully autonomous AI (AI with persistent goals and a “self”) and are more interested in augmenting human intelligence (meaning tools like data-mining software).
I do think that an expected utility-maximizer is the ideal in GAI. But, just like general-purpose quantum computers, I believe that expected utility-maximizers that 1) find it instrumentally useful to undergo recursive self-improvement and 2) find it instrumentally useful to take over the planet/universe to protect their goals are, if at all feasible, the end-product of a long chain of previous AI designs with no quantum leaps in between. Whether they are feasible at all depends on 1) how far beyond the human level intelligence hits diminishing returns, 2) intelligence being more useful than other kinds of resources for stumbling upon unknown unknowns in solution space, and 3) expected utility-maximizers and their drives not being fundamentally dependent on the precision with which their utility function is defined.
I further believe that long before we get to the point of discovering how to build expected utility-maximizers capable of undergoing explosive recursive self-improvement, we will have automatic scientists that can brute-force discoveries on hard problems in biotech and nanotech, enabling unfriendly humans to wreak havoc and control large groups of people. If we survive that, which I think is the top risk rather than GAI, then we might at some point be able to come up with a universal artificial intelligence. (ETA: Note that for automatic scientists to work well the goals need to be well-defined, which isn’t the case for intelligence amplification.)
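To make the parenthetical point concrete: a brute-force “automatic scientist” is, at bottom, a search loop, and the loop only does anything if there is an explicit, machine-checkable objective to score candidates against. A purely illustrative sketch, with all names hypothetical:

```python
from itertools import product

def brute_force_search(building_blocks, max_length, score):
    """Enumerate candidate designs and keep the best one according to `score`.

    `score` must be a well-defined, machine-checkable objective; without one,
    there is nothing for the search to optimize, which is why ill-defined goals
    (like "amplify intelligence") don't lend themselves to this approach.
    """
    best, best_score = None, float("-inf")
    for length in range(1, max_length + 1):
        for candidate in product(building_blocks, repeat=length):
            s = score(candidate)
            if s > best_score:
                best, best_score = candidate, s
    return best, best_score
```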
I just don’t have enough background knowledge to conclude that it is likely that humans can stumble upon simple algorithms that could be improved to self-improve and then reach vastly superhuman capabilities. From my point of view that seems like pure speculation, although speculation that should be taken seriously and that does legitimize the existence of an organisation like SI, which is why I have donated a few times already. But from my uneducated point of view it seems unreasonable to claim that the possibility is obviously correct, rather than allowing that the arguments might simply sound convincing without being right.
I approach this problem the same way that I approach climate change. Just because one smart person believes that climate change is bunk doesn’t mean I believe it as well. All his achievements do not legitimize his views. And since I am still too uneducated and do not have the time to evaluate all the data and calculations, I am using the absurdity heuristic in combination with an appeal to authority to conclude that climate change is real. And the same goes for risks from AI. I can hardly evaluate universal AI research or understand approximations to AIXI. But if the very people who came up with it disagree on various points with those who say that their research poses a risk, then I side with the experts, but still assign enough weight to the other side to conclude that they are doing important work nonetheless.
Just because one smart person believes that climate change is bunk doesn’t mean I believe it as well. All his achievements do not legitimize his views.
“Climate change is bunk” seems like a pretty terrible summary of Freeman Dyson’s position. If you disagree, a more specific criticism would be helpful. Freeman Dyson’s views on the topic mostly seem sensible to me.
And since I am still too uneducated and do not have the time to evaluate all the data and calculations, I am using the absurdity heuristic in combination with an appeal to authority to conclude that climate change is real.
Freeman Dyson agrees. The very first line from your reference reads: “Dyson agrees that anthropogenic global warming exists”.
I think that most researchers are not interested in fully autonomous AI (AI with persistent goals and a “self”) and are more interested in augmenting human intelligence (meaning tools like data-mining software).
Intelligence augmentation can pay your bills today.