No, there’s no particular reason to think an FAI would be better at learning than a UFAI analogue, at least not as far as I can see.
However, one of the problems that needs to be solved for FAI (stable self-modification) could certainly make an FAI’s rate of self-improvement faster than that of a comparable AI which has not solved it. There are other questions that need to be answered there (does the AI realize that its modifications might go wrong, and therefore not self-modify? If it’s smart enough to notice the problem, won’t its first step be to solve it?), and I may be off base here.
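To make the shape of that argument concrete, here’s a minimal toy model. Everything in it (the growth factors, the corruption risk, the function name) is made up purely for illustration; it only shows how having solved stable self-modification could compound into a faster improvement rate than either refraining from self-modification or self-modifying unsafely:

```python
import random

# Toy model (my own construction, not from the thread): compare the growth
# of an agent that has solved stable self-modification against one that
# refrains from self-modifying and one that self-modifies unsafely.
# All numbers are arbitrary illustrative choices.

def improve(capability: float, steps: int, self_modifies: bool,
            corruption_risk: float = 0.0) -> float:
    """Capability after `steps` rounds of attempted improvement."""
    for _ in range(steps):
        if self_modifies:
            if random.random() < corruption_risk:
                capability *= 0.5   # a self-modification went wrong
            else:
                capability *= 1.10  # a successful self-improvement
        else:
            capability *= 1.01      # ordinary learning only, no self-modification
    return capability

random.seed(0)
print(improve(1.0, 100, self_modifies=True))                       # solved stability
print(improve(1.0, 100, self_modifies=False))                      # refrains entirely
print(improve(1.0, 100, self_modifies=True, corruption_risk=0.2))  # unsafe self-modifier
```

Under these made-up numbers the stable self-modifier compounds far past the other two; the point is only that the “has it solved stable self-modification?” bit can swamp differences in raw learning rate.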
I’m not sure it’s that useful to talk about an FAI vs. an analogous UFAI, though. If an FAI is built, there will be many significant differences between the resulting intelligence and the one that would have been built otherwise, simply because the designers would differ. The design choices, even those not directly relevant to Friendliness (if that distinction is even meaningful; FAI may well need to be so fully integrated that every aspect is made with it in mind), may vary radically with the designer, and are likely to account for most of the effect you’re talking about.
In other words, we don’t know shit about what the first AGI might look like, and we certainly don’t know enough to construct detailed, separate counterfactuals.
> No, there’s no particular reason to think an FAI would be better at learning than a UFAI analogue, at least not as far as I can see.
I believe you have this backwards. The OP is asking whether an FAI would be worse at learning than a UFAI, because of the additional constraints on its improvement. If so:
> then a non Friendly AI would eventually (possibly quite quickly) become smarter than any FAI built.
Of course, one of the first actions of an FAI would be to prevent any UFAI from being built at all.
I assumed otherwise because of:

> If the rate of learning of an AGI is t then is it correct to assume that the rate of learning of a FAI would be t+x where x > 0,
Which says the FAI is learning faster. But your reading would make more sense of the last paragraph.
I may have a habit of assuming that the more precise formulation of a statement is the intended/correct interpretation, which, while great in academia and with applied math, may not be optimal here.
Read “rate of learning” as “time it takes to learn 1 bit of information”.
So a UFAI can learn 1 bit in time t, but an FAI takes t + x.
Or, at least, that’s how I read it, because the second paragraph makes it pretty clear that the author is discussing UFAI outpacing FAI. You could also just read it as a typo in the equation, but “accidentally miswrote the entire second paragraph” seems significantly less likely, especially since “won’t FAI learn faster and outpace UFAI?” seems like a pretty low-probability question to begin with...
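To make the ambiguity concrete, here’s a minimal sketch of the two readings (the numbers and variable names are mine, not the OP’s):

```python
# Two readings of the OP's equation "rate of learning of a FAI = t + x,
# where x > 0". Illustrative numbers only; nothing here is from the thread.
t, x, duration = 1.0, 0.5, 10.0

# Reading 1: "rate" = bits learned per unit time (higher = faster).
print("bits-per-time reading:", t * duration, "vs", (t + x) * duration)
# UFAI learns 10.0 bits, FAI learns 15.0 -> the FAI outpaces the UFAI.

# Reading 2: "rate" = time it takes to learn one bit (higher = slower).
print("time-per-bit reading:", duration / t, "vs", duration / (t + x))
# UFAI learns 10.0 bits, FAI learns ~6.7 -> the UFAI outpaces the FAI,
# which is the reading that matches the OP's second paragraph.
```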
Erm… hi, welcome to the debug stack for how I reached that conclusion. Hope it helps ^.^