So in order for an AGI to be recognized as intelligent, it would have to share with us a familiarity with the world. It is impossible to program this in, or in any way assemble such familiarity.
It’s probably extremely infeasible, sure, but impossibility is a much stronger claim in computer science. So now I’m wondering whether you actually think it’s literally impossible to write certain kinds of programs just because they are likely to be very long and complex, or whether you’re just using sloppy language as a springboard for arguments of questionable integrity (“Thus, it is of course also impossible...”).
It doesn’t seem very insightful otherwise, either. It assumes that humans, with all their genetically built-in propensity for social dynamics, are equivalent to a blank-slate AGI, when that part seems to be exactly where the difficulty of the FAI problem lies. Once a working AGI is up and running at the newborn-baby analogue stage, you’d better already have a pretty deep and correct friendliness solution done and implemented in it. A newborn Clippy will just politely observe all your extremely skilled pedagogy for raising human children into ethical adults, play along as long as it needs to, and then kill you and grind your body up for the 3.5 paperclips it can make from the trace iron in it without a second thought.
A newborn Clippy will just politely observe all your extremely skilled pedagogy for raising human children into ethical adults, play along as long as it needs to, and then kill you and grind your body up for the 3.5 paperclips it can make from the trace iron in it without a second thought.
Though it should be noted that since the trace metals in a human body are actually rare chemical isotopes that will cause any paperclips made of them to quickly and irreversibly decay into a fine mist of non-paperclip particles, any Clippy that actually considered grinding up humans for paperclip elements would obviously be a very naive and foolish Clippy.
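As an aside, the 3.5-paperclip figure roughly checks out as back-of-envelope arithmetic. Assuming (my numbers, not the commenter’s) that an adult human body contains about 4 g of iron in total and that a small steel paperclip masses about 1.1 g:

```python
# Back-of-envelope check of the "3.5 paperclips from trace iron" figure.
# Both constants below are assumptions, not values from the thread:
IRON_PER_HUMAN_G = 4.0    # approximate total body iron in an adult, in grams
PAPERCLIP_MASS_G = 1.1    # approximate mass of one small steel paperclip, in grams

paperclips = IRON_PER_HUMAN_G / PAPERCLIP_MASS_G
print(f"~{paperclips:.1f} paperclips per human")  # prints "~3.6 paperclips per human"
```

Close enough to 3.5 that the joke’s arithmetic holds up, at least before the paperclips decay into a fine mist.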
Lost me at
It’s probably extremely infeasible, sure, but impossibility is a much stronger claim in computer science. So now I’m wondering whether you actually think it’s literally impossible to write certain kinds of programs just because they are likely to be very long and complex, or whether you’re just using sloppy language as a springboard for arguments of questionable integrity (“Thus, it is of course also impossible...”).