[LINK] David Deutsch on why we don’t have AGI yet: “Creative Blocks”

Folks here should be familiar with most of these arguments. Putting some interesting quotes below:

http://aeon.co/magazine/being-human/david-deutsch-artificial-intelligence/

“Creative blocks: The very laws of physics imply that artificial intelligence must be possible. What’s holding us up?”

Remember the significance attributed to Skynet’s becoming ‘self-aware’? [...] The fact is that present-day software developers could straightforwardly program a computer to have ‘self-awareness’ in the behavioural sense — for example, to pass the ‘mirror test’ of being able to use a mirror to infer facts about itself — if they wanted to. [...] AGIs will indeed be capable of self-awareness — but that is because they will be General
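As a toy illustration of what ‘self-awareness in the behavioural sense’ could mean, here is a small sketch I put together (my own construction, not anything from the article): an agent wiggles its own body, watches which object in a simulated mirror moves in sync, concludes that object is itself, and then reads a fact about itself off the reflection. All names and the scenario are invented for illustration.

```python
import random

# Toy "behavioural mirror test" (illustrative only, not from the article):
# the agent wiggles, checks which object in the mirror image moved in sync,
# identifies that object as itself, and reads a fact (a mark) off it.

class World:
    """A simulated scene containing several agents, one of which is 'us'."""
    def __init__(self, n_agents, self_id):
        self.positions = {i: random.random() for i in range(n_agents)}
        self.marks = {i: random.choice(["red mark", "no mark"]) for i in range(n_agents)}
        self.self_id = self_id

    def wiggle_self(self, delta):
        # Only the agent's own body responds to its own motor command.
        self.positions[self.self_id] += delta

    def mirror_image(self):
        # The mirror shows positions and marks, but carries no identity labels.
        return dict(self.positions), dict(self.marks)


def behavioural_mirror_test(world):
    before, _ = world.mirror_image()
    world.wiggle_self(delta=1.0)
    after, marks = world.mirror_image()
    # Whichever reflected object moved when we issued the motor command is us.
    me = max(before, key=lambda i: abs(after[i] - before[i]))
    return me, marks[me]


world = World(n_agents=3, self_id=1)
identity, mark = behavioural_mirror_test(world)
print(f"The reflection that moved is object {identity}; it has: {mark}")
```

Nothing here requires anything we would call general intelligence, which is exactly Deutsch’s point about behavioural ‘self-awareness’ being straightforward to program.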

Some hope to learn how we can rig their programming to make [AGIs] constitutionally unable to harm humans (as in Isaac Asimov’s ‘laws of robotics’), or to prevent them from acquiring the theory that the universe should be converted into paper clips (as imagined by Nick Bostrom). None of these are the real problem. It has always been the case that a single exceptionally creative person can be thousands of times as productive — economically, intellectually or whatever — as most people; and that such a person could do enormous harm were he to turn his powers to evil instead of good. [...] The battle between good and evil ideas is as old as our species and will go on regardless of the hardware on which it is running

He also says confusing things about induction being inadequate for creativity, which I’m guessing he couldn’t support well in this short essay (perhaps he explains it better in his books); I’m not quoting those parts here. His attack on Bayesianism as an explanation for intelligence is valid and interesting, but it could be wrong: given what we know about neural networks, something like probabilistic updating does happen in the brain, possibly even at the level of concepts.

The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. This is especially perverse when it comes to an AGI’s values — the moral and aesthetic ideas that inform its choices and intentions — for it allows only a behaviouristic model of them, in which values that are ‘rewarded’ by ‘experience’ are ‘reinforced’ and come to dominate behaviour while those that are ‘punished’ by ‘experience’ are extinguished. As I argued above, that behaviourist, input-output model is appropriate for most computer programming other than AGI, but hopeless for AGI.
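To make the target of his criticism concrete, here is a minimal sketch of the behaviourist ‘values as reinforced weights’ picture he is describing. This is my own toy construction, not anything from the essay; the value names and the learning rule are invented for illustration.

```python
import random

# Minimal sketch of the behaviourist model Deutsch criticises: each "value"
# is a numerical weight, 'experience' delivers rewards and punishments,
# rewarded values are reinforced and come to dominate behaviour, punished
# ones are extinguished. Entirely illustrative.

values = {"honesty": 0.5, "paper_clip_maximising": 0.5}

def choose(values):
    """Act on a value with probability proportional to its current weight."""
    total = sum(values.values())
    r = random.uniform(0, total)
    for name, weight in values.items():
        r -= weight
        if r <= 0:
            return name
    return name  # fall through on floating-point edge cases

def reinforce(values, name, reward, lr=0.1):
    """Reward strengthens a value; punishment drives it towards extinction."""
    values[name] = max(0.01, values[name] + lr * reward)

for _ in range(200):
    acted_on = choose(values)
    reward = 1.0 if acted_on == "honesty" else -1.0   # 'experience'
    reinforce(values, acted_on, reward)

print(values)  # the rewarded value dominates; the punished one is extinguished
```

The sketch just shows what “values ‘rewarded’ by ‘experience’ are ‘reinforced’” cashes out to in code: existing weights get re-weighted. Deutsch’s claim is that this input-output picture is hopeless for AGI.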

His final conclusions are hard to agree with: he somehow concludes that the principal bottleneck in AGI research is a philosophical one.

In his last paragraph, he makes the following controversial statement:

For yet another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees.

This would be false if, for example, the mother controlled gene expression while the foetus develops and thereby helped shape the brain. We should be able to answer that question definitively once we can grow human babies completely in vitro. Another problem would be the impact of the cultural environment; one way to probe this would be to ask whether our Stone Age ancestors would be classified as AGIs under a reasonable definition.