Thank you.
I’d frame why I think biology matters in FAI research in terms of research applicability and toolbox dividends.
On the first reason—applicability—I think more research focus on biologically-inspired AGI would make a great deal of sense because the first AGI might be a biologically-inspired black box, and axiom-based FAI approaches may not particularly apply to such a system. I realize I’m (probably annoyingly) retreading old ground here with regard to which method will/should win the AGI race, but SIAI’s assumptions seem to run counter to those of the greater community of AGI researchers, and it’s not obvious to me that the focus on math and axiology isn’t a simple case of SIAI’s personnel backgrounds being stacked that way. ‘If all you have is a hammer,’ etc. (I should reiterate that I don’t have any alternatives to offer here and am grateful for all FAI research.)
The second reason I think biology matters in FAI research—toolbox dividends—might take a little bit more unpacking. (Forgive me some imprecision, this is a complex topic.)
I think it’s probable that anything complex enough to deserve the term AGI would have something akin to qualia/emotions, unless it was specifically designed not to. (Corollary: we don’t know enough about what Chalmers calls “psychophysical laws” to design something that lacks qualia/emotions.) I think it’s quite possible that an AGI’s emotions, if we did not control for their effects, could produce complex feedback which would influence its behavior in unplanned ways (though perfectly consistent with / determined by its programming/circuitry). I’m not arguing for a ghost in the machine, just that the assumptions which allow us to ignore what an AGI ‘feels’ when modeling its behavior may prove to be leaky abstractions in the face of the complexity of real AGI.
Axiological approaches to FAI don’t seem to concern themselves with psychophysical laws (modeling what an AGI ‘feels’), whereas such modeling seems a core tool for biological approaches to FAI. I find myself thinking being able to model what an AGI ‘feels’ will be critically important for FAI research, even if it’s axiom/math-based, because we’ll be operating at levels of complexity where the abstractions we use to ignore this stuff can’t help but leak. (There are other toolbox-based arguments for bringing biology into FAI research which are a lot simpler than this one, but this is on the top of my list.)
I’m Mike Johnson. I’d estimate I come across a reference to LW from trustworthy sources every couple of weeks, and after working my way through the sequences it feels like the good outweighs the bad and it’s worth investing time into.
My background is in philosophy, evolution, and neural nets for market prediction; I presently write, consult, and am involved in an early-stage tech startup. Perhaps my high-water mark in community exposure has been a critique of the word Transhumanist at Accelerating Future. In the years since, my experience has been more mixed, but I appreciate the topics and tools being developed even if the community seems a tad insular. If I had to wear some established thinkers on my sleeve I’d choose Paul Graham, Lawrence Lessig, Steve Sailer, Gregory Cochran, Roy Baumeister, and Peter Thiel. (I originally had a comment here about having an irrational attraction toward humility, but on second thought, that might rule out Gregory “If I have seen farther than others, it’s because I’m knee-deep in dwarves” Cochran… Hmm.)
Cards-on-the-table, it’s my impression that
(1) Lesswrong and SIAI are doing cool things that aren’t being done anywhere else (this is not faint praise);
(2) The basic problem of FAI as stated by SIAI is genuine;
(3) SIAI is a lightning rod for trolls and cranks, which is really detrimental to the organization (the metaphor of autoimmune disease comes to mind) and seems partly its own fault;
(4) Much of the work being done by SIAI and LW will turn out to be a dead-end. Granted, this is true everywhere, but in particular I’m worried that axiomatic approaches to verifiable friendliness will prove brittle and inapplicable (I do not currently have an alternative);
(5) SIAI has an insufficient appreciation for realpolitik;
(6) SIAI and LW seem to have a certain distaste for research on biologically-inspired AGI, due in part to safety concerns, an organizational lack of expertise in the area, and (in my view) ontological/metaphysical preference. I believe this distaste is overly limiting and also leads to incorrect conclusions.
Many of these impressions may be wrong. I aim to explore the site, learn, change my mind if I’m wrong, and hopefully contribute. I appreciate the opportunity, and I hope my unvarnished thoughts here haven’t soured my welcome. Hello!