Tononi’s Phi (integrated information) theory seems somewhat relevant, though it addresses only consciousness and explicitly avoids valence. Still, it does seem like something that could be adapted toward answering questions like this (somehow).
Current models of emotion based on brain architecture and neurochemicals (e.g., EMOCON) are relevant, though ultimately correlational and thus not applicable outside the human brain.
There’s also a great deal of quality literature on specific correlates of pain and happiness, e.g., “Building a neuroscience of pleasure and well-being” and “An fMRI-Based Neurologic Signature of Physical Pain.”
In short, I’ve found plenty of research around the topic but nothing that’s particularly predictive outside of very constrained contexts. No generalized theories. There’s some interesting work happening around panpsychism (e.g., see these two pieces by Chalmers), but it focuses on consciousness, not valence.
My intuition is that valence will be encoded in frequency dynamics in a way that’s very amenable to mathematical analysis, but right now I’m seeking clarity about how to talk about the problem.
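To make “frequency dynamics” a bit more concrete, here’s a minimal sketch of the kind of analysis I have in mind: estimating band-limited spectral power from a neural time series. The signal is synthetic and the alpha-band choice is purely illustrative, an assumption for the example, not a claim about where valence actually lives.

```python
import numpy as np
from scipy.signal import welch

# Synthetic stand-in for one EEG channel: a 10 Hz rhythm plus noise.
# All parameters here are illustrative assumptions, not empirical claims.
fs = 250.0                       # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)     # ten seconds of signal
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# Welch's method gives a smoothed power spectral density estimate.
freqs, psd = welch(x, fs=fs, nperseg=1024)

# Band-limited power (8-12 Hz): the sort of scalar feature one could
# then try to relate to independent measures of valence.
alpha_power = psd[(freqs >= 8) & (freqs <= 12)].mean()
print(f"Mean 8-12 Hz power: {alpha_power:.4f}")
```

The point isn’t this particular feature; it’s that once valence is framed in terms of spectral structure, the analytical toolkit becomes very standard.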
Edit: I’ll add this to the bottom of the post.
I’m Mike Johnson. I’d estimate I come across a reference to LW from trustworthy sources every couple of weeks, and after working my way through the Sequences, it feels like the good outweighs the bad and it’s worth investing time in.
My background is in philosophy, evolution, and neural nets for market prediction; I presently write, consult, and am part of an early-stage tech startup. Perhaps my high-water mark in community exposure was a critique of the word Transhumanist at Accelerating Future. In the following years, my experience has been more mixed, but I appreciate the topics and tools being developed even if the community seems a tad insular. If I had to wear some established thinkers on my sleeve, I’d choose Paul Graham, Lawrence Lessig, Steve Sailer, Gregory Cochran, Roy Baumeister, and Peter Thiel. (I originally had a comment here about having an irrational attraction toward humility, but on second thought, that might rule out Gregory “If I have seen farther than others, it’s because I’m knee-deep in dwarves” Cochran… Hmm.)
Cards on the table: it’s my impression that
(1) Lesswrong and SIAI are doing cool things that aren’t being done anywhere else (this is not faint praise);
(2) The basic problem of FAI as stated by SIAI is genuine;
(3) SIAI is a lightning rod for trolls and cranks, which is really detrimental to the organization (the metaphor of autoimmune disease comes to mind) and seems partly its own fault;
(4) Much of the work being done by SIAI and LW will turn out to be a dead end. Granted, this is true everywhere, but in particular I’m worried that axiomatic approaches to verifiable friendliness will prove brittle and inapplicable (I do not currently have an alternative);
(5) SIAI has an insufficient appreciation for realpolitik;
(6) SIAI and LW seem to have a certain distaste for research on biologically-inspired AGI, due in part to safety concerns, an organizational lack of expertise in the area, and (in my view) ontological/metaphysical preference. I believe this distaste is overly limiting and also leads to incorrect conclusions.
Many of these impressions may be wrong. I aim to explore the site, learn, change my mind if I’m wrong, and hopefully contribute. I appreciate the opportunity, and I hope my unvarnished thoughts here haven’t soured my welcome. Hello!