I think it bears repeating here:
Influence is only one aspect of the moral formula; the other aspect is the particular context of values being promoted.
These can be quite independent, as with a tribal chief, with substantial influence, acting to promote the perceived values of his tribe, vs. the chief acting to promote his narrower personal values. [Note that the difference is not one of fitness but of perceived morality. Fitness is assessed only indirectly within an open context.]
Excellent advice, Eliezer!
I have a game I play every few months or so. I get on my motorcycle, usually on a Friday, pack spare clothes and toiletries, and head out in a random direction. At almost every branch in the road I choose randomly, and take my time exploring and enjoying the journey. After a couple of days, I return hugely refreshed, creative potential flowing.
But we already live in a world, right now, where people are less in control of their social destinies than they would be in a hunter-gatherer band…
If you lived in a world the size of a hunter-gatherer band, then it would be easier to find something important at which to be the best—or do something that genuinely struck you as important, without becoming lost in a vast crowd of others with similar ideas.
Can you see the contradiction in bemoaning that people are now “less in control” while exercising ever-increasing freedom of expression? In finding it harder to “find something important” amid so many more opportunities? Can you see the confusion over a context that is increasingly not ours to control?
Eliezer, here again you demonstrate your bias in favor of the context of the individual. Dunbar’s (and others’) observations on organizational dynamics apply generally, while your interpretation appears to speak quite specifically of your experience of Western culture and your own perceived place in the scheme of things.
Plentiful contrary views exist to support a sense of meaning, purpose, and pride implicit in the recognition of competent contribution to community, without the (assumed) need to be seen as extraordinary. Especially in modern Japan and elsewhere in Asia, the norm is still to bask in recognition of competent contribution and to recoil from any suggestion that one might substantially stand out. False modesty this is not. In Western society too, examples of fulfillment and recognition through service run deep, although this is belied by the (entertainment) media.
Within any society, recognition confers added fitness, but to satisfice, it is not necessary to be extraordinary.
But if people keep getting smarter and learning more—expanding the number of relationships they can track, maintaining them more efficiently…[relative to the size of the interacting population]…then eventually there could be a single community of sentients, and it really would be a single community.
But as the cultural matrix keeps getting smarter—supporting increasing degrees of freedom with increasing probability—then eventually you could see self-similarity of agency over increasing scale, and it really would be a fractal agency.
Well, regardless of present point of view—wishing all a rewarding New Year!
Ironic, such passion directed toward bringing about a desirable singularity,
rooted in an impenetrable singularity of faith in X.
X yet to be defined, but believed to be [meaningful|definable|implementable] independent of future context.
It would be nice to see an essay attempting to explain an information- or systems-theoretic basis supporting such an apparent contradiction (definition independent of context).
Or, if the one is arguing for a (meta)invariant under a stable future context, an essay on the extended implications of such stability, supposing the one would attempt to make sense of “stability, extended.”
Or, a further essay on the wisdom of isshoukenmei, distinguishing between the standard meaning of giving one’s all within a given context and your adopted meaning of giving one’s all within an unknowable context.
Eliezer, I recall that as a child you used to play with infinities. You know better now.
Coming from a background in scientific instruments, I always find this kind of analysis a bit jarring with its infinite regress involving the rational, self-interested actor at the core.
Of course two instruments will agree if they share the same nature, within the same environment, measuring the same object. You can map onto that a model of priors, likelihood function and observed evidence if you wish. Translated to agreement between two agents, the only thing remaining is an effective model of the relationship of the observer to the observed.
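To make that mapping concrete, here is a minimal sketch in Python (the Gaussian noise model and all numbers are mine, purely illustrative): two updaters sharing the same prior and the same likelihood function, fed the same evidence, necessarily arrive at the same posterior. Whatever disagreement remains between real agents must live in that last term, the model of the observer’s relationship to the observed.

```python
import numpy as np

# Two "instruments" share the same nature: a common prior over the
# quantity being measured, and a common likelihood (noise) model.
grid = np.linspace(0.0, 10.0, 1001)      # hypothesis space for the true value
prior = np.ones_like(grid) / grid.size   # shared flat prior

def posterior(readings, noise_sd=0.5):
    """Bayesian update on the grid, given a sequence of noisy readings."""
    log_like = np.zeros_like(grid)
    for r in readings:
        log_like += -0.5 * ((r - grid) / noise_sd) ** 2  # Gaussian log-likelihood
    p = prior * np.exp(log_like - log_like.max())
    return p / p.sum()

# Same environment, same object measured, same evidence observed:
evidence = [4.8, 5.1, 5.0]
p1 = posterior(evidence)  # instrument (or agent) 1
p2 = posterior(evidence)  # instrument (or agent) 2
print(np.allclose(p1, p2))  # True: agreement guaranteed by shared nature
```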
I’ll second jb’s request for denser, more highly structured representations of Eliezer’s insights. I read all this stuff and find it entertaining and sometimes edifying, but disappointing in that it’s not converging on either a central thesis or central questions (preferably both).
Crap. Will the moderator delete posts like that one, which appear to be so off the Mark?
…but the self-taught will simply extend their knowledge when a lack appears to them.
Yes, this point is key to the topic at hand, as well as to the problem of meaningful growth of any intelligent agent, regardless of its substrate and facility for (recursive) improvement. But in this particular forum, due to the particular biases which tend to predominate among those whose very nature tends to enforce relatively narrow (albeit deep) scope of interaction, the emphasis should be not on “will simply extend” but on “when a lack appears.”
In this forum, and others like it, we characteristically fail to distinguish between the relative ease of learning from the already abstracted explicit and latent regularities in our environment and the fundamentally hard (and increasingly harder) problem of extracting novelty of pragmatic value from an exponentially expanding space of possibilities.
Therein lies the problem—and the opportunity—of increasingly effective agency within an environment of even more rapidly increasing uncertainty. There never was or will be safety or certainty in any ultimate sense, from the point of view of any (necessarily subjective) agent. So let us each embrace this aspect of reality and strive, not for safety but for meaningful growth.
A few posters might want to read up on Stochastic Resonance, which was surprisingly surprising a few decades ago. I’m getting a similar impression now from recent research in the field of Compressive Sensing, which ostensibly violates the Nyquist sampling limit, highlighting the immaturity of the general understanding of information theory.
In my opinion, there’s nothing especially remarkable here other than the propensity to conflate the addition of noise to data with the addition of “noise” (a stochastic element) to the search for data.
This confusion appears to map very well onto the cybernetic distinction between intelligently knowing the answer and intelligently controlling for the answer.
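For anyone who hasn’t played with it, here is a toy demonstration of the first case, the addition of noise to data (all parameters are arbitrary, chosen only to make the effect visible): a periodic signal too weak ever to cross a detector’s threshold becomes recoverable when moderate noise is added, and is lost again when the noise dominates.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 10_000)
signal = 0.4 * np.sin(2 * np.pi * t)  # subthreshold: never reaches 1.0
threshold = 1.0

def output_signal_correlation(noise_sd):
    """Correlate a 1-bit threshold detector's output with the hidden signal."""
    noisy = signal + rng.normal(0.0, noise_sd, t.size)
    out = (noisy > threshold).astype(float)  # crude threshold detector
    if out.std() == 0.0:                     # detector never fired at all
        return 0.0
    return np.corrcoef(out, signal)[0, 1]

for sd in (0.05, 0.3, 0.6, 3.0):
    print(f"noise sd={sd:4.2f}  correlation={output_signal_correlation(sd):.3f}")
# Near-zero correlation with almost no noise (nothing crosses threshold),
# a peak at moderate noise, and decay as noise swamps the signal.
```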
And that’s why I always say that the power of natural selection comes from the selection part, not the mutation part.
And the power of the internal combustion engine comes from the fuel part… Right, or at least, not even wrong. It seems that my congratulations a few months ago on your apparent imminent escape from simple reductionism were premature.
Above all else, be true to yourself. This doesn’t mean you must or should be bluntly open with everyone about your own thoughts and values; on the contrary, it means taking personal responsibility for applying your evolving thinking as a sharp instrument for the promotion of your evolving values.
Think of your values-complex as a fine-grained hierarchy, with some elements more fundamental and serving to support a wider variety of more dependent values. For example, your better health, both physical and mental, is probably more fundamental and necessary to support better relationships, and relatively few deeper relationships will tend to support a greater variety of subsidiary values than would a larger number of shallower relationships, and so on.
Of course no one can compute and effectively forecast the future in such complex terms, but to the extent you can clarify for yourself the broad outlines, in principle, of (1) your values and (2) your thinking on how to promote those values into the future you create, then you’ll tend to proceed in the direction of increasing optimality. Wash, rinse, repeat.
We wish you the best. Your efforts toward increasingly intelligent creation of an increasingly desirable world contribute to us all.
In my opinion, EY’s point is valid—to the extent that the actor and observer intelligences share neighboring branches of their developmental tree. Note that for any intelligence rooted in a common “physics,” this says less about their evolutionary roots and more about their relative stages of development.
Reminds me a bit of the jarred feeling I got when my ninth grade physics teacher explained that a scrambled egg is a clear and generally applicable example of increased entropy. [Seems entirely subjective to me, in principle.] Also reminiscent of Kardashev with his “obvious” classes of civilization, lacking consideration of the trend toward increasing ephemeralization of technology.
@pk I don’t understand. Am I too dumb or is this gibberish?
It’s not so complicated; it’s just that we’re so formal...
It might be worthwhile to note that cogent critiques of the proposition that a machine intelligence might very suddenly “become a singleton Power” do not deny the inefficiencies of the human cognitive architecture, which offer improvement via recursive introspection and recoding, nor do they deny the improvements easily available via substitution of more capable hardware and expanded I/O.
They do, however, highlight the distinction between a vastly powerful machine madly exploring vast reaches of a much vaster “up-arrow” space of mathematical complexity, and a machine of the same power whose growth of intelligence (by definition necessarily relevant) is bounded by starvation for relevant novelty in its environment of interaction.
If, Feynman-like, we imagine the present state of knowledge about our world in terms of a distribution of vertical domains, like silos, some broader with relevance to many diverse facets of real-world interaction, some thin and towering into the haze of leading-edge mathematical reality, then we can imagine the powerful machine quickly identifying and making a multitude of latent connections and meta-connections, filling in the space between the silos and even somewhat above—but to what extent, given the inevitable diminishing returns among the latent, and the resulting starvation for the novel?
Given such boundedness, speculation is redirected to growth in ecological terms, and the Red Queen’s Race continues ever faster.
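A toy model of that boundedness, with every rate assumed purely for illustration: let capability grow in proportion to itself and to a stock of latent, not-yet-mined connections, with the stock replenished only slowly by interaction with the environment. The early steps look explosive; then growth starves down to the replenishment rate.

```python
# Toy model: recursive improvement rate-limited by relevant novelty.
# All parameters are illustrative assumptions, not estimates.
capability = 1.0
latent = 100.0    # stock of latent connections between the "silos"
replenish = 0.05  # slow arrival of genuinely novel, relevant structure

for step in range(60):
    mined = min(latent, 0.1 * capability)  # can only mine what remains
    capability += mined                    # improvement consumes latent novelty
    latent = latent - mined + replenish    # environment replenishes slowly
    if step % 10 == 0:
        print(f"step {step:2d}  capability={capability:8.2f}  latent={latent:7.2f}")
# Early steps compound like an exponential; once the latent stock is
# exhausted, growth falls to the replenishment rate -- the Red Queen's pace.
```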
Frelkins and Marshall pretty well sum up my impressions of the exchange between Jaron and EY.
Perhaps pertinent, I’d suggest an essay on OvercomingBias on our unfortunate tendency to focus on the other’s statements, rather than focusing on a probabilistic model of the likelihood function generating those statements. Context is crucial to meaning, but must be formed rather than conveyed. Ironically—but reflecting the fundamentally hard value of intelligence—such contextual asymmetry appears to work against those who would benefit the most.
More concretely, I’m referring to the common tendency to shake one’s head in perplexity and say “He was so wrong; he didn’t make much sense at all,” in comparison with laughing and saying “I can see how he thinks that way, within his context (which I may have once shared).”
My (not so “fake”) hint:
Think economics of ecologies. Coherence in terms of the average mutual information of the paths of trophic I/O provides a measure of relative ecological effectiveness (absent prediction or agency). Map this onto the information I/O of a self-organizing hierarchical Bayesian causal model (with, for example, four major strata for human-level environmental complexity) and you should expect predictive capability within a particular domain, effective in principle, in relation to the coherence of the hierarchical model over its context.
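A minimal sketch of the measure in that hint (the toy joint distributions are invented for illustration): average mutual information of a single I/O path, computed directly from its joint distribution, as one might score each trophic link in an ecology or each stratum of such a hierarchical model.

```python
import numpy as np

def mutual_information(joint):
    """Average mutual information I(X;Y) in bits, from a joint distribution p(x, y)."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)  # marginal over inputs
    py = joint.sum(axis=0, keepdims=True)  # marginal over outputs
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Two hypothetical I/O paths: one coherent (output tracks input),
# one incoherent (output nearly independent of input).
coherent = np.array([[0.45, 0.05],
                     [0.05, 0.45]])
incoherent = np.array([[0.26, 0.24],
                       [0.24, 0.26]])

print(f"coherent path:   {mutual_information(coherent):.3f} bits")   # ~0.53
print(f"incoherent path: {mutual_information(incoherent):.3f} bits") # ~0.00
```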
As to comparative evaluation of the intelligence of such models without actually running them, I suspect this is similar to trying to compare the intelligence of phenotypical organisms by comparing the algorithmic complexity of their DNA.
@Tim Tyler: “That’s no reason not to talk about goals, and instead only mention something like “utility”.”
Tim, the problem with expected utility maps directly onto the problem with goals. Each is coherent only to the extent that the future context can be effectively specified (functionally modeled, such that you could interact with it and ask it questions, not to be confused with simply pointing to it). Applied to a complexly evolving future of increasingly uncertain context, due to combinatorial explosion but also due to critical underspecification of priors, we find that ultimately (in the bigger picture) rational decision-making is not so much about “expected utility” or “goals” as it is about promoting a present model of evolving values into one’s future, via increasingly effective interaction with one’s (necessarily local) environment of interaction. Wash, rinse, repeat. Certainty, goals, and utility are always only a special case, applicable to the extent that the context is adequately specifiable. This is the key to so-called “paradoxes” such as the Prisoner’s Dilemma and Parfit’s Repugnant Conclusion as well.
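To put the “special case” point in deliberately trivial terms (the actions and numbers below are arbitrary): expected utility is well-defined exactly when the outcome space, with probabilities and utilities attached, can be enumerated in advance. Remove that specification and the maximization has nothing to operate on; the formalism doesn’t degrade gracefully, it simply doesn’t apply.

```python
# Expected utility presumes a specified context: an enumerable outcome
# space with probabilities and utilities attached to each action.
def expected_utility(action, context):
    """context: {action: [(probability, utility), ...]}"""
    return sum(p * u for p, u in context[action])

specified_context = {
    "cooperate": [(0.6, 3.0), (0.4, 0.0)],
    "defect":    [(0.6, 5.0), (0.4, 1.0)],
}
best = max(specified_context, key=lambda a: expected_utility(a, specified_context))
print(best)  # well-defined, because the context is fully enumerated

# For a complexly evolving future, no such table exists to enumerate;
# there is no second snippet to write, and that absence is the point.
```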
Tim, this forum appears to be over-heated and I’m only a guest here. Besides, I need to pack and get on my motorcycle and head up to San Jose for Singularity Summit 08 and a few surrounding days of high geekdom.
I’m (virtually) outta here.
@Eliezer: There’s emotion involved. I enjoy calling people’s bluffs.
Jef, if you want to argue further here, I would suggest explaining just this one phrase “functional self-similarity of agency extended from the ‘individual’ to groups”.
Eliezer, it’s clear that your suggestion isn’t friendly, and I intended not to argue but rather to share and participate in building better understanding. But you’ve turned it into a game which I can either play or allow you to use against me. So be it.
The phrase is a simple one, but stripped of context, as you’ve done here, it may indeed appear meaningless. So to explain, let’s first restore context.
Your essay, Which Parts are “Me”, highlighted some interesting and significant similarities—and differences—in our thinking. Interesting, because they match an epistemological model I held tightly and would still defend against simpler thinking, and significant, because a coherent theory of self, or rather agency, is essential to a coherent meta-ethics.
So I wrote (after trying to establish some similarity of background):
“At some point about 7 years later (about 1985) it hit me one day that I had completely given up belief in an essential “me”, while fully embracing a pragmatic “me”. It was interesting to observe myself then for the next few years; every 6 months or so I would exclaim to myself (if no one else cared to listen) that I could feel more and more pieces settling into a coherent and expanding whole. It was joyful and liberating in that everything worked just as before, but I had to accommodate one less hypothesis, and certain areas of thinking, meta-ethics in particular, became significantly more coherent and extensible. [For example, a piece of the puzzle I have yet to encounter in your writing is the functional self-similarity of agency extended from the “individual” to groups.]”
So I offered a hint, of an apparently unexplored (for you) direction of thought, which, given a coherent understanding of the functional role of agency, might benefit your further thinking on meta-ethics.
The phrase represents a simple concept, but rests on a subtle epistemic foundation which, as Mathew C pointed out, tends to bring out vigorous defenses in support of the Core Self. Further to the difficulty, an epistemic foundation cannot be conveyed, but must be created in the mind of the thinker, as described pretty well recently by Melzer in a paper that “stunned” Robin Hanson, entitled Pedagogical Motives for Esoteric Writing. So the phrase is simple, but the meaning depends on background, and along the road to acquiring that background there is growth.
To break it down: “Functional self-similarity of agency extended from the ‘individual’ to groups.”
“Functional” indicates that I’m referring to similarity in terms of function, i.e., relations of output to input, rather than, e.g., similarities of implementation, structure, or appearance. More concretely [I almost neglected to include the concrete], I’m referring to the functional aspects of agency: in essence, action on behalf of perceived interests (an internal model of some sort), in relation to which the agent acts on its immediate environment so as to (tend to) null out any differences.
“Self-similarity” refers to some entity replicated, conserved, re-used over a range of scale. More concretely, I’m referring to patterns of agency which repeat—in functional terms, even though the implementation may be quite different in structure, substrate, or otherwise.
“Extended from the individual to groups” refers to the scale of the subject; in other words, functional self-similarity of agency is conserved over increasing scale, from the common and popularly conceived case of individual agency, extending to groups, groups of groups, and so on. More concretely, I’m referring to the essential functional similarities, in terms of agency, which are conserved when a model scales, for example, from an individual human acting on its interests, to a family acting on its interests, to a tribe, company, non-profit, military unit, city-state, etc., especially in terms of the dynamics of its interactions with entities of similar (functional) scale, but also with regard to the internal alignments (increasing coherence) of its own nature due to selection for “what works.”
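If the phrase reads more clearly as an interface than as prose, here is a minimal sketch (the class names, the averaging of member interests, and the difference-nulling loop are all my illustrative assumptions): an agent is anything that acts to null the difference between its environment and its internal model of its interests, and a group of such agents satisfies the very same interface, so the pattern composes over scale.

```python
class Agent:
    """Acts on its environment to null the difference between that
    environment and an internal model of its perceived interests."""
    def __init__(self, interests):
        self.interests = interests  # internal model: {variable: desired value}

    def act(self, environment):
        for var, desired in self.interests.items():
            gap = desired - environment.get(var, 0.0)
            environment[var] = environment.get(var, 0.0) + 0.5 * gap  # partial nulling
        return environment

class GroupAgent(Agent):
    """A group of agents is itself an agent: same function (difference-
    nulling), different implementation -- self-similar over scale."""
    def __init__(self, members):
        # The group's effective interests here are a simple average of its
        # members' -- a stand-in for whatever internal alignment
        # (selection for "what works") actually produces.
        keys = {k for m in members for k in m.interests}
        super().__init__({k: sum(m.interests.get(k, 0.0) for m in members) / len(members)
                          for k in keys})

# Individual, family of individuals, tribe of families: one interface.
alice, bob = Agent({"food": 10.0}), Agent({"shelter": 8.0})
family = GroupAgent([alice, bob])
tribe = GroupAgent([family, GroupAgent([Agent({"food": 6.0})])])
print(tribe.act({"food": 2.0, "shelter": 1.0}))  # {'food': 3.75, 'shelter': 1.5}
```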
As you must realize, regularities observed over increasing scale tend to indicate an increasingly profound principle. That was the potential value I offered to you.
In my opinion, the foregoing has a direct bearing on a coherent meta-ethics, and is far from “fake”. Maybe we could work on “increasing coherence with increasing context” next?
Mathew C: “And the biggest threat, of course, is the truth that the self is not fundamentally real. When that is clearly seen, the gig is up.”
Spot on. That is by far the biggest impasse I have faced anytime I try to convey a meta-ethics denying the very existence of the “singularity of self” in favor of the self of agency over increasing context. I usually downplay this aspect until after someone has expressed a practical level of interest, but it’s right there out front for those who can see it.
Thanks. Nice to be heard...
Based on the disproportionate reaction from our host, I’m going to sit quietly now.
@Cyan: ”… you’re going to need more equations and fewer words.”
Don’t you see a capital sigma representing a series every time I say “increasingly”? ;-)
Seriously though, I read a LOT of technical papers, and it seems to me that many of the beautiful LaTeX equations and formulas serve only to give the impression of rigor. And there are few equations that could “prove” anything in this area of inquiry.
What would help my case, if it were not already long lost in Eliezer’s view, is to have provided examples, references, and commentary along with each abstract formulation. I lack the time to do so, so I’ve always considered my “contributions” to be seeds of thought to grow or not depending on whether they happen to find fertile soil.