In other words you try to legislate your actions. But your subconscious will find loopholes and enforcement will slip.
The proper use of regret?
Why not both useful beliefs and useful emotions?
Why privilege beliefs?
anchoring for coordination
Hi. First of all thanks for the immensely helpful summary of the literature!
Since you have gone through so much of the literature, I was wondering if you have come across any theories about the functional role of happiness?
I’m currently only aware of Kaj Sotala’s post some time ago about how happiness regulates risk-taking. I personally think happiness does this because risk-taking is socially advantageous for high-status folks. The theory is that happiness is basically a behavioural strategy employed by those who have high status. As in, happiness is performed, not pursued. Depression and anxiety would be the opposite of happiness. I remember some studies showing how, in primates, the low-status ones exhibit depression-like and anxious behavior.
It may simply be my ignorance of the literature, but it seems strange that all these (otherwise wonderful) empirical investigations into happiness are motivated only by a common folk theory of its function.
Is it purely a numbers game though? Most people have this thing nerdy academics call a ‘mate value sociometer’, which they use to help decide how attractive a woman to pursue. Of course, this sociometer has to be calibrated, so you really want to be rejected often enough to know where you stand. My point is, it might be better to keep this sociometer in mind (especially since non-neurotypicals tend not to have this instinct): at first, target your proposals to be as informative as possible, and later on target those girls your mate value can buy. (This is in fact what studies have found neurotypicals to be doing.)
Edinburgh LW Meetup Saturday April 16th
Even a lie is a psychic fact. --Carl Jung (1875–1961)
I find that helpful in reminding myself that beliefs, even false ones, can be causal.
Question about Large Utilities and Low Probabilities
I’m not so sure we accord Kaj less status overall for having taken more years to graduate, and more status for helping Eliezer write that book. Are we so sure we do? We might think so, and then reveal otherwise by our behavior.
In those examples, it seems to me they were mistaken about how they perceived something rather than what they perceived: the ‘implementation detail’ of the experience, rather than its content.
Most of the time we just experience things, and we don’t think about which modality we experience them through. This is not surprising: unless it is explicitly called for, such knowledge would be quite useless most of the time. When you are blind, there likely comes a time when you wonder, or are asked, how you manage to navigate as well as you do. Here you will apply some lousy introspection, and your brain will serve up some lousy post-hoc ‘explanation’. Thereafter, that hypothesis will simply become an additional belief you have about what is going on with your perception.
Of course, modality seems quite intrinsic to various qualia. ‘Red’ is obviously a visual thing; ‘birds chirping’ obviously an auditory thing. But the understanding of ‘red’ as visual is a metacognitive process separate from the visual experience of ‘red’ itself. For example, you expect ‘red’ to be amenable to being painted on the surface of an object, when the same is not possible for ‘chirping’. So no, the modality is not part of the content of the subjective experience.
I would put it this way: You can be wrong about what your experience is referring to out there in the world or elsewhere in your body or mind. But you cannot be wrong about the contents of your immediate experience.
Hmm, I don’t happen to find your argument very convincing. What it does is pay attention to some aspect of the original mistaken statement, then find another instance sharing that aspect which is transparently ridiculous.
But is this sufficient? You can model the statement “apples and oranges are good fruits” in predicate logic as “for all x, Apple(x) or Orange(x) implies Good(x)”, in propositional logic as “A and O”, or even just as “Z”. Which you choose should depend on what aspect of the original statement you want to get at: you want a model which captures precisely those aspects you intend to work with.
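The point about choosing a model at the right granularity can be sketched in code (a toy illustration; all the names, the domain, and the stipulated “goodness” rule are hypothetical):

```python
# The same statement, "apples and oranges are good fruits",
# modeled at three different granularities.

# Predicate-logic model: for all x, Apple(x) or Orange(x) implies Good(x)
fruits = ["gala apple", "navel orange", "lemon"]
is_apple = lambda x: "apple" in x
is_orange = lambda x: "orange" in x
is_good = lambda x: is_apple(x) or is_orange(x)  # stipulated for the example

predicate_model = all(is_good(x) for x in fruits if is_apple(x) or is_orange(x))

# Propositional model: A = "apples are good", O = "oranges are good"
A, O = True, True
propositional_model = A and O

# Atomic model: the whole claim as one opaque proposition Z
Z = True

# All three agree on the coarse question "is the statement true?"...
assert predicate_model == propositional_model == Z
# ...but only the finer models support finer questions, e.g.
# "is this particular apple good?" requires the predicate model.
```

The design point is simply that no one of these is the “correct” formalization; each exposes a different aspect of the original sentence to further reasoning.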
So your various variables actually confused the hell outta me there. I was trying to match them up with the original statement and your reductio example, all the while not really understanding which was relevant to the confusion. It wasn’t a pleasant experience :(
It seems to me much simpler to simply answer: “Turing machine-ness has no bearing on moral worth”. This I think gets straight to the heart of the matter, and isolates clearly the confusion in the original statement.
Or further guess at the source of the confusion, the person was trying to think along the lines of: “Turing machines, hmm, they look like machines to me, so all Turing machines are just machines, like a sewing machine, or my watch. Hmm, so humans are Turing machines, but by my previous reasoning this implies humans are machines. And hmm, furthermore, machines don’t have moral worth… So humans don’t have moral worth! OH NOES!!!”
Your argument seems like one of those long math proofs which I can follow step by step but cannot grasp its overall structure or strategy. Needless to say, such proofs aren’t usually very intuitively convincing.
(but I could be generalizing from one example here)
Deontology treats morality as terminal. Consequentialism treats morality as instrumental.
Is this a fair understanding of deontology? Or is this looking at deontology through a consequentialist lens?
Supernaturalism is a distraction. Theologians defend supernaturalism as an indirect way of defending whatever God they want to believe in. See http://www.uncrediblehallq.net/2011/06/24/atheism-is-just-thinking-there-arent-any-gods/.
The sequences are not specifically tailored to convince people of atheism. They are rather a more general set of tools for reasoning about the world. So don’t over-ascribe relevance to atheism to many of the philosophical ideas you see in there.
Err, no! He says that ‘real’ means something like causally accessible from where we are. It’s something like “from my perspective I am real, but from the perspective of a fictional-me in a fictional universe, I am not, while the fictional-me is real”. Except this is not a very helpful way to define ‘real’. There is no meta-realness, and relativistic realness is quite useless. Drescher dissolves the issue by reducing ‘real’ to something like “whatever we can possibly get at from where we are in this universe”.
According to this paper, the skills that correlate most highly with g are those with the lowest environmental variance, working memory being the best illustration: its correlation is so high that some researchers want to equate it with g.
According to this paper, genetic variance in intelligence is maintained by mutation-selection balance. This means it is a quantitative trait with a large number of tiny genetic factors influencing its overall value, making it a good fitness indicator. Hence we can think of intelligence as overall mental condition/health. It is unlikely that intelligence has any one underlying cause or mechanism, or even a few with large influence.
So you have two strategies for a good measure of intelligence: tasks with low environmental variance, and tasks which tap diverse mental skills. This is pretty much what the various existing IQ tests have set out to do.
As for success in various pursuits, I say rely on your overall assessment of the person’s intelligence. Of course, don’t forget creativity, discipline, drive, etc., which can be equally important. Beyond this, you’d have to go into the specific details of the particular pursuit: perhaps it requires specialized mental skills, a quirky psychological profile, etc.
As for training intelligence, forget it. Even transfer of learning doesn’t work. You would do best to focus your training on the specific skills integral to the tasks involved in achieving your goals.
I would argue that any humans that had this bug in their utility function have (mostly) failed to reproduce, which is why most existing humans are opposed to wireheading.
Why would evolution come up with a fully general solution against such ‘bugs in our utility functions’?
Take addiction to a substance X. Evolution wouldn’t give us a psychological capacity to inspect our utility functions and to guard against such counterfeit utility. It would simply give us a distaste for substance X.
My guess is that we have some kind of self-referential utility function. We do not only want what our utility functions tell us we want. We also want utility (happiness) per se. And this want is itself included in that utility function!
When thinking about wireheading, I think we are judging a tradeoff between mere happiness and the states of affairs which we prefer (not including happiness).
“Beliefs shoulder the burden of having to reflect the territory, while emotions don’t.”
This is how I have come to think of beliefs. It’s like refactoring code: you should do it when you spot regularities you can eke efficiency out of, but only if it does not make the code unwieldy or unnatural, and only if it does not make the code fragile. Beliefs work the same way. When your rules of thumb seem to respect some regularity in reality, I’m perfectly happy to call that “truth”, so long as it does not break my tools.
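The refactoring analogy can be made concrete with a toy example (purely illustrative; the functions and numbers are made up):

```python
# Before: two separate rules of thumb that happen to agree.
def shipping_cost_book(weight_kg):
    return 2.0 + 1.5 * weight_kg

def shipping_cost_mug(weight_kg):
    return 2.0 + 1.5 * weight_kg

# After: we notice the regularity and "refactor" the two rules into
# one general rule, i.e. one belief, covering both cases.
def shipping_cost(weight_kg):
    # Flat fee plus per-kg rate, same as both special-case rules above.
    return 2.0 + 1.5 * weight_kg

# The refactor is only safe because nothing breaks: the general rule
# gives the same answers as the rules of thumb it replaced.
assert shipping_cost(1.0) == shipping_cost_book(1.0) == shipping_cost_mug(1.0)
```

The analogy: the collapse into one general rule is worthwhile only as long as the general rule still returns the same answers the special-case rules did.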
Perhaps not what most religious folks would call its ‘essence’ (part of the problem is that they won’t admit this), but certain religion-based social norms which are still relevant in today’s world.
House: There’s never any proof. Five different doctors come up with five different diagnoses based on the same evidence.
Cuddy: You don’t have any evidence. And nobody knows anything, huh? How is it you always think you’re right?
House: I don’t. I just find it hard to operate on the opposite assumption.