Listen to actual conversation sometime; most of it is excruciatingly boring if you think about it in terms of information. But as other posters have pointed out, most conversation is about social bonding, not exchanging information.
billswift
Or for representing phenomena in an altered “format”. For example, I have read a description of the bimetallic spring in a thermostat as a model of the room’s temperature, presented in a form the furnace can make use of.
Humans normally get away with their biases by not examining them closely, and, when the biases are pointed out to them, by denying that they personally are biased. Willful ignorance and denial of reality seem to be two of the most common human mental traits.
That has a link to a new article by Sylvia Engdahl, who has written on the importance of space for years: http://www.sylviaengdahl.com/space.htm
I think this would be the most useful, even if it were only partially completed, since even a partial database would help greatly both with finding previously unrecognized biases and with the logic-checking AI. It may even make the latter possible without the natural language understanding that Nancy thinks would likely be needed for it.
I criticize FAI because I don’t think it will work. But I am not at all unhappy that someone is working on it, because I could be wrong, or their work could contribute to something else that does work even if FAI doesn’t (serendipity is the inverse of Murphy’s law). Nor do I think they should spread their resources too thin by trying to work on too many different ideas. I just think LessWrong should act more as a clearinghouse for other, parallel ideas, such as intelligence amplification, that may prevent a bad Singularity in the absence of FAI.
Everybody does that anyway; it is usually called second-guessing yourself. The best rule is not to decide under pressure unless you really have to; take the time to think things through.
it depends upon your past self having more information than your current self.
Or maybe you just spent more time thinking it through before. “Never doubt under pressure what you have calculated at leisure.” I think that previous states should have some influence on your current choices. As the link says:
If your evidence may be substantially incomplete you shouldn’t just ignore sunk costs—they contain valuable information about decisions you or others made in the past, perhaps after much greater thought or access to evidence than that of which you are currently capable.
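The idea that sunk costs can carry information can be sketched numerically. In this illustrative Python sketch (the numbers and the inverse-variance weighting scheme are my own assumptions, not from the original discussion), a past decision made after careful deliberation is treated as a low-noise estimate of a project's value, and combined with a hasty, high-noise current estimate:

```python
def combine_estimates(estimates):
    """Combine independent noisy estimates, each a (value, variance)
    pair, by inverse-variance weighting; returns (mean, variance)."""
    precision = sum(1.0 / var for _, var in estimates)
    mean = sum(val / var for val, var in estimates) / precision
    return mean, 1.0 / precision

# Hypothetical numbers: your past self, after long deliberation,
# committed resources, implying the project was worth about +10
# (small variance).  Your current snap judgment says -2, but it was
# made under pressure (large variance).
past_implied = (10.0, 4.0)    # careful estimate: low variance
current_snap = (-2.0, 25.0)   # hasty estimate: high variance

mean, var = combine_estimates([past_implied, current_snap])
# The combined estimate remains positive: the carefully made past
# decision outweighs the hasty current doubt.
```

The point is not the particular weighting formula but the structure: a past commitment made with better evidence or more deliberation enters the calculation as evidence in its own right, rather than being discarded as a "sunk cost."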
I see you found yet another problem: with no way to get more utilons, you die when those in the box are used up. And utility theory says you need utility to live, not just to give you reason to live.
There are no other ways to get utilons.
is a weakness in your argument. Either you can survive without utilons, contradicting utility theory, or you merely wait until your “pre-existing” utilons are used up and you need more to survive.
Even worse, unlike your examples, rationality isn’t a single, focused “skillset”, but a broad collection of barely related skills. Learning to avoid base rate neglect helps little if at all with avoiding honoring sunk costs, which helps little with avoiding the narrative fallacy. You need to tackle them almost independently. That is one reason why I tend to emphasize the need to stop and think, when you can. Even if you have not mastered the particular fallacy that may be about to trip you up, you are more likely to notice a potential problem if you get in the habit of thinking through what you are doing.
they are always too brittle and inflexible to carry you on in any meaningful, long-term sort of way.
What you need to do is to capture it, then use it to help you take the next step; then keep taking those next steps.
The very first thing you need to do is to STOP reading, write down whatever caused your epiphany, and think about the next step. Too much of the self-help and popular psychology literature is written like a story, which, while making it more readable and more likely to be read, tends to encourage readers to keep reading straight through to the end. If you are reading for change, you need to read it like a textbook: for the information, rather than for entertainment.
Studies questioning the effectiveness of preventive medicine aren’t new; they have been published repeatedly for decades. I have read several myself, the earliest in 1993. And of course there is the RAND study that Robin discussed repeatedly.
I don’t know if it will help you develop a helpful phrase, but another thing to keep in mind is that the link between the information you have and the problem you want to solve is often not obvious. You often need to play around with the information before you can figure out how it can be used to solve the problem.
And the complexity of real-world problems can confuse the issue even more, so it helps first to simplify or generalize the problem so you can see what its core actually is.
Next we come to what I’ll call the epistemic-skeptical anti-intellectual. His complaint is that intellectuals are too prone to overestimate their own cleverness and attempt to commit society to vast utopian schemes that invariably end badly. Where the traditionalist decries intellectuals’ corrosion of the organic social fabric, the epistemic skeptic is more likely to be exercised by disruption of the signals that mediate voluntary economic exchanges. This position is often associated with Friedrich Hayek; one of its more notable exponents in the U.S. is Thomas Sowell, who has written critically about the role of intellectuals in society.
From Eric Raymond
I’ve about given up on LW; more than half the people here, judging from surveys, believe in socialism, or the socialism-lite of modern liberalism, a belief system on a par with Creationism. Economics may not be as scientific as biology, but it is the most reliable of the social sciences, and economic socialism denies economics exactly as Creationism denies biology.
Economic libertarianism is how things actually work; socialism, of all styles and degrees, is to economics as Creationism is to biology. It is a political attempt to make the real world conform to wishful thinking. Political libertarianism is the refusal to condone that attempt to evade reality. It is also the recognition that other forms of freedom are just as important in other areas of human relations, even if they are not as easily quantifiable as economics.
Libertarianism in the real world is far from perfect, of course. One failure of libertarianism is to clearly define fundamental versus derived effects and their importances. The “market worshiping” libertarians celebrate any effect caused by a free market whether it is good or not. The problem is that most of what they notice are derivative effects, what the market makes available. The fundamental benefit of free markets, though, is in the freedom granted creators, without which hardly any of the goods would be available in the first place. A key document describing, and celebrating, the “market worship” perversion is Virginia Postrel’s The Substance of Style: How the Rise of Aesthetic Value Is Remaking Commerce, Culture, and Consciousness. I once, in my pre-Internet days, started an essay in response, “Why Style Lacks Substance, or The Value of Free Markets is in Opportunity it Provides, not in What it Rewards.”
Another libertarian perversion is the “libertinist” position; its adherents can usually be recognized by the outsized emphasis they place on recreational drugs, pornography, and entertainment. Not that these should be controlled, but they are definitely secondary, in the real world, to production and distribution.
“Politics is the mindkiller” is an irrational mantra from those attempting to defend their irrational beliefs. Intelligence far too often simply makes it easier for people to rationalize whatever they want to believe in.
It’s that particular kinds of brain damage can take away particular mental abilities, and there’s a consistent correlation between the damage to the brain and the damage to the mind.
And particular damage to a radio receiver distorts the received signal in particular ways. So that argument isn’t much help.
OTOH, there are downsides to being too secure: you’re less likely to be kidnapped, but it’s likely to be worse if you ARE.
Indeed, for a recent, real world example, the improvement in systems to make cars harder to steal led directly to the rise of carjacking in the 1990s.
If you read the second sentence, I do too; it’s just a very weak disadvantage when compared to almost any suffering. If I didn’t consider it at least somewhat disadvantageous, I wouldn’t be around now to write about it.
I think a working AGI is more likely to result from expanding or generalizing a working driverless car than from an academic program somewhere. A program to improve the “judgement” of a working narrow AI strikes me as a much more plausible route to AGI.