The participants don’t know the rules, and have been given a hint that they don’t know them: the host said that the choices would be independent/hidden, but is then telling you the other contestant’s choice. So they can reasonably assume some chance that the host is lying, or that he might then give the first contestant a chance to switch his choice, etc.
PeterisP
Sorry for intruding on a very old post, but checking ‘people-random’ integers modulo 2 is worse than flipping a coin—when asked for a random number, people tend to choose odd numbers more often than even numbers, and prime numbers more often than non-primes.
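A tiny simulation makes the point concrete. The 70/30 odd/even split below is a made-up illustrative figure (the measured bias varies by study), and `human_pick` is a hypothetical stand-in for a person:

```python
import random

# Hypothetical stand-in for a person asked to "pick a random number".
# The 70% odd bias is made up for illustration, not a measured figure.
def human_pick():
    if random.random() < 0.7:
        return random.choice([1, 3, 5, 7, 9])   # odd picks, 70% of the time
    return random.choice([2, 4, 6, 8])          # even picks, 30% of the time

random.seed(0)
picks = [human_pick() % 2 for _ in range(100_000)]
print(sum(picks) / len(picks))  # ~0.7: the parity bit inherits the bias, vs. a fair coin's 0.5
```

Whatever bias the raw choices have flows straight through the mod-2 reduction, which is exactly why it’s worse than an actual coin.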
What are the useful areas of AI study?
The income question needs to be explicit about whether it’s pre-tax or post-tax, since the difference is huge and the “default measurement” differs between cultures: in some places “I earn X” means pre-tax, and in others it means post-tax.
My [unverified] intuition on AI properties is that the delta between the current state of the art and ‘IQ60AI’ is multiple orders of magnitude larger than the delta between ‘IQ60AI’ and ‘IQ180AI’. In essence, there is not that much “mental horsepower” difference between the stereotypical Einstein and a below-average person; it doesn’t require a much larger brain, completely different neuronal wiring, or a million years of evolutionary tuning.
We don’t know how to get to IQ60AI; but getting from IQ60AI to IQ180AI could (IMHO) be done with currently known methods in many labs around the world by the current (non-IQ180) researchers, rapidly (a ballpark of 6 months, maybe?). We know from history that an IQ-0 process can optimize from monkey-level intelligence to an Einstein by brute force. So, in essence, if you’ve got IQ70 minds that can be rapidly run and simulated, then just apply more hardware (for more time-compression) and optimization; that gap seems to require exactly 0 significant breakthroughs to get to IQ180.
I see MOOCs as a big educational improvement because of this—sure, I could get the same educational info without the MOOC structure, just by reading the field’s best textbooks and academic papers; but having a specific “course” with quizzes/homework makes me actually do the exercises, which I wouldn’t have done otherwise, and the course schedule forces me to do them now, instead of postponing them for weeks/months/forever.
I’ve worn full-weight chain and plate reconstruction items while running around for a full day, and I’m not physically fit at all—I’d say a random geeky 12-year-old boy could easily wear a suit of armor, the main wizard-combat problems being getting winded very, very quickly when running (so they couldn’t rush the same way Draco’s troops did), and slightly slowed-down arm movement, which might hinder combat spellcasting. It isn’t said how long the battles last—if less than an hour, there shouldn’t be any serious hindrance; if longer, the boys would probably want to sit down and rest occasionally, or use some magic to lighten the load.
The difference is that there are many actions that help other people but don’t give an appropriate altruistic high (because your brain doesn’t see or relate to those people much) and there are actions that produce a net zero or net negative effect but do produce an altruistic high.
The built-in care-o-meter of your body has known faults and biases, and it measures something often related to actually caring about other people (at least in the classic hunter-gatherer society model), but generally different from it.
I’m going to go out on a limb and state that the chosen example of “middle school students should wear uniforms” fails the prerequisite of “confidence in the existence of objective truth”, as do many (most?) “should” statements.
I strongly believe that there is no objectively true answer to the question “middle school students should wear uniforms”, as the truth of that statement depends not so much on one’s understanding of the world or opinion about student uniforms as on the interpretation of what “should” means.
For example, “A policy requiring middle school students to wear uniforms is beneficial to the students” is a valid topic of discussion that can uncover some truth, and “A policy requiring middle school students to wear uniforms is mostly beneficial to [my definition of] society” is a completely different topic of discussion that likely can result in a different or even opposite answer.
Unqualified “should” statements are a common trap that prevents reaching a common understanding and exploring the truth. At the very least, you should clearly distinguish “should” as good, informed advice from “should” as a categorical moral imperative. If you want to discuss whether “X should do Y” in the sense of discussing the advantages of doing Y (or not), then you should (see what I’m doing here?) convert it to a statement of the form “X should do Y because that’s a dominant/better/optimal choice that benefits them”. Otherwise you won’t get what you want, just an argument between a camp arguing this question and a camp arguing about why we should/shouldn’t force X to do Y because everyone else wants it.
Dolphins are able to herd schools of fish, cooperating to keep a ‘ball’ of fish together for a long time while feeding from it.
However, taming and sustained breeding are a long way from herding behavior—they require long-term planning over multi-year time periods, and I’m not sure that has been observed in dolphins.
Well, I fail to see any need for backward-in-time causation to get the prediction right 100 out of 100 times.
As far as I understand, similar experiments have been performed in practice, and Homo sapiens are quite cleanly split into two groups, ‘one-boxers’ and ‘two-boxers’, who generally have strong preferences for one option or the other due to differences in their education, experience with logic, genetics, reasoning style, or whatever other factors are somewhat stable and specific to the individual.
Having perfect predictive power (or even the possibility of it existing) is implied and suggested, but it’s not really given, it’s not really necessary, and IMHO it’s not possible and not useful to use this ‘perfect predictive power’ in any reasoning here.
From the given data in the situation (the 100 out of 100 that you saw), you know that Omega is a super-intelligent sorter who somehow manages to achieve 99.5%-or-better accuracy in sorting people into one-boxers and two-boxers.
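As a rough sketch of how strong that evidence is: assuming a uniform prior over Omega’s per-person accuracy, a 100-for-100 track record gives a Beta(101, 1) posterior, whose mean is Laplace’s rule of succession:

```python
# Posterior over Omega's accuracy p after 100 correct predictions in 100 trials,
# assuming a uniform Beta(1, 1) prior: the posterior is Beta(101, 1).
successes, trials = 100, 100
posterior_mean = (successes + 1) / (trials + 2)  # Laplace's rule of succession
print(posterior_mean)  # ~0.990

# P(p > x | 100/100) = 1 - x**101, since the Beta(101, 1) CDF is x**101.
for x in (0.9, 0.95, 0.99):
    print(x, 1 - x ** 101)
```

Under this (admittedly simplistic) prior, the chance that Omega’s true accuracy exceeds 95% is over 99%, which is enough for the decision analysis below.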
This accuracy also seems higher than the accuracy of most (all?) people’s self-evaluation; i.e., as in many other decision scenarios, there is a significant difference between what people believe they would decide in situation X and what they actually decide when it happens [citation might be needed, but I don’t have one at the moment; I do recall reading papers about such experiments]. The ‘everybody is a perfect logician/rationalist and behaves as such’ assumption often doesn’t hold up in real life, even for self-described perfect rationalists who make a strong conscious effort to do so.
In effect, the data suggests that Omega probably knows your traits and decision chances (taking into account you taking all of this into account) better than you do—it’s simply smarter than Homo sapiens. Assuming that this is really so, it’s better for you to choose option B. Assuming that this is not so, and you believe that you can out-analyze Omega’s perception of yourself, then you should choose the opposite of whatever Omega would predict of you (gaining 1,000,000 instead of 1,000, or 1,001,000 instead of 1,000,000). If you don’t know what Omega knows about you—then you don’t get this bonus.
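The payoff comparison can be written out explicitly. This sketch assumes the standard Newcomb amounts ($1,000 visible, $1,000,000 in the predicted box) and treats Omega’s accuracy p as a free parameter:

```python
# Expected value of each strategy given Omega's predictive accuracy p,
# with the standard $1,000 / $1,000,000 Newcomb payoffs.
def ev_one_box(p):
    return p * 1_000_000                 # box B is full iff Omega predicted one-boxing

def ev_two_box(p):
    return 1_000 + (1 - p) * 1_000_000   # the $1,000 is certain; box B is full only if Omega erred

for p in (0.5, 0.9, 0.995):
    print(p, ev_one_box(p), ev_two_box(p))
# One-boxing wins for any accuracy p above ~0.5005.
```

The crossover sits just above p = 0.5005, which is why even a modestly reliable predictor, let alone a 100-for-100 one, makes one-boxing the better bet.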
An RSS feed for new posts is highly desirable—I don’t generally go to websites “polling” for new information that may or may not be there (unless, e.g., I’m returning to a discussion that I had yesterday), so a “push mechanism” such as RSS is essential to me.
A gold-ingot-manufacturing maximizer can easily manufacture more gold than exists in its star system by using arbitrary amounts of energy to create gold, starting with simple nuclear reactions to transmute bismuth or lead into gold, and ending with a direct energy-to-matter-to-gold-ingots process.
Furthermore, if you plan to send copies-of-you to N other systems to manufacture gold ingots there, then as long as there is free energy, you can send N+1 copies-of-you. A gold-ingot manufacturing rate that grows proportionally to time^(N+1) is much faster than one that grows as time^N, so sending only N copies wouldn’t be maximizing.
And a third point: if it’s possible that somewhere in the universe there are some ugly bags of mostly water that prefer to use their atoms and energy not for manufacturing gold ingots but for their survival, then it’s very important to ensure that they don’t grow strong enough to prevent you from maximizing gold-ingot manufacturing. Speed is of the essence: you must reach them before it’s too late, or gold-ingot manufacturing won’t get maximized.
Actually, how should one measure one’s own IQ? I wouldn’t know a reasonable place to start looking, as the internet is full of advertising for IQ measurements, i.e., lots of intentional misinformation. I’d especially avoid anything restricted to a single location like the USA—that makes the SATs useless, well, at least for me.
“How is it that the AGI is smart enough to learn all this by itself, yet fails to notice that there are rules to follow?”—because there is no reason for an AGI to automagically create arbitrary restrictions if they aren’t part of the goal or superior to the goal. For example, I’m quite sure that F1 rules prohibit interfering with drivers during a race; but if somehow a silicon-reaction-speed AGI can’t win F1 by default, then it may find it simpler/quicker to harm the opponents in one of the infinitely many ways that the F1 rules don’t cover—say, getting some funds through financial arbitrage, buying out the other teams and firing all the good drivers, or engineering a virus that halves the reaction speed of all Homo sapiens—and then it would be happy, as the goal is achieved within the rules.
Newcomb’s problem doesn’t specify how Omega chooses its ‘customers’. It’s quite a realistic possibility that it simply has not offered the choice to anyone who would use a randomizer, and has cherry-picked only the people who have at least 99.9% ‘prediction strength’.
“Are we political allies, or enemies?” is rather orthogonal to that—your political allies are those whose actions support your goals and your political enemies are those whose actions hurt your goals.
For example, a powerful and popular extreme radical member of the “opposite” camp, who reaches conclusions you disagree with, uses methods you disagree with, and is generally toxic and spewing hate—that’s often a prime example of a political ally, one whose actions incite the moderate members of society to start supporting you and focusing on your important issues instead of something else. The existence of such a pundit is important to you; you want them to keep doing what they do and have their propaganda actions be successful, up to a point. I won’t go into examples of particular politicians/parties of various countries, as that gets dirty quickly, but many strictly opposed radical groups are actually allies in this sense against the majority of moderates; and sometimes they actively coordinate and cooperate despite the ideological differences.
On the other hand, consider a public speaker who targets the same audience as you, shares the same goals/conclusions and the same intended methods to achieve them, but simply performs consistently poorly—using sloppy arguments that alienate part of the target audience, or displaying disgusting personal behavior that hurts the image of your organization. That’s a good example of a political enemy, one that you must work to silence, to get ignored and not heard, despite their being “aligned” with your conclusions.
And of course, a political competitor who does everything you want to do but holds a chair/position that you want for yourself is also a political enemy. Infighting inside powerful political groups is a normal situation, and when (and if) it goes public, very interesting political arguments appear to distinguish oneself from one’s political enemies despite sharing most of the platform.
Why not?
Of course, the best proportion would be 100% of people telling me that p(the_warming)=85%; but if we limit the outside opinions to simple yes/no statements, then having 85% say ‘yes’ and 15% say ‘no’ seems far more informative than 100% saying ‘yes’ - as the latter would lead me to very wrongly assume that p(the_warming) is the same as p(2+2=4).
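A quick sketch of why the mixed report is more informative, assuming each advisor independently answers ‘yes’ with probability equal to the true p(the_warming):

```python
import random

# If each independent advisor says 'yes' with probability equal to the true
# p(the_warming) = 0.85, the observed yes-fraction estimates p directly;
# a unanimous 'yes' would instead (wrongly) suggest p is near 1.
random.seed(1)
p_true = 0.85
answers = [random.random() < p_true for _ in range(1_000)]
print(sum(answers) / len(answers))  # close to 0.85, not to 1.0
```

The spread of answers carries the calibration information that a unanimous verdict destroys.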
To put it in very simple terms—if you’re interested in training an AI according to technique X because you think X is the best way, then you design or adapt the AI structure so that technique X is applicable. Saying ‘some AIs may not respond to X’ is moot, unless you’re talking about trying to influence (hack?) an AI designed and controlled by someone else.
There’s the classic economics-textbook example of two hot-dog vendors on a beach who need to choose their locations—assuming an even distribution of customers, and that customers always choose the closest vendor, the equilibrium is both vendors standing right next to each other in the middle, while the “optimal” (from the customers’ view, minimizing distance) locations would be the 25% and 75% marks.
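This equilibrium can be found numerically with alternating best responses; a minimal sketch (the grid resolution and starting points below are arbitrary choices):

```python
def shares(a, b, n=1001):
    """Fraction of evenly spaced beach customers choosing vendor at a (ties split)."""
    total = 0.0
    for i in range(n):
        x = i / (n - 1)
        da, db = abs(x - a), abs(x - b)
        total += 1.0 if da < db else 0.5 if da == db else 0.0
    return total / n

def best_response(opponent, grid):
    """Grid location maximizing our customer share against a fixed opponent."""
    return max(grid, key=lambda a: shares(a, opponent))

grid = [i / 100 for i in range(101)]
a, b = 0.1, 0.9                      # arbitrary starting locations
for _ in range(50):                  # alternate best responses until stable
    a = best_response(b, grid)
    b = best_response(a, grid)
print(a, b)                          # both vendors leapfrog inward and meet at the 0.5 midpoint
```

With both at 0.5 each still serves half the beach, but the average customer walks 0.25 of the beach length, twice the 0.125 average of the customer-optimal 0.25/0.75 split.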
This matches the median voter principle—the optimal behavior for candidates is to be as close as possible to the median, but on the “right side” of it, to capture “their half” of the voters; even if most voters in a specific party would prefer their candidate to cater to, say, the median Republican/Democrat instead, it’s against the candidate’s interests to do so.