The discovery that the universe has no purpose need not prevent a human being from having one.
-Irwin Edman
(my default stance is that all teachers should have concealed carry permits and mandatory shooting range time requirements)
Let’s assume that your suggested policy would bring school shootings from about the rate they’re at now to 0. I can’t imagine the benefit would be much better than that, and it would probably be a lot worse. Wikipedia says that there have been 38 school shooting deaths this year (not including the suicides, and including the recent attack, making it much higher than other recent years). According to this, there are about 3 million public school teachers in the US and they make about $50,000 per year each, so their value of time is probably somewhere around $30/hour, so it would cost about $100 million per year to require all of them to spend an hour per year on the shooting range. If that saves about 40 lives per year, that works out to $25 million per life (Edit: oops, no it doesn’t; it’s about $2.5 million per life). None of the estimates on Wikipedia suggest that lives should be valued at more than $10 million per life. And I haven’t even mentioned the costs of equipping the teachers with guns, so the actual cost of the policy is probably much higher. So mandatory firing range time for all teachers is a bad policy under the most ridiculously pro-gun assumptions I could come up with.
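For concreteness (and this is where the Edit’s correction comes from), here’s the arithmetic as a short sketch; all the inputs are the rough estimates above, not precise figures:

```python
teachers = 3_000_000      # rough count of US public school teachers
wage = 30                 # rough value of time in $/hour, from ~$50k/year salaries
hours_per_year = 1        # mandated range time per teacher per year
deaths_averted = 40       # optimistic: every school shooting death prevented

annual_cost = teachers * wage * hours_per_year    # $90 million (~$100M as rounded above)
cost_per_life = annual_cost / deaths_averted      # $2.25 million per life saved

print(f"annual cost: ${annual_cost:,}")
print(f"cost per life saved: ${cost_per_life:,.0f}")
```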
This sort of thing seems to suggest that EY’s claim in this post about the scale of the relative intelligence differences between chimps, a village idiot, and Einstein is incorrect. The difference in intelligence between village idiot and Einstein may be comparable to the difference in intelligence between some nonhuman animals and a human village idiot. Which is a priori surprising, given that human brains are very structurally similar to each other in comparison to nonhuman animal brains.
is the L2 norm preferred b/c it’s the only norm that’s invariant under orthonormal change of basis, or is the whole idea of orthonormality somehow baking in the fact that we’re going to square and sqrt everything in sight (and if so how)
The L2 norm is the only Lp norm that can be preserved by any non-trivial change of basis (the trivial ones: permuting basis elements and multiplying some of them by −1). This follows from the fact that, for p ≠ 2, the basis elements and their negatives can be identified just from the Lp norm and the addition and scalar multiplication operations of the vector space. To intuitively gesture at why this is so, let’s look at L1 and L∞.
In L1, the norm of the sum of two vectors is the sum of their norms iff, in each coordinate, both vectors have components of the same sign; otherwise, they cancel in some coordinate, and the norm of the sum is smaller than the sum of the norms. 0 counts as the same sign as everything, so the more zeros a vector has among its coordinates, the more other vectors there are whose sum with it attains the maximum possible norm. The basis vectors and their negations are thus distinguished as those unit vectors u for which the set {v : |u+v| = |u|+|v|} is maximal. Since the alternative to |u+v| = |u|+|v| is |u+v| < |u|+|v|, the basis vectors can be thought of as having maximal tendency for their sums with other vectors to have large norm.
In L∞, on the other hand, as long as you keep the largest coordinate fixed, changing the other coordinates costs nothing in terms of the norm of the vector, but making those other coordinates larger still creates more opportunities to change the norm of other vectors when you add them together. So if you’re looking for a unit vector u that minimizes the set {v : |u+v| > |v|}, u is a basis vector or the negation of one. The basis vectors have minimal tendency for their sums with other vectors to have large norm.
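Here’s a quick Monte Carlo sketch of both characterizations; the dimension, sample count, and the “diagonal” comparison vectors are arbitrary illustrative choices, not anything canonical:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, trials = 3, 100_000
vs = rng.standard_normal((trials, dim))

def l1(x):
    return np.abs(x).sum(axis=-1)

def linf(x):
    return np.abs(x).max(axis=-1)

e1 = np.array([1.0, 0.0, 0.0])       # basis vector (unit in both norms)
diag_l1 = np.full(dim, 1.0 / dim)    # L1-unit vector with no zero coordinates
diag_linf = np.full(dim, 1.0)        # L-infinity-unit vector, every coordinate maximal

# L1 claim: basis vectors maximize how often |u+v| = |u| + |v|.
for name, u in [("e1", e1), ("diagonal", diag_l1)]:
    frac = np.mean(np.isclose(l1(u + vs), l1(u) + l1(vs)))
    print(f"L1,   u={name}: |u+v| = |u|+|v| for {frac:.1%} of sampled v")

# L-infinity claim: basis vectors minimize how often adding u increases the norm.
for name, u in [("e1", e1), ("diagonal", diag_linf)]:
    frac = np.mean(linf(u + vs) > linf(vs))
    print(f"Linf, u={name}: |u+v| > |v| for {frac:.1%} of sampled v")
```

(The basis vector wins on both counts: equality holds for about half of sampled v in L1 versus an eighth for the diagonal vector, and the L∞ norm increases for fewer v.)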
As p increases, the tendency for basis vectors to have large sums with other vectors decreases (as compared to the tendency for arbitrary vectors to have large sums with other vectors). There must be a cross-over point where whether or not a vector is a basis vector ceases to be predictive of the norm of its sum with an arbitrary other vector, and it is only at that point, p=2, that we lose the ability to figure out which vectors are the basis vectors.
So if you’re trying to guess what sort of norm some vector space naturally carries (let’s say you’re given, as a hint, that it’s an Lp norm for some p), L2 should start out as a pretty salient option, along with, and arguably ahead of, L1 and L∞. As soon as you hear anything about there being multiple different bases that seem to have equal footing (as is saliently the case in QM), that settles it: L2 is the only option.
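To check the headline invariance claim numerically, a minimal sketch (assuming numpy; QR decomposition of a Gaussian matrix is just one convenient way to get a random orthogonal matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4
# A random orthogonal matrix, i.e. an orthonormal change of basis.
Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
v = rng.standard_normal(dim)

for p in [1, 2, np.inf]:
    print(f"p={p}: |v| = {np.linalg.norm(v, ord=p):.4f}, "
          f"|Qv| = {np.linalg.norm(Q @ v, ord=p):.4f}")
# Only p=2 agrees before and after: a generic rotation preserves the
# L2 norm and no other Lp norm.
```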
I find myself thinking “I remember believing X. Why did I believe X? Oh right, because Y and Z. Yes, I was definitely right” with alarming frequency.
I was discouraged from writing a blog post estimating when AI would be developed, on the basis that a real conversation about this topic among rationalists would cause AI to come sooner, which would be more dangerous
Does anyone actually believe and/or want to defend this? I have a strong intuition that public-facing discussion of AI timelines within the rationalist and AI alignment communities is highly unlikely to have a non-negligible effect on AI timelines, especially in comparison to the potential benefit it could have for the AI alignment community being better able to reason about something very relevant to the problem they are trying to solve. (Ditto for probably most but not all topics regarding AGI that people interested in AI alignment may be tempted to discuss publicly.)
I got into UC Berkeley with a high school GPA of 2.9 by talking about math with professors. This strategy failed everywhere else, and would have failed at Berkeley if I hadn’t been lucky enough to find a professor stubborn enough to argue with the admissions office again after they ignored him the first time. On the other hand, my accomplishments are not even close to as impressive as Andraka’s, so he might have an easier time with this strategy even with a worse GPA.
Anyway, if you’ve done anything impressive, finding a champion within the system is easy. Andraka had a hard time with that step because he was trying to get support before doing something cool rather than after. Now, the vast majority of biology professors would gladly stand up for him to their institution’s admissions department. But this strategy requires persistence on the part of the champion, as well as the applicant.
This sounds like just a special case of the principle that Friendly AI should believe what is true and want what we want, rather than believe what we believe and want what we profess to want.
Discovery is the privilege of the child, the child who has no fear of being once again wrong, of looking like an idiot, of not being serious, of not doing things like everyone else.
Alexander Grothendieck
Nick sacrifices credibility for future claimed precommitments, of course.
He sacrifices credibility in future threats against people, but maintains credibility in future promises to act in others’ benefit just as much as if he had decided to steal and then give Abraham half the money. This latter credibility is probably much more useful in most real situations.
Great things tend not to happen on a 6 month time scale. It does not make sense to conclude that ML has slowed down in the last 6 months just because that was the last time machine learning passed a milestone that people who don’t specialize in ML were paying attention to.
From their website, it looks like they’ll be doing a lot of deep learning research and making the results freely available, which doesn’t sound like it would accelerate Friendly AI relative to AI as a whole. I hope they’ve thought this through.
Edit: It continues to look like their strategy might be counterproductive. [Edited again in response to this.]
That’s not surprising.
I’m fairly skeptical whenever someone says “Even though there is no evidence that anyone has ever been able to reliably do X, I might be able to do it because I understand Y”, including for X = “beating the stock market without inside information” and Y = “heuristics and biases literature”.
Donated $120
I disagree. The LW community already has capable high-status people who many others in the community look up to and listen to suggestions from. It’s not clear to me what the benefit is from picking a single leader. I’m not sure what kinds of coordination problems you had in mind, but I’d expect that most such problems that could be solved by a leader issuing a decree could also be solved by high-status figures coordinating with each other on how to encourage others to coordinate. High-status people and organizations in the LW community communicate with each other a fair amount, so they should be able to do that.
And there are significant costs to picking a leader. It creates a single point of failure, making the leader’s mistakes more costly, and inhibiting innovation in leadership style. It also creates PR problems; in fact, LW already has faced PR problems regarding being an Eliezer Yudkowsky personality cult.
Also, if we were to pick a leader, Peter Thiel strikes me as an exceptionally terrible choice.
If global warming gets worse, but people get enough richer, then they could end up better off. If an unfriendly intelligence explosion occurs, then it kills everyone no matter how well the economy is doing. His argument only applies to risks of marginal harm to the average quality of life, not to risks of humanity getting wiped out entirely.
Would the following be a valid falsehood? “The following program is a really cool video game:
"
I think we have a good contender for the optimal false information here.
I support censorship, but only if it is based on the unaccountable personal opinion of a human. Anything else is too prone to lost purposes. If a serious rationalist (e.g. EY) seriously thinks about it and decides that some post has negative utility, I support its deletion. If some unintelligent rule like “no hypothetical violence” decides that a post is no good, why should I agree? Simple rules do not capture all the subtlety of our values; they cannot be treated as Friendly.
It makes sense to have mod discretion, but it also makes sense to have a list of rules that the mods can point to so that people whose posts get censored are less likely to feel that they are being personally targeted.
I was there, and I remember closer to a flat distribution between 0 and 5 IQ points. At any rate, I think 4 was a bit on the high side. Also, most people noted that they had a poor idea of how much difference an IQ point makes, and that this made them very uncertain about their answer. Someone suggested that if IQ was measured with a mean of 1000 and standard deviation of 150, people might still be giving answers of about 1 to 5 IQ points (as in, answers that would translate to 0.1 to 0.5 IQ points the way we actually measure them).
I seem to recall that at some point, Quirrell told Harry that his ultimate plan involved Harry leading Britain. Now Harry tells Draco that his ultimate plan involves Draco leading Britain. I can’t wait to see Draco reveal his plan that involves Quirrell leading Britain!