In the broader economy, it’s not the case that “If buying things reduced your income, people stop buying things, and eventually money stops flowing altogether”.
So the only way that makes sense to me is if you model content as a public good which no user is incentivised to contribute to maintaining.
Speculatively, this might be avoided if votes were public: because then voting would be a costly signal of one’s epistemic values or other things.
though I’m not sure how that is calculated from one’s karma
I believe it’s proportional to the log of your user karma. But I’m not sure.
One can get high karma from a small amount of content, if a small number of sufficiently high-karma users strong-upvote it.
There is still an incentive gradient towards “least publishable units”.
Suppose you have a piece of work worth 18 karma to high-karma user U. However, U’s strong upvote is only worth 8 karma.
If you just post one piece of work, you get 8 karma. If you split your work into three pieces, each of which U values at 6 karma, you’re better off. U might strong-upvote all of them (they’d rather allocate a little too much karma than way too little), and you get 24 karma.
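As a toy sketch of that incentive gradient, using the numbers from the example above (the "upvote anything worth more than half the vote" rule is my own assumption, not a claim about how anyone actually votes):

```python
# Toy model of the "least publishable units" incentive. Assumption (mine):
# U strong-upvotes any post they value at more than half the upvote's worth,
# preferring to allocate a little too much karma rather than way too little.
STRONG_UPVOTE = 8  # karma conferred by U's strong upvote

def karma_earned(post_values):
    """Karma you receive if U strong-upvotes every qualifying post."""
    return sum(STRONG_UPVOTE for v in post_values if v > STRONG_UPVOTE / 2)

print(karma_earned([18]))       # one big post -> 8
print(karma_earned([6, 6, 6]))  # same work split in three -> 24
```

Splitting triples your karma even though the total value to U is the same.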
To extend the metaphor in the original question: maybe if the world economy ran on the equivalent of strong upvotes, there would still be cars around, yet no one could buy airplanes.
Do you have details on when and why that was removed? Or past posts discussing that system?
I was going to say 2000 times sounded like way too much, but doing the guesstimate, that means on average using “common knowledge” once every other day since it was published, and “out to get you” once every third day, and that does seem consistent with my experience hanging out with you (though of course with a fat tail to the distribution: using some concepts like 10 times in a single long hangout).
Asset bubbles can be Nash equilibria for a while. This is a really important point. If surrounded by irrational agents, it might be rational to play along with the bubble instead of shorting and waiting. “The market can stay irrational longer than you can stay solvent.”
For most of 2017, you shouldn’t have shorted crypto, even if you knew it would eventually go down. The rising markets and the interest on your short would kill you. It might take big hedge funds with really deep liquidity to ride out the bubble, and even they might not be able to make it if they get in too early. In 2008 none of the investment banks could short things early enough because no one else was doing it.
The difference between genius (shorting at the peak) and really smart (shorting pre-peak) matters a lot in markets. (There’s this scene in _The Big Short_ where some guy covers the cost of his BBB shorts by buying a ton of AAA-rated stuff, assuming that at least those will keep rising.)
So shorting and buying are not symmetric (as you might treat them in a mathematical model, only differing by the sign on the quantity of assets bought). Shorting is much harder and much more dangerous.
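A minimal sketch of that asymmetry, with made-up numbers: a short that is eventually right can still be wiped out on the way up.

```python
def short_outcome(entry, capital, prices):
    """Return ('margin_call', price) if equity hits zero along the path,
    else ('survived', final_pnl). Ignores interest and fees, which in
    practice make shorting even worse."""
    for p in prices:
        if capital + (entry - p) <= 0:  # short P&L is entry - price
            return ("margin_call", p)
    return ("survived", entry - prices[-1])

# Hypothetical path: price triples before crashing below the entry.
print(short_outcome(100, 150, [100, 150, 220, 300, 50]))   # -> ('margin_call', 300)
# With deep enough pockets, the same short survives and profits:
print(short_outcome(100, 1000, [100, 150, 220, 300, 50]))  # -> ('survived', 50)
```

The long side has no analogue of this: buying at 100 can lose at most 100, so no price path forces you out before the end.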
In fact, my current model is that this is the very reason financial markets can exhibit bubbles of “irrationality” despite all their beautiful properties of self-correction and efficiency.
For transparency: I basically downloaded this model from davidmanheim.
In case others haven’t seen it, here’s a great little matrix summarising the classification of goods on “rivalry” and “excludability” axes.
Hanson’s speed-weighted voting reminds me a bit of quadratic voting.
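For reference, the core of quadratic voting in one function (a generic sketch of the standard mechanism, not of Hanson’s proposal):

```python
def qv_cost(votes: int) -> int:
    """Quadratic voting: casting v votes on one issue costs v**2 credits."""
    return votes ** 2

# The marginal cost of the v-th vote is 2v - 1, rising linearly, so strong
# preferences can be expressed, but at a rapidly growing price.
print(qv_cost(1), qv_cost(2), qv_cost(3))  # 1 4 9
print(qv_cost(3) - qv_cost(2))             # the 3rd vote costs 5 extra credits
```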
I presume that, unlike X-risk, s-risks don’t remove the vast majority of observer moments.
I disagree with the view that it’s bad to spend the first few months giving prizes to top researchers who would have done the work anyway. This _in and of itself_ is clearly burning cash, but the point is to change incentives over a longer time-frame.
If you think research output is heavy-tailed, what you should expect to observe is something like this happening for a while, until promising tail-end researchers realise there’s a stable stream of value to be had here, and put in the effort required to level up and contribute themselves. It’s not implausible to me that this would take >1 year of prizes.
Expecting lots of important counterfactual work, that beats the current best work, to come out of the woodwork within ~6 months seems to assume that A) making progress on alignment is quite tractable, and B) the ability to do so is fairly widely distributed across people; both to a seemingly unjustified extent.
I personally think prizes should be announced together with precommitments to keep delivering them for a non-trivial amount of time. I believe this because I think changing incentives involves changing expectations, in a way that changes medium-term planning. I expect people to have qualitatively different thoughts if their S1 reliably believes that fleshing out the-kinds-of-thoughts-that-take-6-months-to-flesh-out will be rewarded after those 6 months.
That’s expensive, in terms of both money and trust.
elityre has done work on this for BERI, suggesting >30 questions.
Regarding the question metatype, Allan Dafoe has offered a set of desiderata in the appendix to his AI governance research agenda.
If true, sounds like a bug and not a feature of lw.
So: habryka did say “anyone” in the original description, and so he will pay both respondents who completed the bounty according to the original specifications (which thereby excludes gjm). I will only pay Radamantis, as I interpreted him as “claiming” the task with his original comment.
I suggest you PM him with payment details.
I’ll PM habryka about what to do with the bounty given that there were two respondents.
Overall I’m excited this data and analysis were generated, and will sit down to take a look and update this weekend. :)
What’s your “reasonable sounding metric” of success?
I add $30 to the bounty.
There are 110 items in the list. So 25% is ~28.
I hereby set the random seed as whatever will be the last digit and first two decimals (3 digits total) of the S&P 500 Index price on January 7, 10am GMT-5, as found in the interactive chart by Googling “s&p500”.
For example, the value of the seed on 10am January 4 was “797”.
[I would have used the NIST public randomness beacon (v2.0) but it appears to be down due to government shutdown :( ].
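In code, the seed rule might look like this (my own sketch; the price 2447.97 below is made up purely to illustrate the digit-extraction format, and is not the actual January 4 quote):

```python
def seed_from_price(price: float) -> str:
    """Last digit and first two decimals (3 digits total) of an index price."""
    s = f"{price:.2f}"       # e.g. "2447.97"
    return s[-4] + s[-2:]    # last integer digit + the two decimal digits

print(seed_from_price(2447.97))  # -> "797"
```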
Instructions for choosing the movements
Let the above-generated seed be n.
import random
random.seed(n)
indices = sorted(random.sample(range(1, 111), 28))
I’m confused: why doesn’t variability cause any trouble in the standard models? It seems that if producers are risk-averse, it results in less production than otherwise.
I’m confused. Do you mean “worlds” as in “future trajectories of the world” or as in “subcommunities of AI researchers”? And what’s a concrete example of gains from trade between worlds?
The link to “many rigorous well-controlled studies” is broken.
Suppose your goal is not to maximise an objective, but just to cross some threshold. This is plausibly the situation with existential risk (e.g. “maximise probability of okay outcome”). Then, if you’re above the threshold, you want to minimise variance, whereas if you’re below it, you want to maximise variance. (See this for a simple example of this strategy applied to a game.) If Richard believes we are currently above the x-risk threshold and Ben believes we are below it, this might be a simple crux.
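Under a normal approximation this is easy to check directly (a sketch with made-up parameters, not a claim about actual x-risk numbers):

```python
from math import erf, sqrt

def p_above(mu, sigma, threshold):
    """P(X >= threshold) for X ~ Normal(mu, sigma), via the error function."""
    return 0.5 * (1 - erf((threshold - mu) / (sigma * sqrt(2))))

# Expected outcome above the threshold: lowering variance helps.
print(p_above(1, 0.5, 0) > p_above(1, 2, 0))    # True
# Expected outcome below the threshold: raising variance helps.
print(p_above(-1, 2, 0) > p_above(-1, 0.5, 0))  # True
```

So which side of the threshold you believe we’re on flips the sign of the variance you should want, which is exactly the crux structure suggested above.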
However, I think the distribution of success is often very different from the distribution of impact, because of replacement effects. If Facebook hadn’t become the leading social network, then MySpace would have. If not Google, then Yahoo. If not Newton, then Leibniz (and if Newton, then Leibniz anyway).
I think this is less true for startups than for scientific discoveries, because of bad Nash equilibria stemming from founder effects. The objective Google is maximising might not be concave: it might have many peaks, and which peak you reach might be determined quite arbitrarily. Yet the peaks can have very different consequences when you have a billion users.
For lack of a concrete example… suppose a webapp W uses feature x, and this influences which audience uses the app. Then, once W has scaled and depends on that audience for substantial profit, it can’t easily change x. (It might be that changing x to y wouldn’t decrease profit, but just not increase it.) Yet, had they initially used y instead of x, they could have grown just as big with a different audience. Moreover, because of network effects and returns to scale, it might not be possible for a rival company to build their own webapp which is basically the same thing but with y instead.