Market estimates will converge to the most profitable P(X if A), the one that wins bets vs other versions. And that is the version you want to use when you make decisions.
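To make the convergence claim concrete, here is a minimal Python sketch. It assumes a log scoring rule as a stand-in for market profits, calls off bets on rounds where A is not taken, and uses made-up values for P(A), the true P(X if A), and the rival estimates:

```python
import math
import random

random.seed(0)

P_A = 0.3          # chance that action A is actually taken (made-up)
P_X_GIVEN_A = 0.7  # true conditional chance of X given A (made-up)

candidates = [0.5, 0.6, 0.7, 0.8]  # rival market estimates of P(X if A)

rounds, settled = 200_000, 0
payoff = {p: 0.0 for p in candidates}

for _ in range(rounds):
    if random.random() > P_A:
        continue  # A not taken: conditional bets are called off, no money moves
    settled += 1
    x = random.random() < P_X_GIVEN_A
    for p in candidates:
        # Log score is a proper scoring rule: expected payoff is maximized
        # by quoting the true conditional probability.
        payoff[p] += math.log(p if x else 1.0 - p)

for p in candidates:
    print(p, payoff[p] / settled)  # the 0.7 estimate wins on average
```

The estimate matching the true conditional earns the highest average payoff, which is the sense in which the market converges to it.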
“If an investor doesn’t review a proposal, we assume that they are submitting an unconditional sell bid.” Of ALL of their shares, at any price? That seems like a way to force a sale at a low price.
Also, call markets don’t aggregate info as well as continuous double auctions, and you aren’t offering any incentives to find and add info.
Metaculus wouldn’t work if it didn’t offer incentives for participants. The fact that they aren’t monetary doesn’t mean they won’t induce the same sort of problems you worry about above.
Surely we should compare, for particular topics, the magnitude of actual sabotage to the magnitude of the info value gained. And there are many ways to design markets to reduce the rate of sabotage.
Seems to me I spent a big % of my post arguing against the rapid growth claim.
Come on, most every business tracks revenue in great detail. If customers were getting unhappy with the firm’s services and rapidly switching en masse, the firm would quickly become very aware of it and would look into the problem in great detail.
You complain that my estimating rates from historical trends is arbitrary, but you offer no other basis for estimating such rates. You only appeal to uncertainty. But there are several other assumptions required for this doomsday scenario. If all you have is logical possibility to argue for piling on several a priori unlikely assumptions, it gets hard to take that seriously.
You keep invoking the scenario of a single dominant AI that is extremely intelligent. But that only happens AFTER a single AI fooms to become much better than all other AIs. You can’t invoke its superintelligence to explain why its owners fail to notice and control its early growth.
I comment on this paper here: https://www.overcomingbias.com/2022/07/cooks-critique-of-our-earliness-argument.html
That’s an exponential with mean 0.7, or mean 1/0.7?
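The ambiguity being asked about is between the rate and mean parameterizations of the exponential: rate λ gives mean 1/λ. A quick Python check, treating 0.7 as the rate:

```python
import random
from statistics import mean

# random.expovariate takes the RATE lambda, so the sample mean is 1/lambda.
# "Exponential with 0.7" thus means mean 0.7 only if 0.7 is the mean,
# and mean 1/0.7 ≈ 1.43 if 0.7 is the rate.
samples = [random.expovariate(0.7) for _ in range(100_000)]
print(mean(samples))  # ≈ 1.43, i.e. 1/0.7
```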
“My prior on […] is distributed […]”
I don’t understand this notation. It reads to me like “10^3 + 5 Gy”; how is that a distribution?
It seems the key feature of this remaining story is the “coalition of AIs” part. I can believe that AIs would get powerful; what I’m skeptical about is the claim that they would naturally form a coalition against us. Which is also what I object to in your prior comments. Horses are terrible at coordination compared to humans, and humans weren’t built by horses and integrated into a horse society, with each human originally in the service of a particular horse.
It’s not enough that AI might appear in a few decades; you also need something useful you can do about it now, compared to investing your money to have more to spend later when concrete problems appear.
I just read through your “what 2026 looks like” post, but didn’t see how it is a problematic scenario. Why should we want to work ahead of time to prepare for that scenario?
In our simulations, we find it overwhelmingly likely that any such spherical volume of an alien civ would be much larger than the full moon in the sky. So no need to study distant galaxies in fine detail; look for huge spheres in the sky.
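The geometry behind that claim: a sphere of radius R whose center is at distance d spans an angle of 2·arcsin(R/d), while the full moon spans roughly half a degree. A minimal sketch with illustrative numbers (the 500 Mly radius and 3,000 Mly distance below are assumptions for the example, not outputs of the simulations):

```python
import math

def angular_diameter_deg(radius: float, distance: float) -> float:
    """Apparent angular diameter, in degrees, of a sphere of the given
    radius whose center lies at the given distance (same units)."""
    return 2.0 * math.degrees(math.asin(min(1.0, radius / distance)))

FULL_MOON_DEG = 0.5  # the full moon is about half a degree across

theta = angular_diameter_deg(500.0, 3000.0)  # in Mly; illustrative numbers
print(theta)                  # ≈ 19 degrees
print(theta > FULL_MOON_DEG)  # True: far larger than the full moon
```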
“or more likely we are an early civilization in the universe (according to Robin Hanson’s “Grabby Aliens” model) so, 2) quite possibly there are no grabby aliens populating the universe with S-Risks yet”
But our model implies that there are in fact many aliens out there right now. Just not in our backward light cone.
Aw, I still don’t know which face goes with the TGGP name.
Wow, it seems that EVERYONE here has this counterargument: “You say humans look weird according to this calculation, but here are other ways we are weird that you don’t explain.” But there is NO WAY to explain all the ways we are weird, because we are in fact weird in some ways. For each way that we are weird, we should be looking for some other way to see the situation that makes us look less weird. But there is no guarantee of finding that; we may just actually be weird. https://www.overcomingbias.com/2021/07/why-are-we-weird.html
You have the date of the great filter paper wrong; it was 1998, not 1996.
You say that markets give evidential conditionals while decisions want causal conditionals. For this comment, I’m not taking a position on which conditional we want for decisions. I’m just saying that the trades and the decisions they advise should use the same conditional, whichever one that is.
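To make the evidential-vs-causal distinction concrete, here is a minimal confounding example in Python; the hidden factor U and all the probabilities are made up for illustration:

```python
import random

random.seed(0)

N = 200_000
x_given_a, x_under_do_a = [], []

for _ in range(N):
    u = random.random() < 0.5                  # hidden confounder
    a = random.random() < (0.9 if u else 0.1)  # U makes action A likely
    x = random.random() < (0.8 if u else 0.2)  # U drives X; A has no effect
    if a:
        x_given_a.append(x)                    # evidential: observe A, then X
    # Causal: force A regardless of U; X is unchanged since A has no effect.
    x_under_do_a.append(random.random() < (0.8 if u else 0.2))

print(sum(x_given_a) / len(x_given_a))        # ≈ 0.74: evidential P(X | A)
print(sum(x_under_do_a) / len(x_under_do_a))  # ≈ 0.50: causal P(X | do(A))
```

On the framing being referenced, a market that settles on observed (A, X) pairs estimates the first number, while a decision to force A corresponds to the second.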