That’s a misleading way to put it. If they’re right 75% of the time, they’ll think their probability of being right is over 90%. Trying to convince people that it’s 75% in practice will make them uncomfortable, so it might be better to choose examples where the outcomes aren’t important.
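The gap between stated and observed accuracy can be illustrated with a toy simulation. All numbers here are hypothetical, taken from the 75%/90% figures above:

```python
import random

# Toy illustration: a forecaster who states 90% confidence
# but is actually right only 75% of the time.
random.seed(0)
stated_confidence = 0.90
true_accuracy = 0.75

outcomes = [random.random() < true_accuracy for _ in range(10_000)]
observed = sum(outcomes) / len(outcomes)

print(f"stated: {stated_confidence:.0%}, observed: {observed:.1%}")
# The difference is the calibration error (overconfidence gap).
print(f"overconfidence gap: {stated_confidence - observed:.1%}")
```

Over many such predictions the gap becomes obvious, which is why calibration is easier to demonstrate on questions where the forecaster has little at stake.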
I doubt that developing country stocks will outperform others.
One important reason that high growth doesn’t help investors much is that companies in high growth countries tend to issue more new shares in order to finance the growth.
I prefer to invest in countries which are rated as having low corruption.
A basic principle in all investing is higher risk equates with higher return.
This rule is usually false. Low beta stocks tend to perform as well as or better than high beta stocks.
(See the book Finding Alpha by Eric Falkenstein for evidence).
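For readers unfamiliar with the term: a stock's beta is the slope of its returns regressed against the market's, i.e. cov(stock, market) / var(market). A minimal sketch using synthetic data (the 1.5 beta and the return parameters are invented for illustration):

```python
import random

random.seed(1)

def beta(stock_returns, market_returns):
    """Estimate beta as cov(stock, market) / var(market)."""
    n = len(market_returns)
    mean_m = sum(market_returns) / n
    mean_s = sum(stock_returns) / n
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(stock_returns, market_returns)) / n
    var = sum((m - mean_m) ** 2 for m in market_returns) / n
    return cov / var

# Hypothetical monthly market returns, and a high-beta stock that
# moves 1.5x the market plus idiosyncratic noise.
market = [random.gauss(0.005, 0.04) for _ in range(600)]
stock = [1.5 * m + random.gauss(0, 0.02) for m in market]

print(f"estimated beta: {beta(stock, market):.2f}")
```

The empirical claim is that sorting stocks by this number and holding the low-beta bucket has historically not cost you return, contrary to the textbook rule.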
cities and industry proved much more resilient to bombing than anyone had a right to suspect.
What information was unavailable about the damage that would be caused by a given amount of bombing?
My guess is that people overreacted to the complacency that led to WW1, and thought it safer to overstate the harm done by war in order to motivate efforts to avoid it.
This page has links to 3 of Drexler’s designs with pdb files. Can you simulate those?
Building those would require tools that are quite different from what we have now.
Magical thinking? I intended to mainly express uncertainty about it.
I don’t expect appeals to authority to accomplish much here. Maybe it was a mistake for me to mention it at all, but I’m concerned that people here might treat Eliezer as more of an authority on MNT than he deserves. I only claimed to have more authority about MNT than Eliezer. That doesn’t imply much—I’m trying to encourage more doubt about how an AI could take over the world.
Has Drexler said anything which implies that step 4 would succeed without lots of trial and error?
Natural selection used trial and error. An AI would do that faster and with fewer errors.
There are some problems for which knowledge of the problem plus knowledge of computation is sufficient to estimate a minimum amount of computation needed. Are you claiming to know that MNT isn’t like that? Or that an AI could create computers powerful enough to make that irrelevant?
Appeals to authority about AI seem unimpressive, since nobody has demonstrated expertise at creating superhuman AI.
Anything that makes the Schrödinger equation tractable would make me much less confident of my analysis.
Drexler gets the physics right. It’s harder to evaluate the engineering effort needed. Eliezer’s claims about how easy it would be for an FAI to build MNT go well beyond what Drexler has claimed.
I’m fairly sure I know more about MNT than Eliezer (I tried to make a career of it around 1997-2003), and I’m convinced it would take an FAI longer than Eliezer expects unless the FAI has very powerful quantum computers.
This somewhat controversial paper estimates a net 849,252 fewer deaths in 2050 due to warming, across the 6 disease types that they studied.
I envision markets generating rules, but not making all decisions. I don’t see any indication that futarchy would take much discretion away from the people who currently make military decisions.
I question whether it’s wise to make new rules during a financial crisis. I predict that a mature futarchy will set rules in advance of a crisis that will better deal with it than rules made during the crisis would. (In an immature futarchy, markets will influence deciders somewhat like how polls influence them now).
There is speculation that brain size decreased due to loss of olfactory and maybe other sensory parts of the brain after dogs took over those functions. See here.
China is experiencing very fast knowledge-driven growth as it catches up to already-produced knowledge that it can cheaply import.
To the extent that AIs other than the most advanced project can generate self-improvements at all, they generate modifications of idiosyncratic code that can’t be cheaply shared with any other AIs.
I say it’s at least as expensive for China to import knowledge. A fair amount is trade secrets that are more carefully guarded than AI content. China copies on the order of $1 trillion in value. What’s the value of uncopied AI content?
We don’t invest in larger human brains because that’s impossible with current technology
No, we have technology for that (selective breeding, maybe genetic engineering). The return on investment is terrible. In an em dominated world, the technology for building larger minds (and better designed minds) may still be poor compared with the technology for copying. How much will that change with AGI? I expect people to disagree due to differing intuitions about how AGI will work.
“Designed to grow fast” is hard to observe. The supply of companies that appear to fit that description increases to satisfy VC demand. The money in VC funds exceeds what the few VCs who can recognize good startups are able to usefully invest.
it looks like there is a big premium on risk
Eric Falkenstein presents some strong evidence against this in his book Finding Alpha. Low risk equities outperform high risk equities. The difference between equity and bond returns probably reflects something other than risk.
He also claims that private equity doesn’t outperform publicly traded equity (suggesting that startups aren’t a good investment, although “startup” doesn’t seem to be a well defined category).
That still leaves an interesting question about whether it’s wise to increase risk via leverage.
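One way to frame that question: leverage L multiplies each period’s return, but compound growth loses roughly L²σ²/2 to volatility drag, so past some point more leverage lowers long-run growth even when the underlying asset has a positive expected return. A rough sketch with hypothetical return parameters:

```python
import math
import random

random.seed(2)
# Hypothetical monthly return parameters, chosen for illustration.
mu, sigma, periods = 0.006, 0.05, 50_000

returns = [random.gauss(mu, sigma) for _ in range(periods)]

def growth_rate(leverage):
    # Average log-return of the leveraged portfolio; the max() guard
    # caps losses at -100% (total ruin) so log() stays defined.
    return sum(math.log(max(1 + leverage * r, 1e-12))
               for r in returns) / periods

for L in (1, 2, 3, 4):
    print(f"leverage {L}: compound growth {growth_rate(L):.4%} per period")
```

With these particular numbers the optimal leverage is around mu/sigma² ≈ 2.4, so moderate leverage helps and heavy leverage hurts; with fatter-tailed returns the optimum drops further.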
There’s a lot of variation in how aware people are of their emotions.
You might want to look into Alexithymia.
Financial markets typically exhibit leptokurtosis (fat tails), meaning that rare large declines affect the expected value more than a lognormal distribution predicts. A few years of data are often inadequate to measure that.
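A sketch of how that shows up in a measurement, using synthetic returns (the crash size and frequency are made-up illustrative numbers): excess kurtosis is near 0 for a normal series, and much higher once rare large declines are mixed in.

```python
import random

random.seed(3)

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3 (normal distribution scores ~0)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / var**2 - 3.0

normal_like = [random.gauss(0, 0.01) for _ in range(20_000)]
# Same series plus a hypothetical 1-in-500-period crash of -15%:
with_crashes = [r if random.random() > 0.002 else r - 0.15
                for r in normal_like]

print(f"normal-like:  {excess_kurtosis(normal_like):+.2f}")
print(f"with crashes: {excess_kurtosis(with_crashes):+.2f}")
```

Note that the crash series contains only a few dozen extreme observations in 20,000 periods, which is why a short sample can easily miss the fat tail entirely.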