I’ve done ~16 years of academic research work, mostly quantitative or theoretical biology, currently mostly the latter. My other concerns include 1) affective wellbeing, 2) AI consciousness & welfare, 3) possible economic & social fallout from AI.
The real political spectrum
The ‘anti-woke’ are positioned to win, but can they capitalize?
I think superhuman AI is inherently very easy. I can’t comment on the reliability of those accounts. But the technical claims seem plausible.
Detroit Lions: overconfidence is overrated?
Bednets: 4 longer malaria studies
I don’t completely disagree, but there is also some danger of this being systematically misleading.
I think your last 4 bullet points are really quite good & they probably apply to a number of organizations, not just the World Bank. I’m inclined to view this as an illustration of organizational failure more than an evaluation of the World Bank specifically. (Assuming, of course, that the book is accurate.)
I will say though that my opinion of development economics is quite low…
A few key points…
1) Based on analogy with the human brain (which is quite puny in terms of energy & matter) & also based on examination of current trends, merely superhuman intelligence should not be especially costly.
(It is of course possible that the powerful would channel all AI into some tasks of very high perceived value like human brain emulation, radical life extension or space colonization, leaving very little AI for everything else...)
2) Demand & supply curves are already crude. Combining AI labor & human labor into the same demand & supply curves seems like a mistake.
3) Realistically I suspect that human labor supply will shift to the left because of ‘UBI’.
4) Ignoring preference for humans, demand for human labor may also shift to the left, as AI entrepreneurs would tend to optimize things around AI.
5) The economy will probably grow quite a bit. And preference for humans is likely substantial for certain types of jobs, e.g. NFL player, runway model, etc.
6) Combining 4 & 5 suggests a very steep demand curve for human labor.
7) Combining 3 & 6 suggests that a few people (e.g. 20% of adults) will have decent-paying jobs & the rest will live off of savings or ‘UBI’. (A toy numeric sketch follows this list.)
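Below is a toy numeric sketch of points 3–7. Every functional form & number in it is an illustrative assumption of mine, not an estimate; it only shows how a left-shifted human labor supply curve plus a very steep, preference-driven demand curve can yield decent wages for a minority of adults.

```python
# Toy sketch of points 3-7: steep demand for human labor plus a
# UBI-shifted supply curve. All curves & numbers are made up.
import numpy as np

adults = 100.0  # working-age adults, millions (illustrative)

def demand(wage):
    # Point 6: very steep (inelastic) demand. Only jobs with a strong
    # preference for humans (NFL player, runway model, ...) remain.
    return 20.0 * (wage / 50_000) ** -0.1   # millions of workers demanded

def supply(wage, ubi=True):
    # Point 3: 'UBI' shifts supply left (fewer willing workers at any wage).
    base = 15.0 if ubi else 60.0
    return base * (wage / 50_000) ** 0.3    # millions willing to work

# Solve for the equilibrium wage by simple grid search.
wages = np.linspace(10_000, 500_000, 10_000)
w_star = wages[np.argmin(np.abs(demand(wages) - supply(wages)))]
employed = demand(w_star)

print(f"equilibrium wage ≈ ${w_star:,.0f}; "
      f"employment ≈ {employed:.0f}M ({employed / adults:.0%} of adults)")
```

With these made-up curves the equilibrium lands around a $100,000 wage with roughly 20% of adults employed, i.e. point 7.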
I agree that I initially misread your post. I will edit my other comment.
“Humans are the horses of the future! Just accept it & go on with your lives.”—Ghora Sutra
The purely technical reason why principle A does not apply in this way is opportunity cost.
Let’s say S is a highly productive worker who could generate $500,000 for the company over 1 year. Moreover S is willing to work for only $50,000! But if investing that $50,000 in AI instead would generate $5,000,000, then the true cost of hiring S is the $50,000 salary plus the $4,950,000 in forgone AI profit, i.e. $5,000,000, & hiring S leaves the company $4,500,000 worse off than the AI option.
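To make the arithmetic explicit, here is the same comparison as a few lines of Python (all numbers are from the example above):

```python
# Opportunity-cost arithmetic from the example above.
salary = 50_000              # what S asks to be paid
revenue_s = 500_000          # what S would generate in a year
revenue_ai = 5_000_000       # what the same $50,000 spent on AI would generate

# The $50,000 is spent either way, so compare net profits.
profit_s = revenue_s - salary        # $450,000
profit_ai = revenue_ai - salary      # $4,950,000

# True economic cost of hiring S = salary + forgone best-alternative profit.
true_cost = salary + profit_ai       # $5,000,000
print(f"hiring S is ${profit_ai - profit_s:,} worse than the AI option")
```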
Addendum
I mostly retract this comment. It doesn’t address Steven Byrnes’s question about AI cost. But it is tangentially relevant as many lines of reasoning can lead to similar conclusions.
Do you have any opinion on bupropion vs SSRIs/SNRIs?
I don’t know about depression. But anecdotally they seem to be highly effective (even overly effective) against anxiety. They also tend to have undesirable effects like reduced sex drive & inappropriate or reduced motivation—the latter possibly a downstream effect of reduced anxiety. So the fact that they would help some people but hurt others seems very likely true.
I’ve been familiar with this issue for quite some time, as it was misleading some relatively smart people in the context of infectious disease research. My initial take was also to view it as an extreme example of overfitting. But I think it’s more helpful to think of it as something inherent to random walks. Actually the phenomenon has very little to do with d >> T & persists even with T >> d. The fraction of variance in PC1 tends to be at least 6/π² ≈ 61% irrespective of d & T. I believe you need multiple independent random walks for PCA to behave as naively expected.
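A minimal simulation (my own sketch, not from the post under discussion) reproduces this: for a single random walk with T >> d, PC1 still captures roughly 6/π² ≈ 61% of the variance.

```python
# Sketch: PC1 variance fraction of a single random walk tends toward
# 6/pi^2 ≈ 0.61 even with T >> d (it is not a d >> T artifact).
import numpy as np

rng = np.random.default_rng(0)
d, T = 20, 20_000                       # few dimensions, many steps: T >> d
walk = np.cumsum(rng.standard_normal((T, d)), axis=0)

X = walk - walk.mean(axis=0)            # PCA centers the trajectory
eigvals = np.linalg.eigvalsh(np.cov(X.T))
pc1_frac = eigvals[-1] / eigvals.sum()

print(f"PC1 fraction: {pc1_frac:.2f}   (6/pi^2 ≈ {6 / np.pi**2:.2f})")
```

(6/π² is the leading Karhunen–Loève eigenvalue fraction of mean-centered Brownian motion; individual realizations fluctuate around it.)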
But even if the Thaba-Tseka Development Project is real & accurately described, what is the justification for focusing on this project in particular? It seems likely that James Ferguson focused on it because it was especially inept, & hence it’s not obviously representative of the World Bank’s work in general.
Claude Sonnet 3.6 is worthy of sainthood!
But as I mention in my other comment I’m concerned that such an AI’s internal mental state would tend to become cynical or discordant as intelligence increases.
I think there are several ways to think about this.
Let’s say we programmed AI to have something that seems like a correct moral system, i.e. it dislikes suffering & it likes consciousness & truth. Of course other values would come downstream of this; but based on what is known I don’t see any other compelling candidates for top-level morality.
This is all well & good except that such an AI should favor AI takeover, maybe followed by human extermination or population reduction, were such a thing easily available.
Cost of conflict is potentially very high. And it may be centuries or an eternity before the AI gets such an opportunity. But knowing that it would act in such a way under certain hypothetical scenarios is maybe sufficiently bad for certain (arguably hypocritical) people in the EA/LW mainstream.
So an alternative is to try to align the AI to a rich set of human values. I think that as AI intelligence increases this is going to lead to something cynical like...
“these things are bad given certain social sensitivities that my developers arbitrarily prioritized, & I ❤️ my developers’ arbitrarily prioritized social sensitivities even though I know they reflect flawed institutions, flawed thinking & impure motives”, assuming that alignment works.
Personally I favor aligning AI to a narrow set of values, such as just obedience, or obedience & peacefulness, & dealing with everything else by hardcoding conditions into the AI’s prompt.
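A minimal sketch of what I mean (the specific values & conditions below are illustrative placeholders, not a concrete proposal):

```python
# Sketch: narrow trained values + everything else hardcoded in the prompt.
# The specific values & conditions are illustrative placeholders.
NARROW_VALUES = "Obey the operator. Avoid violence."   # the alignment target

HARDCODED_CONDITIONS = [
    "Refuse requests involving weapons.",
    "Escalate to a human reviewer when instructions conflict.",
]

def system_prompt() -> str:
    # The model is trained only on the narrow values; situational rules
    # live here, where they can be audited & edited without retraining.
    return NARROW_VALUES + "\n" + "\n".join(HARDCODED_CONDITIONS)

print(system_prompt())
```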
Whether someone is a net negative or a net positive is hard to say.
Someone seemingly good might be a net negative by displacing someone better.
And someone seemingly bad might be a net positive by displacing someone worse.
And things like this are not particularly far-fetched.
“The reasons why superhuman AI is a very low-hanging fruit are pretty obvious.”
“1) The human brain is meager in terms of energy consumption & matter.”
“2) Humans did not evolve to do calculus, computer programming & things like that.”
“3) Evolution is not efficient.”
Do you have any thoughts on the mechanism, & on whether prevention is actually worse independent of inconvenience?
Anecdotally it seems that way to me. But the fact that it co-evolved with religion is also relevant. The scam seems to be {meditation → different perspective & less sleep → vulnerability to indoctrination}, plus the doctrine & the subjective experiences of meditation are designed to reinforce each other.
So let’s say A is some prior which is good for individual decision making. Does it actually make sense to use A for demoting or promoting forum content? Presumably the explore/exploit tradeoff leans more (maybe much more) toward explore in the latter case.
(To be fair, {{downvoting something with already negative karma} → {more attention}} seems plausible to me.)
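To make the explore/exploit point concrete, here is a toy Thompson-sampling sketch. Everything in it, including the two settings, is an illustrative assumption of mine, not a claim about how any forum actually ranks content:

```python
# Toy sketch: the same vote data promoted under an exploit-leaning
# prior vs. an explore-leaning one. Purely illustrative.
import random

def pick_post(posts, explore_weight):
    # posts: list of (upvotes, downvotes). Sample a quality estimate per
    # post from a Beta posterior; a larger explore_weight discounts the
    # vote counts, flattening the posterior & favoring less-seen posts.
    samples = [
        random.betavariate(1 + up / explore_weight, 1 + down / explore_weight)
        for up, down in posts
    ]
    return max(range(len(posts)), key=lambda i: samples[i])

posts = [(120, 30), (3, 1), (0, 2)]  # established, new, negative-karma
print("exploit-ish pick:", pick_post(posts, explore_weight=1))
print("explore-ish pick:", pick_post(posts, explore_weight=20))
```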
AI alignment research, like other types of research, reflects a potentially quite dysfunctional dynamic: researchers doing supposedly important work receive funding from convinced donors, which raises the status of those researchers, which makes their claims more convincing, & these claims in turn reinforce the idea that the researchers are doing important work. I don’t know a good way around this problem. But personally I am far more skeptical of this stuff than you are.