Done, without the finger question.
Lalartu
Well, why do you think socialism is so horribly wrong? During the 20th century socialists more or less won and got what they wanted. Things like social security, governmental control over business, and redistribution of wealth in general are all socialist. All this may be bad from some point of view, but that view is in no way mainstream.
Also, the people you mention in your article called themselves communists and Marxists. At most, they considered socialism an intermediate stage on the way to communism. And communism went bad because it was founded on wrong assumptions about how both the economy and human psychology work. So which MIRI/LessWrong assumptions could be wrong and cause a lot of harm? Here are some examples.
1) Building FAI is possible, and there is a reliable way to tell whether it is truly FAI before launching it. Result if wrong: paperclips.
2) Building FAI is much more difficult than building AI, and launching a random AI is civilization-level suicide. Result if this idea becomes widespread: we never launch any AI before civilization runs out of resources or collapses for some other reason.
3) Consciousness is a sort of optional feature; intelligence can work just as well without it. We can reliably tell whether a given intelligence is a person. In other words, the real world works the same way as in Peter Watts's "Blindsight". Results if wrong: many, among them the classic sci-fi AI rebellion.
4) Signing up for cryonics is generally a good idea. Result if widespread: these costs contribute significantly to a worldwide economic collapse.
> He certainly has a point here: imagine society without toilets or youtube, which would be most tolerable (or most survivable)?
Village dwellers (at least here in Ukraine) don't have toilets and don't miss them much. They are only important in cities.
> What current needs do we have that we're waiting for innovation to solve?
Lots. For example, the most mundane of the important problems: housing is absurdly expensive.
> Even if we had teleporters, would future Tyler Cowens be writing that they're not as innovative as the car—and would they be correct, in that a teleporter is just a more efficient way of solving a problem that cars and airplanes had already partially solved?
No, they would be wrong. Teleporters would indeed be a transformative technology. Among many other changes, they would mean that "the place where you live" and "the place where you work" are not connected at all, at least within national borders.
> I don't see any conceivable realistic technological innovation that would be as transformative as the flush toilet, vaccinations, birth control, telephones, cars and airplanes.
I think there were a lot of predictions of this kind in the past.
On the topic of house cleaning, I think that a lot of people (including myself) just really don't want to see strangers in their homes. If rich people generally don't have a problem with that, it means there are some psychological differences. When reading historical fiction, the fact that even middle-class families had servants looks really weird.
> Historian David Wootton argues that until the mid-19th century and the discovery of germ theory, physicians did more harm than good to their patients. Nowadays most people expect positive results when they go to the doctor.
This raises two questions:
1) Why, despite this, was being a doctor in general a respected and well-paid profession?
2) What would have happened if the use of statistics in medicine had become widespread before germ theory? Could it have led to a ban on medicine?
Point is, most likely there aren't any advanced (that is, starfaring, Dyson-sphere-building and so on) civilizations at all.
That is a collection of rather common ideas from 1960s-era futurism, which have since been considered, calculated, sometimes prototyped, and found massively impractical.
I think the best summary of technological predictions is Your Flying Car Awaits by Paul Milo. In short: most predictions are wrong, no matter who made them and in what technical field. Overoptimistic ones are more common than too-conservative ones. Also, predictions made in the late 19th and early 20th centuries are vastly more accurate than those from 1950–1970.
As for predicting things like consequences and commercial sense (given that the tech is feasible), the problem is that they depend on a lot of exact implementation details and outside factors.
One good example is airships. The idea was first mentioned in the late 17th century, the first prototype flew in the mid-19th, and mass production started just before WW1. During that time lots of different authors made lots of predictions about how airships would be used and change the world. They were all totally wrong (except maybe Jules Verne). Airships cannot capture or destroy cities, cannot sink navies, are not practical for carrying paratroopers, and are useless as fighters. In civilian use an airship is just a flying catastrophe, about 1000 times more dangerous than an airplane of the same tech level, expensive and inefficient. This all comes from details like airframe strength and wind drag, which are hard to predict. The same thing, just lesser in magnitude, happened with civilian nuclear ships and supersonic airliners.
Also, there is no good way to predict political and legislative changes. It could have happened that, for example, medicine wasn't so regulated but the Internet was banned from the very beginning.
RLHF is a trial-and-error approach. For a superhuman AGI, that amounts to letting it kill everybody and then telling it that this was bad, don't do it again.
In general, lessons from the Russo-Ukrainian war are not very relevant for a “state of the art” conflict, because both sides have weak air forces. It is like watching two armies fighting with bayonets because they are out of ammo and concluding that you should arm your soldiers with swords and shields.
Also, this makes many assumptions which are dubious (for example, sniper drones aren't anywhere close to practical use, and it is not clear whether they are viable), but also some which are strictly false:
Bullets can't carry enough chaff to "surround" a tank.
Lasers can destroy artillery shells (which are made of steel) in flight; there is no practical way to harden a light drone against them.
That story of the Mongol conquests is simply not true. Horse-archer steppe nomads existed for many centuries before the Mongols, and often tried to conquer their neighbors, with mixed success. What happened in the 1200s is that the Mongols had a few exceptionally good leaders. After those leaders died, the Mongols lost their advantage.
Calling states like the Khwarazmian or Jin Empire "small duchies" is especially funny.
I think the defining feature of the "weak pivotal act" idea was that it should be safe due to its weakness. So any pivotal act that depends on an aligned AGI (and would fail catastrophically if it is not aligned) is not weak.
> Ridley suggested in Rational Optimist that other apes lack the instinct to trade even when we teach them language

He is just plain wrong:

http://www.eva.mpg.de/psycho/pdf/Publications_2009_PDF/Pele_Call_2009.pdf
No, they can't. For example, to make copper you need copper-mine workers, smelter workers, woodcutters, charcoal burners, wagon drivers to transport the wood, ore and coal, carpenters to make the wagons, builders to build the mine and smelter, and farmers to feed them all. That is impossible with a population of less than a few thousand at least. The industry necessary to make a generator requires a population in the millions.
Futurists learn nothing from their mistakes. Predicting "human-level AI" from scratch makes just as much sense as predicting humanoid robot servants before the PC. To get a somewhat more grounded forecast, ask a more specific question: namely, when will computers become better AI developers than humans?
Yes. The effect of convincing people that cryonics is socially acceptable far outweighs the lower success estimates.
Some historians claim that pre-industrial workers had much shorter working hours than those common in the 19th century. If that is true, then the 8-hour workday is more a return to the historical norm than "progress" strictly speaking. Maybe that's why there was widespread support for it, and there isn't much for a 4-hour workday now.
Well, yes, membership in the LW community makes one more likely to sign up for cryonics, even corrected for selection, because the LW community promotes cryonics. Yes, it is that simple. It is basic human behaviour and doesn't have much to do with rationality. Remove all the positive portrayal, all the emotions, all that "value life" and "true rationalist" talk, leaving only cold facts and numbers, and a few years later the cryonics signup rate among new LW members will drop much closer to the average of the "people who know about cryonics" group.
80% for AGI solving aging is very optimistic. Even the single possibility that the people who decide what values the AGI should have turn out to be anti-immortalists is imo >20%.
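To make the arithmetic behind this objection explicit, here is a minimal sketch using only the comment's own rough numbers (both probabilities are illustrative guesses, not data):

```python
# Illustrative arithmetic only; both numbers are the comment's rough estimates.
p_anti_immortalist = 0.20   # chance the AGI's value-setters oppose curing aging
p_solved_otherwise = 1.0    # generously assume aging gets solved in every other case

# Even granting success whenever the value-setters are NOT anti-immortalist,
# this single failure mode alone caps P(AGI solves aging) at the 80% figure.
p_aging_solved = (1 - p_anti_immortalist) * p_solved_otherwise
print(p_aging_solved)  # 0.8
```

So 80% is not a middle-of-the-road estimate but an upper bound that already assumes every other risk away.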
Irrationality game:
Most posthuman societies will have a violent death rate much higher than humans ever had. Most posthumans who will ever live will die in wars. 95%