I have not read the book, but my memory is that in a blog post he said that the probability is “at least” 10%. I think he holds a much higher number, but doesn’t want to speak about it and just wants to insist that even his hostile readers should accept at least 10%. In particular, if people respond “no, it won’t happen, it’s only 10%,” that’s not a rebuttal at all. But maybe I’m confusing that with other numbers, eg, here, where he says that it’s worth talking about even if it is only 1%.
Here he reports old numbers and new:
In Age of Em, I said:
Conditional on my key assumptions, I expect at least 30 percent of future situations to be usefully informed by my analysis. Unconditionally, I expect at least 5 percent.
I now estimate an unconditional 80% chance of it being a useful guide,
I think that means he previously put roughly 15% on ems in general (30% conditional on his key assumptions times ~15% on those assumptions gives the 5% unconditional) and 5% on his em scenario (ie, you were right).
80% on the specific scenario leaves little room for AI, let alone AI destroying all value. So maybe he now puts that <1%. But maybe he has just removed non-em non-AI scenarios. In particular, you have to put a lot of weight on completely unanticipated scenarios; perhaps that has gone from 80% to 10%.
Another example of a dictator driven from power by losing a war is the Greek Junta. They instigated a coup in Cyprus, triggering an invasion by Turkey, and then lost power at home.
But Bruce Bueno de Mesquita claims that dictators are much better at cutting their losses and surviving, whereas democracies double down and escalate to total war.
Does this link say anything about their illegal acquisition of the sources?
It sure looks to me like you and they are lying to distract. I condemn this lying, just as I condemned Christian’s proposed lies.
No, OpenAI is not arguing this. They are not arguing anything; they are just hiding their sources. Maybe they’re arguing this about using the public web as training data, but that doesn’t cover pirated books.
Yes, a model is transformative, not infringement. But the question was about the training data. Is that infringement? Distributing the Pile is a tort, and by sheer quantity probably a crime. Acquiring the training data was a tort and probably a crime. I’m not sure about possessing it. Even if OpenAI is shielded from criminal responsibility, a crime was necessary for the model’s creation, and the law was not enough to deter it.
If you want to ban or monopolize such models, push for that directly. Indirectly banning them is evil.
They’re already illegal. GPT-3 is based in large part on what appear to be pirated books. (I wonder if Google’s models are covered by its settlements with publishers.)
Could you give an example of this nonsense?
Or just visit to get information. Don’t choose antibiotics vs vitamins based on estimated value delivered, but diversify to learn about them all, to learn what it takes to deliver them. But the most valuable information will probably be unrelated to what you bring.
What is the role of ChatGPT? Do you see it as progress over GPT-3, or is it just a tool for discovering capabilities that were already available in GPT-3 to good prompt engineers? I see it as the latter, and I’m confused by the large number of people who seem to be impressed by it as progress. But in your previous post you mentioned our ignorance of GPT-3, so you seemed to already have large error bars. Is the importance that Chat is revealing those abilities and narrowing the ignorance?
I claim that Hanson puts a >1% chance on Yudkowsky’s scenario, in which AI comes first and destroys all value, and also a >1% chance that Ems come first and produce a scenario that a lot of people would say kills everyone, including the Ems. The latter is not directly relevant to the question about AI, but it suggests that he is sanguine about analogous AI scenarios: soft-takeoff scenarios not covered by Yudkowsky.
Yes, during the 2 years of wall-clock time, the Ems exist for 1000 subjective years. Is that so long? This is not “longtermism.” Yes, you should probably count the Ems as humans, so if they kill all the biological humans, they don’t “kill everyone”; but after this period they are outcompeted by something more alien. Does that count as killing everyone?
Working on capabilities isn’t a problem in his mainline, but the question was about tail events, not the mainline. If Ems are going to come first, then you could punt alignment to their millennium of work. But if it’s not guaranteed who comes first and AI is worse than Ems, then working on AI could cause it to come first. Or maybe not: maybe one is so much easier than the other that nothing is decision-relevant.
Yes, Hanson sees value drift as inevitable. The Ems will be outcompeted by something better adapted that we should see some value in. He thinks it’s parochial to dislike the Ems evolving under Malthusian pressures. Maybe, but it’s important not to confuse the factual questions with the moral questions. “It’s OK because there’s no risk of X” is different from “X is OK, actually.” Yes, he talks about the Dreamtime. Part of that is the delusion that we can steer the future more than Malthusian forces. But part of it is that because we are not yet under strict competition, we have excess resources that we can use to steer the future, if only a little.
Do you mean hard takeoff, or Yudkowsky’s worry that foom causes rapid value drift and destroys all value? I think Hanson puts maybe 5% on the latter and a much larger number on hard takeoff, 10 or 20%.
Strong disagree. Hanson believes that there’s more than a 1% chance of AI destroying all value.
Even if he didn’t see an inside view argument, he makes an outside view argument about the Great Filter.
He probably believes that there’s a much larger chance of it killing everyone, and his important disagreement with Yudkowsky is that he thinks it will have value in itself, rather than being a paperclip maximizer. In particular, in the Em scenario, he argues that property rights will keep humans alive for 2 years. Maybe you should read that as a <1% chance of all humans being killed in that first phase, but at some point the Ems evolve into something truly alien and he stops predicting that they don’t kill everyone. But that’s OK, because he values the descendants.
I’ve argued at length (1 2 3 4 5 6 7) against the plausibility of this scenario. It’s not that it’s impossible, or that no one should work on it, but that far too many take it as a default future scenario.
That sounds like a lot more than 1% chance.
Yes, this is a good illustration of you acting just like GPT.
This is a beautiful comment. First it gets the object-level answer exactly right. Then it adds an insult to trigger Thomas and get him to gaslight, demonstrating how human the behavior is. Unfortunately, this prevents him from understanding it, so it is of value only to the rest of us.
It was “good morning” that triggered the canned response. It then tried to figure out where to fit “bee” into it.
What is the goal? Is it to consume a particular resource? Is it to produce a particular product?
Yes, West Texas has abundant light and should have solar panels. Then you can ask what to do with the energy. You could just sell it to the grid. The advent of solar power will mean large daily swings in the price of energy, and a use of energy that can run flexibly, say in the mornings, will benefit from this. Desalination is one such application. Colocating it with the solar plant has some advantage in reducing negotiation with the grid, but that isn’t theoretically necessary. This doesn’t seem to me like a good enough reason to do things in West Texas.
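A minimal sketch of the arbitrage, with invented hourly prices (purely hypothetical, not real market data): compare what an always-on load pays against a flexible load that runs only during the cheapest hours of a solar-heavy day.

```python
# Hypothetical hourly wholesale prices ($/MWh) for one solar-heavy day:
# cheap midday glut, expensive evening ramp. Numbers invented for illustration.
prices = [40, 38, 36, 35, 35, 38, 45, 50,   # midnight to 8am
          30, 15, 8, 5, 4, 5, 8, 15,        # 8am to 4pm: solar glut
          40, 70, 90, 85, 70, 55, 48, 42]   # 4pm to midnight: evening ramp

# An always-on (baseload) consumer pays the daily average price.
baseload = sum(prices) / len(prices)

# A flexible load needing only 8 hours/day runs in the cheapest 8 hours.
flexible = sum(sorted(prices)[:8]) / 8

print(f"always-on pays  ${baseload:.0f}/MWh")
print(f"flexible pays   ${flexible:.0f}/MWh")
print(f"cost reduction  {1 - flexible / baseload:.0%}")
```

With these made-up numbers the flexible load pays roughly 70% less per MWh; the exact figure depends entirely on the assumed price curve.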
It hadn’t occurred to me that brackish water is a resource. If brackish water has 1/10 as much salt as seawater, then it takes 1/10 as much energy to desalinate (I think that is true both in theory and in practice, where practice is 10% efficient for both). So if you must desalinate water, brackish water is a resource. I’m skeptical of desalination for agriculture; it’s quite expensive, even at 1/10 the price. Whereas humans consume very little water, and desalination for residential use is cheap, comparable to the cost of distributing the water. Let people in Los Angeles water their yards as much as they like. If people want to live in West Texas, they can water their yards, too. But this isn’t a reason to live there.
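As a sanity check on the linear scaling (my own back-of-envelope, not from the comment): the thermodynamic minimum work of desalination in the dilute, low-recovery limit is the van ’t Hoff osmotic pressure, which is proportional to salt concentration.

```python
# Thermodynamic minimum work of desalination, approximated by the
# van 't Hoff osmotic pressure (valid for dilute solutions at low recovery).
R = 8.314        # gas constant, J/(mol*K)
T = 298.0        # temperature, K
M_NACL = 58.44   # molar mass of NaCl, g/mol
I_VANT_HOFF = 2  # NaCl dissociates into two ions

def min_energy_kwh_per_m3(salinity_g_per_l: float) -> float:
    """Minimum work to extract fresh water, in kWh per cubic meter."""
    c = salinity_g_per_l * 1000 / M_NACL   # concentration, mol/m^3
    pi = I_VANT_HOFF * c * R * T           # osmotic pressure, Pa = J/m^3
    return pi / 3.6e6                      # convert J/m^3 to kWh/m^3

seawater = min_energy_kwh_per_m3(35.0)  # typical seawater, ~35 g/L
brackish = min_energy_kwh_per_m3(3.5)   # brackish at 1/10 the salinity

print(f"seawater: {seawater:.2f} kWh/m^3")    # ~0.82, the usual cited minimum
print(f"brackish: {brackish:.2f} kWh/m^3")    # ~0.08
print(f"ratio:    {brackish / seawater:.2f}") # 0.10: linear in salt content
```

The ratio comes out to exactly 1/10 because osmotic pressure is linear in concentration; real plants use several times the minimum, but if the efficiency factor is similar for both feeds, the 1/10 ratio carries over.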
If the goal is to produce food, is this the optimal use of energy? Maybe better to make fertilizer and export it to places that have their own water.
If the goal is to promote decentralization, then maybe you don’t want to export fertilizer. But you probably want to think more about what you mean by decentralization (eg, self-sufficiency to survive trade decline vs escape from political oppression).
The first hit on Google says 1-4 parts per thousand, or about 1/10 as salty as seawater. If 0.5‰ is considered fresh, then 1-4‰ is probably still low enough to support some plants.
The Soviets actually did try mining with nuclear explosives. They decided that it was too polluting. Since they had a pretty high pollution tolerance, I’m inclined to believe them.
I would distinguish terraforming from irrigation. It sounds like you are talking about setting up a self-contained system to irrigate the land every year, whereas I would restrict terraforming to a permanent climate change, so that it rains and desalination is no longer necessary: the idea is to irrigate a bounded number of times to grow appropriate trees that trap water. This is what people mean when they talk about terraforming the Sahara. The desert was green five thousand years ago, around when the pyramids were built, so it probably has multiple equilibrium climates and a sufficient intervention could get it to jump to the other one. I don’t know how plausible this is for other deserts.
A permanent change could be much cheaper per acre, because the solar panels and desalination plant can be reused for new parcels. The downside is that it probably has large gains from scale: air flows freely between neighboring parcels, so humidity is an externality. The whole point of a self-contained system, by contrast, is that you can start small.