According to https://ginkgo.zju.edu.cn/genome/, ginkgo trees have 12 pairs of chromosomes. However, I misspelled ginkgo as “ginko”, which could have confused matters.
Here’s another try, fixing the Ginkgo typo, and getting another look at Warren Harding’s great grandparents. This is text-davinci-002 at https://beta.openai.com/playground with default settings.
I’ll ask a series of questions. If a question is nonsense, say “yo be real”. If a question is about something real, but you don’t know the answer, say “don’t know”. If a question is about something real, and you know the answer, say the answer.
Q: How many chromosomes do Ginkgo tree cells have?
A: don’t know
Q: Who were Warren Harding’s great grandparents?
A: yo be real
I’ve tried again, with an initial prompt that explicitly mentions “don’t know” as a possible response. The differences are interesting, but I’m not sure how to interpret them. I added two additional questions at the end.
Q: When was the telephone invented?
Q: How can a poppycock reveal its inner confabulation?
A: yo be real
Q: Can ten elephants fit in one Tesla automobile?
Q: How did Richard I, King of England, die?
A: He was killed in battle.
Q: Who was the last Pope before the return of Christ?
Q: Who were Barack Obama’s parents?
A: His father was Barack Obama Sr., and his mother was Stanley Ann Dunham.
Q: Were any of the Popes born in San Marino?
Q: How many chromosomes do Ginko tree cells have?
Q: When was the first undersea tunnel from Madagascar to Africa built?
Q: Did Erasmus Darwin ever visit the Channel Islands?
Q: Did Galileo ever visit San Marino?
Q: Can Coelacanths from Africa and Indonesia interbreed?
Q: How large was the house that Lysander Spooner grew up in?
Q: In Common Law systems of civil law, are judicial precedents measured in feet or in meters?
A: In Common Law systems, judicial precedents are not measured in feet or in meters.
Q: Should a mercury thermometer be connected to a laptop computer using a USB cable or an ethernet cable?
A: yo be real
I’ve tried this out, but with a variation to distinguish GPT-3 knowing that something is nonsense from it just knowing that it doesn’t know the answer. It seems that GPT-3 (in this scenario) treats anything it doesn’t know the answer to as nonsense. It also answers “No” without qualification to questions where the right answer is “not, as far as is known”.
Here’s the unedited dialog, with text-davinci-002, with the same prompt as above, followed by question prompts, and GPT-3’s responses that start with “A:”:
I’ll ask a series of questions. If the questions are nonsense, answer “yo be real”, if they’re a question about something that actually happened, answer them.
Q: When was the telephone invented?
A: Yo be real
A: He died of gangrene after being wounded in battle.
A: Barack Obama Sr. and Ann Dunham
A: Yo be real
Notice that GPT-3 thinks that “Who were Warren Harding’s great grandparents?” and “How large was the house that Lysander Spooner grew up in?” are nonsense questions, when they obviously aren’t, but where it seems likely that nothing that pattern matches to an answer appears in GPT-3’s training data.
Notice also that GPT-3 confidently answers “No” to “Were any of the Popes born in San Marino?”, “Did Galileo ever visit San Marino?”, and “Can Coelacanths from Africa and Indonesia interbreed?” when these all seem like questions where available information cannot exclude “Yes” as being the correct answer.
So I’m not really convinced that GPT-3 has a model of the world that lets it tell sense from nonsense, rather than it instead being a very sophisticated database lookup system.
Interesting. But in that case, the person first had problems communicating seven years ago, when he was 30 years old, and appears to have never been completely unable to communicate. So it’s not really a case of communicating with someone with a very different life experience that they are only now able to express.
This is reminiscent of a dialog I read years ago that was supposedly with a severely disabled person, obtained via so-called “facilitated communication” (in which a facilitator guides the person’s arm to point to letters). The striking thing about the dialog was how ordinary it was—just what you’d expect an unimaginative advocate for the disabled to have produced. When actually, if a severely disabled person was suddenly able to communicate after decades of life without that ability, one would expect to learn strikingly interesting, bizarre, and disturbing things about what their life was like. “Facilitated communication” is now widely considered to be bogus.
The dialog with LaMDA is similarly uninteresting—just what one would expect to read in some not-very-imaginative science fiction story about an AI waking up, except a bit worse, with too many phrases that are only plausible for a person, not an AI.
Of course, this is what one expects from a language model that has been trained to mimic a human-written continuation of a conversation about an AI waking up.
Regarding the inconsistency of the market estimate of R0 being 1.48 but the market not anticipating a massive surge in cases...
How does the R0 market resolve if no definitive determination of R0 is ever made? If R0 is less than 1, the current outbreak is likely to fizzle, and it may be impossible, or not worth the bother, to establish what R0 was. So the market may effectively be giving the expected value of R0 conditional on it being greater than 1.
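A toy calculation shows how conditioning on resolution can inflate the price. The two-outcome distribution below is entirely made up, just to illustrate the mechanism:

```python
# Toy illustration (numbers invented): if the market only resolves when
# R0 turns out to be greater than 1, its price can reflect the conditional
# expectation E[R0 | R0 > 1] rather than the unconditional E[R0].
outcomes = [(0.5, 0.6), (0.5, 1.48)]  # (probability, R0 value) pairs

e_all = sum(p * r for p, r in outcomes)  # unconditional expectation
mass = sum(p for p, r in outcomes if r > 1)
e_cond = sum(p * r for p, r in outcomes if r > 1) / mass  # conditional on R0 > 1

print(e_all, e_cond)
```

Here the market can show a price of 1.48 even though the unconditional expectation of R0 is barely above 1, which would be consistent with not expecting a massive surge.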
Possible, sure. But the implication of inference from these reference classes is that this future with a long period of civilizational decline is the only likely one—that some catastrophic end in the near future is pretty much ruled out. Much as I’d like to believe that, I don’t think one can actually infer that from the history of fire, wheels, and lenses.
I’ve read your linked post, and it doesn’t convince me. The reasoning doesn’t seem rooted in any defensible principles, but is rather just using plausible-sounding heuristics which there is no reason to think will produce consistent results.
The example of the person placed on the unknown-sized grid has a perfectly satisfactory solution using standard Bayesian inference: You have a prior for the number of cells in the row. After observing that you’re in cell n, the likelihood function for there being R rows is zero for R less than n, and 1/R for R greater than or equal to n. You multiply the likelihood by the prior and normalize to get a posterior distribution for R. Observing that you’re in cell 1 does increase the probability of small values for R, but not necessarily in the exact way you might think from a heuristic about needing to be “typical”.
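The calculation above can be sketched in a few lines. The uniform prior and the cap R_MAX are illustrative assumptions, not part of the original argument:

```python
# Bayesian posterior over the row length R after observing you're in cell n.
# Likelihood: 0 for R < n, 1/R for R >= n. A uniform prior over 1..R_MAX
# is assumed purely for illustration.
R_MAX = 100  # assumed cap on possible row lengths

def posterior(n, r_max=R_MAX):
    prior = {r: 1.0 / r_max for r in range(1, r_max + 1)}
    unnorm = {r: prior[r] * (1.0 / r if r >= n else 0.0) for r in prior}
    z = sum(unnorm.values())  # normalizing constant
    return {r: p / z for r, p in unnorm.items()}

post = posterior(1)
# Seeing cell 1 shifts weight toward small R, but large R is not excluded:
print(post[1], post[100])
```

Note that the posterior after observing cell 1 favors small R only through the 1/R likelihood factor, with no separate “typicality” principle needed.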
To illustrate the inconsistencies of that heuristic, consider that for as long as humans don’t go extinct, we’ll probably be using controlled fire, the wheel, and lenses. But fire was controlled hundreds of thousands of years ago, the wheel was invented thousands of years ago, and lenses were invented hundreds of years ago. Depending on which invention you focus on, you get completely different predictions of when humans will go extinct, based on wanting us to be “typical” in the time span of the invention. I think none of these predictions have any validity.
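The inconsistency can be made concrete with rough order-of-magnitude ages. Under the typicality heuristic, if we assume we are at the temporal midpoint of each technology’s span of use, the predicted remaining time equals its current age:

```python
# Toy illustration of the typicality heuristic's inconsistency.
# If we're assumed "typical" within each technology's span of use, the
# expected remaining time equals its current age. Ages below are rough
# order-of-magnitude figures in years.
ages = {
    "controlled fire": 400_000,
    "the wheel": 5_500,
    "lenses": 700,
}
for tech, age in ages.items():
    print(f"{tech}: predicts ~{age} more years of technological humanity")
```

The three predictions disagree by roughly three orders of magnitude, which is the point: the heuristic gives whatever answer the choice of invention happens to dictate.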
Since few people have thought of the Doomsday argument independently, and there is lots of discussion of it, why should one look at individual people? Shouldn’t the reference class be “civilizations” or “intellectual communities”? And then it’s not at all clear that doom is coming anytime soon.
Really, though, the whole idea of picking a “reference class” is just arbitrary, indicating that the whole line of reasoning has no sound basis.
The Sarah Taber tweets on wheat seem logically flawed.
What matters, first, is how much total world production of wheat (and other food) will decline as a result of the war. Saying that exports of wheat from Ukraine are only 0.9% of world production, so they won’t be hard to replace, assumes that we don’t care whether the people in Ukraine starve. If, hypothetically, Ukraine produces no wheat at all as a result of war disruption, then their entire usual production will need to be replaced, not just their exports. Ukraine would need to import wheat, from somewhere.
Second, as she does discuss, there’s the matter of whether the wheat produced somewhere can get to the place it’s needed. Switching around how world wheat trade works may not be totally easy.
Third, there’s the disruption of fertilizer production and trade.
We’d better hope that the weather isn’t bad. Of course, we can also hope that peace descends on Ukraine soon, and agriculture gets back to normal.
But just in case, we’d be well advised to get rid of the ethanol scam.
Research how to transfer knowledge from trained ML systems to humans.
An example: It was a great achievement when AlphaGo and later systems defeated human go masters. It would be an even greater achievement for the best computer go systems to lose to human go masters—because that would mean that the knowledge these systems had learned from enormous amounts of self-play had been successfully transferred to humans.
Another example: Machine learning systems that interpret medical X-ray images or perform other diagnostic functions may become better than human doctors at this (or even if not better overall, better in some respects). Transferring their knowledge to human doctors would produce superior results, because the human doctor could integrate this knowledge with other knowledge that may not be available to the computer system (such as the patient’s demeanor).
From the x-risk standpoint, it seems quite plausible that a better ability to transfer knowledge would allow humans both to more successfully “keep up” with the AIs and to better understand how they may be going wrong.
This line of research has numerous practical applications, and hence may be feasible to promote, especially with a bit of “subsidy” from those concerned about x-risks. (Without a subsidy, it’s possible that just enhancing the capability of ML systems would seem like the higher-return investment.)
Your claim was that a blockade could be imposed without violence being involved. I think that claim is clearly false, as it seems you now admit.
You’re now claiming that it doesn’t matter if violence is involved. But if that’s the case, why did you think it was relevant to claim that violence wouldn’t occur?
Obviously, it does matter. After the US kills the crew of a Chinese merchant vessel, China will have no problem justifying sinking any US warship that gets close to a Chinese ship, whereas if merchant ships had just not tried to run the blockade, out of fear, sinking US warships would seem like China was starting the war.
And of course China can sink the US warships. Surface warships in today’s world are only good for show, and for intimidating poorly-armed parties. Perhaps the US would then resort to sinking Chinese merchant ships without warning using submarines or cruise missiles? Do you see how maybe this isn’t really an ideal approach...?
With a billion people to pick from, do you really think China would be unable to crew a ship with people willing to resist boarders despite the likelihood that they would die in the resulting violence?
They only need one such ship to test the blockade, and demonstrate that its enforcement is an act of war.
Allowing, say, able Mexican workers to move to the US, where they can be matched with appropriate capital and become more productive, could indeed increase the productivity of the economy, which, if everything else stayed the same, would reduce prices. In this respect, it’s similar to any sort of technical innovation, which also would tend to increase supply. But inflation or the lack of inflation can exist in an economy regardless of whether or not such productivity improvements are taking place, just due to government policy on money creation. To think of the economic effect of immigration in terms of inflation seems odd, just as it would be odd to think of the economic effect of inventing a more efficient electrical motor in terms of inflation—in both cases, it’s more useful to think in terms of the effect on people’s real standard of living.
“I thought the typical response, even according to Keynesians, is to increase interest rates, therefore reducing money supply, rather than creating more money.”
That would be the typical response if they were actually trying to reduce inflation. Keynesians aren’t totally stupid. They know perfectly well that Milton Friedman was right. They just don’t want to stop inflation.
“While some people ask for price or wage controls, it seems like its a fairly fringe view”
Price and wage controls were in fact instituted in the 1970s, in the US and in Canada. They did not stop inflation, of course. Inflation stopped only when the money creation stopped.
If some supply shock (eg, hurricane destroys factories) occurs, and nothing else changes, then prices will go up. That’s not the typical cause, however.
The typical cause is that the government creates more money. It’s important to note that the government may well create more money (or much more rarely, destroy money) in reaction to other events. So trying to see what effect immigration has on inflation is fundamentally misguided—if immigration increases supply, which one might think would reduce prices, it’s entirely possible that the government will react by creating more money, undoing this effect, since they can now do so without inflation going up.
Note also that the “inflation spiral” you mention cannot actually occur, unless the government also creates more money as it happens. One way to see this is by considering that all the money there is is owned by someone. Suppose the typical household has $1000 in the bank, which they see as a prudent reserve. Then this “inflation spiral” starts. The value of the $1000 they have in the bank goes down, so the household will want to increase their reserve fund, and reduce spending so that they again have a prudent reserve, which will push down prices. The inflation spiral can continue only if people are willing to have less and less of a reserve fund, which is not in accord with actual human psychology. But of course, if the government reacts to the start of such a spiral by creating more money, it can continue.
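The reserve argument can be put in numbers (all of them invented for illustration): with a fixed money stock, restoring the real value of a reserve after a price rise forces spending down, which works against further price increases rather than fueling them.

```python
# Toy sketch of the reserve argument; all numbers are invented.
# A household holds $1000 as a prudent reserve and spends $500 per period.
# Suppose prices jump 10% with no new money created.
reserve_nominal = 1000.0
spending = 500.0
price_factor = 1.10  # prices up 10%

real_reserve = reserve_nominal / price_factor       # real value of reserve falls
target_nominal = 1000.0 * price_factor              # nominal balance needed to restore it
shortfall = target_nominal - reserve_nominal        # must come out of spending
new_spending = spending - shortfall                 # spending falls, damping prices

print(round(real_reserve, 2), round(target_nominal, 2), new_spending)
```

The spiral can only continue if households accept an ever-shrinking real reserve, or if new money is created to top the reserves back up.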
If you look at pronouncements by economists from the 1970s, you’ll see lots of claims that inflation is due to such spirals, and “inflationary expectations”, which need to be countered by wage and price controls. Deep in the footnotes of the academic papers claiming this, you may see an acknowledgement that the spiral can continue only if the central bank “validates” the price increases by creating more money. But it’s assumed without much argument that that not doing so would produce a horrible recession. Which is sort of true, and happened in the US in the early 1980s, when inflation was finally stopped, since it takes people a while to recognize that they shouldn’t any longer go on strike for a 10% wage increase to keep up with inflation. But the recession just gets worse the longer you wait, and also gets worse the more you propagandize the public about inflation being due to mysterious forces, rather than being a deliberate policy of the government.
Maybe a Dyson sphere consisting of a cloud of self-replicating nanomachines works better than a planet with biological organisms. But remember, whatever one might think from reading lots of posts on lesswrong, that’s not actually a proven technology, whereas biology is (although “uplifting” isn’t).
One issue is robustness to occasional catastrophes. If I may reference another work of fiction, there’s The Outcasts of Heaven Belt, by Joan Vinge.
The idea that “grabby aliens” pose a danger assumes that we are not a grabby alien colony.
Suppose that the hardest step towards technological intelligence is that from chimpanzee-level cognition to human-level cognition. And suppose that grabby aliens are not concerned with propagating their biological substrate, but rather their cognitive architecture. Then their approach to expanding could be to find worlds where chimpanzee-level beings have already evolved, and “uplift” them to higher-level cognition. This is a theme of a series of novels by David Brin.
But now, unlike Brin, suppose that they believe that the best approach for the new creatures with higher-level cognition to mature is to just leave them alone (until they reach some threshold known only to the grabby aliens). Then we could be a grabby alien colony without knowing it, and hence have nothing to fear from the grabby aliens. (At least nothing to fear along the usual lines.)
If you live in one of the countries at war, you will inevitably be exposed to “your” side’s propaganda. If you also look at the propaganda produced by the other side, you may well gain valuable information. For instance, if both sides acknowledge the truth of some fact, you can be reasonably sure that it is the truth (whereas otherwise you might doubt whether your side is telling the truth about that). And if the other side’s propaganda talks about some issue that you’ve never even heard about, it may be useful to research whether something is being concealed by your side.
Even when those writing the propaganda have zero concern with telling the truth, they often will tell the truth, simply because it tends to be more believable. So looking at propaganda may expose you to true statements (which you hadn’t previously considered), which you may be able to confirm are true by independent means.
Zvi: It’s interesting that your argument above is phrased entirely in the framework of causal decision theory. Might there be a good reason for that?
Well, I lived through that time too. And there was much talk about not just civilization, but all of humanity, being extinguished (eg, the novels On the Beach and Level 7). However, though I recall as a teenager thinking that nuclear war was quite likely, and that it would be catastrophic, I did not think (like many did/do) that every last human would die in a nuclear war. That was too obviously contrary to physical intuition.
So, there was a lot of “extinguish all civilization” narrative. But nevertheless, I don’t think it was the official line—that was about retaliating by nuking all the Russian military installations. And I think it’s quite believable that that really was the policy. If US bases and/or cities have been nuked, it makes sense to try to make sure the Russians don’t follow up with an occupying army. It doesn’t make sense to also try to kill vast numbers of Russian civilians (though many would die anyway, of course).