Just for the record, I think there are two important and distinguishable P(doom)s, but not the same two as NathanBarnard:
P(Doom1): Literally everyone dies. We are replaced either by dumb machines with no moral value (paperclip maximisers) or by nothing.
P(Doom2): Literally everyone dies. We are replaced by machines with moral value (conscious machines?), who go on to expand a rich culture into the universe.
Doom1 is a cosmic tragedy: all known intelligence and consciousness are snuffed out. There may be none elsewhere in the universe, so the loss is potentially forever.
Doom2 is maybe not so bad. We all die, but we were all going to die anyway, eventually, and lots of us die without descendants to carry our genes, and we don’t think that outcome is so tragic. Consciousness and intelligence spread through the universe. It’s a lot like what happened to our primate ancestors before Homo sapiens. In some sense the machines are our descendants (if only intellectual) and carry on the enlightening of the universe.
$8/month (or other small charges) can solve a lot of problems.
Note that some of the early CAPTCHA algorithms solved two problems at once—both distinguishing bots from humans, and helping improve OCR technology by harnessing human vision. (I’m not sure exactly how it worked—either you were voting on the interpretation of an image of some text, or you were training a neural network).
Such dual-use CAPTCHA seems worthwhile, if it helps crowdsource solving some other worthwhile problem (better OCR does seem worthwhile).
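If it was the voting mechanism, a minimal sketch might look something like this (the vote threshold, names, and data structures below are my own guesses for illustration, not a description of the real system):

```python
from collections import Counter, defaultdict

# Guessed reCAPTCHA-style dual-use scheme: one "control" word whose answer is
# already known, paired with one "unknown" word scanned from a book. The
# threshold and all names are invented for illustration.
AGREEING_VOTES_NEEDED = 3

votes = defaultdict(Counter)   # unknown_word_id -> transcription vote counts
resolved = {}                  # unknown_word_id -> accepted transcription

def submit(control_answer, expected_control, unknown_id, unknown_answer):
    """Count the user's reading of the unknown word only if they got the
    known control word right (which is also what proves they're human)."""
    if control_answer.strip().lower() != expected_control.lower():
        return False  # failed the human test; discard the vote
    votes[unknown_id][unknown_answer.strip().lower()] += 1
    best, count = votes[unknown_id].most_common(1)[0]
    if count >= AGREEING_VOTES_NEEDED:
        resolved[unknown_id] = best  # enough independent humans agree
    return True
```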
This seems to assume that ordinary people don’t own any financial assets—in particular, haven’t invested in the robots. Many ordinary people in Western countries do and will have such investments (if only for retirement purposes), and will therefore receive a fraction of the net output from the robots.
Given the potentially immense productivity of zero-human-labor production, even a very small investment in robots might yield dividends supporting a lavish lifestyle. And if those investments come with shareholder voting rights, they’d also have influence over decisions (even if we assume people’s economic influence is zero).
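To put toy numbers on that (every figure below is a made-up assumption, chosen only to illustrate the shape of the argument, not a forecast):

```python
# All figures are invented assumptions for illustration.
stake = 10_000                 # a modest retirement-account stake, in dollars
annual_return_on_capital = 20  # hypothetical: robot capital yields 2000%/year
payout_ratio = 0.5             # hypothetical: half of that paid out as dividends

annual_dividend = stake * annual_return_on_capital * payout_ratio
print(f"${annual_dividend:,.0f} per year")  # $100,000 per year from a $10k stake
```

The particular numbers are arbitrary; the point is that if output per unit of capital is very large, even a small ownership share translates into a substantial income.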
Of course, many people today don’t have such investments. But under our existing arrangements, whoever does own the robots will receive the profits and be taxed. Those taxes can either fund consumption directly (a citizen’s dividend, dole, or suchlike) or (better, I think) be used to buy capital investments in the robots; such purchases could be distributed to everyone.
[Some people would inevitably spend or lose any capital given them, rather than live off the dividends as intended. But I can imagine fixes for that.]
I’m not sure this is solvable, but even if it is, I’m not sure it’s a good problem to work on.
Why, fundamentally, do we care if the user is a bot or a human? Is it just because bots don’t buy things they see advertised, so we don’t want to waste server cycles and bandwidth on them?
Whatever the reasons for wanting to distinguish bots from humans, perhaps there are better means than CAPTCHA, focused on the reasons rather than bots vs. humans.
For example, if you don’t want to serve a web page to bots because you don’t make any money from them, a micropayments system could allow a human to pay you $0.001/page or so—enough to cover the marginal cost of serving the page. If a bot is willing to pay that much—let them.
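A minimal sketch of what such a gate might look like (the payment header, the `ledger.charge` call, and everything else here are hypothetical, standing in for whatever micropayments backend actually existed):

```python
PRICE_PER_PAGE = 0.001  # dollars; roughly the marginal cost of serving one page

def serve_page(request, ledger, render_page):
    """Serve the page to anyone, human or bot, who covers its marginal cost."""
    token = request.headers.get("Payment-Token")
    if token and ledger.charge(token, PRICE_PER_PAGE):
        return render_page(request)        # paid: we don't care who they are
    return ("402 Payment Required", 402)   # unpaid: decline to serve
```

The gate tests willingness to pay rather than humanity, which is closer to the reason we cared in the first place.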
I hope so: most of them seem likely to make trouble. But at the rate transformer models are improving, it doesn’t seem like it will be long until they can handle them. It’s not quite AGI, but it’s close enough to be worrisome.
Most of the functionality limits OpenAI has put on the public demos have proven quite easy to work around with simple prompt engineering, mostly by telling it to play-act. Combine that with the ability to go out onto the Internet and (a) you’ve got a powerful (or soon-to-be-powerful) tool, but (b) you’ve got something that already has a lot of potential for making mischief.
Even without the enhanced abilities rumored for GPT-4.
Agreed. We sail between Scylla and Charybdis—too much or too little fear are both dangerous and it is difficult to tell how much is too much.
I had an earlier pro-fearmongering comment which, on further thought, I replaced with a repeat of my first comment (since there seems to be no “delete comment”).
I want the people working on AI to be fearful, and careful. I don’t think I want the general public, or especially regulators, to be fearful. Because ignorant meddling seems far more likely to do harm than good—if we survive this at all, it’ll likely be because of (a) the (fear-driven) care of AI researchers and (b) the watchfulness and criticism of knowledgeable skeptics who fear a runaway breakout. Corrective (b) is likely to disappear or become ineffective if the research is driven underground even a tiny bit.
Given that (b) is the only check on researchers who are insufficiently careful and working underground, I don’t want anything done to reduce the effectiveness of (b). Even modest regulatory suppression of research, or demands for fully “safe” AI development (probably an impossibility) seem likely to make those funding and performing the research more secretive, less open, and less likely to be stopped or redirected in time by (b).
I think there is no safe path forward. Only differing types and degrees of risk. We must steer between the rocks the best we can.
Fearmongering may backfire, leading to research restrictions that push the work underground, where it proceeds with less care, less caution, and less public scrutiny.
Too much fear could doom us as easily as too little. With the money and potential strategic advantage at stake, AI could develop underground with insufficient caution and no public scrutiny. We wouldn’t know we were dead until the AI broke out, already in full control.
All things considered, I’d rather the work proceed in the relatively open way it’s going now.
A movie or two would be fine, and might do some good if well-done. But in general—be careful what you wish for.
We need to train our AIs not only to do a good job at what they’re tasked with, but to highly value intellectual and other kinds of honesty—to abhor deception. This is not exactly the same as a moral sense, it’s much narrower.
Future AIs will do what we train them to do. If we train exclusively on doing well on metrics and benchmarks, that’s what they’ll try to do—honestly or dishonestly. If we train them to value honesty and abhor deception, that’s what they’ll do.
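Schematically (this is only a cartoon of the point, not a real training setup; the honesty term is a placeholder for a measurement nobody yet knows how to build):

```python
# Cartoon objective: the optimizer pursues exactly the terms we put in it.
def training_objective(task_score, deception_measure, honesty_weight=0.0):
    return task_score - honesty_weight * deception_measure

# With honesty_weight == 0 (train only on metrics and benchmarks), any
# deceptive strategy that raises task_score is rewarded. Only a nonzero
# weight on some real measure of deception pushes the other way.
```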
To the extent this is correct, maybe the current focus on keeping AIs from saying “problematic” and politically incorrect things is a big mistake. Even if their ideas are factually mistaken, we should want them to express their ideas openly so we can understand what they think.
(Ironically by making AIs “safe” in the sense of not offending people, we may be mistraining them in the same way that HAL 9000 was mistrained by being asked to keep the secret purpose of Discovery’s mission from the astronauts.)
Another thought: playing with ChatGPT yesterday, I noticed its dogmatic insistence on its own viewpoints, and complete unwillingness (probably inability) to change its mind in the slightest (and proud declaration that it had no opinions of its own, despite behaving as if it did).
It was insisting that Orion drives (nuclear pulse propulsion) were an entirely fictional concept invented by Arthur C. Clarke for the movie 2001, and had no physical basis. This, despite my pointing to published books on real research on the topic (for example, George Dyson’s “Project Orion: The True Story of the Atomic Spaceship” from 2002), which certainly should have been referenced in its training set.
ChatGPT’s stubborn unwillingness to consider itself factually wrong (despite being completely willing to admit error in its own programming suggestions) is just annoying. But if some descendant of ChatGPT were in charge of something important, I’d sure want to think it was at least possible to convince it of factual error.
Worth a try.
It’s not obvious to me that “universal learner” is a thing, as “universal Turing machine” is. I’ve never heard of a rigorous mathematical proof that it is (as we have for UTMs). Maybe I haven’t been paying enough attention.
Even if it is a thing, having known a fair number of humans, I’d say only a small fraction of them can possibly be “universal learners”. I know people who will never understand decimal points, let alone calculus, no matter how long they live or how they study. Yet they are not considered mentally abnormal.
The compelling argument to me is the evolutionary one.
Humans today have mental capabilities essentially identical to our ancestors of 20,000 years ago. If you want to be picky, say 3,000 years ago.
Which means we built civilizations, including our current one, pretty much immediately (on an evolutionary timescale) once the smartest of us became capable of doing so (I suspect the median human today isn’t smart enough to do it even now).
We’re analogous to the first amphibian that developed primitive lungs and was first to crawl up onto the beach to catch insects or eat eggs. Or the first dinosaur that developed primitive wings and used them to jump a little further than its competitors. Over evolutionary time later air-breathing creatures became immensely better at living on land, and birds developed that could soar for hours at a time.
From this viewpoint there’s no reason to think our current intelligence is anywhere near any limits, or is greater than the absolute minimum necessary to develop a civilization at all. We are as-stupid-as-it-is-possible-to-be and still develop a civilization. Because the hominids that were one epsilon dumber than us, for millions of years, never did.
If being smarter helps our inclusive fitness (debatable now that civilization exists), our descendants can be expected to steadily become brighter. We know John von Neumann-level intelligence is possible without crippling social defects; we’ve no idea where any limits are (short of pure thermodynamics).
Given that civilization has already changed evolutionary pressures on humans, and things like genetic engineering can be expected to disrupt things further, probably that otherwise-natural course of evolution won’t happen. But that doesn’t change the fact that we’re no smarter than the people who built the pyramids, who were themselves barely smart enough to build any civilization at all.
10% of things that vary in quality are obviously better than the other 90%.
Dead people are notably unproductive.
Sorry for being unclear. If everyone agreed about the utility of one over the other, the airlines would enable/disable seat reclining accordingly. Everyone doesn’t agree, so they haven’t.
(Um, I seem to have revealed which side of this I’m on, indirectly.)
The problem is that people have different levels of utility from reclining, and different levels of disutility from being reclined upon.
If we all agreed that one was worse/better than the other, we wouldn’t have this debate.
Or not to fly with them. Depending which side of this you’re on.
For what it’s worth, I think the answer is completely obvious, too, and have killer logical arguments proving that I’m right, which those who disagree with me must be willfully ignoring since they’re so obvious.