Agnostic on the argument itself, but I really feel LessWrong would be improved if down-voting required a justifying comment.
The Involuntary Pacifists
As a path to AGI, I think token prediction is too high-level, unwieldy, and bakes in a number of human biases. You need to go right down to the fundamental level and optimize prediction over raw binary streams.
The source generating the binary stream can (and should, if you want AGI) be multimodal. At the extreme, this is simply a binary stream from a camera and microphone pointed at the world.
Learning to predict a sequence like this is going to lead to knowledge that humans don’t currently have (because the predictor would need to model fundamental physics and all it entails).
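As a rough illustration, and purely a sketch, here's what next-byte prediction over a raw stream could look like in PyTorch. The random tensor is just a stand-in for real sensor bytes (e.g. interleaved camera/microphone data); model size and names are arbitrary.

```python
# Minimal sketch: next-byte prediction over a raw binary stream.
# The "stream" here is random bytes standing in for a real multimodal source.
import torch
import torch.nn as nn

class BytePredictor(nn.Module):
    """Predicts the next byte (0-255) given the bytes seen so far."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(256, hidden)   # one embedding per byte value
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 256)       # logits over the next byte

    def forward(self, bytes_in: torch.Tensor) -> torch.Tensor:
        x = self.embed(bytes_in)
        out, _ = self.rnn(x)
        return self.head(out)                    # (batch, seq, 256)

# Toy training loop on a placeholder stream of raw bytes.
stream = torch.randint(0, 256, (8, 129))         # stand-in for real sensor bytes
model = BytePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):
    inputs, targets = stream[:, :-1], stream[:, 1:]   # predict byte t+1 from bytes <= t
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, 256), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point is only that the objective is defined at the level of raw bytes, with no tokenizer or human-curated vocabulary in between.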
Reward Is Not Necessary: How To Create A Compositional Self-Preserving Agent For Life-Long Learning
O-risk, in deference to Orwell.
I do believe Huxley’s Brave New World is a far more likely future dystopia than Orwell’s. 1984 is too tied to its time of writing.
the project uses atomic weapons to do some of the engineering
Automatic non-starter.
Even if by some thermodynamic-tier miracle the Government permitted nuclear weapons for civilian use, I’d much rather they be used for Project Orion.
Isn’t that what Eliezer referred to as opti-meh-zation?
Previously on Less Wrong:
Steve Byrnes wrote a couple of posts exploring this idea of AGI via self-supervised, predictive models minimizing loss over giant, human-generated datasets:
I’d especially like to hear your thoughts on the above proposal of loss-minimizing a language model all the way to AGI.
I hope you won’t mind me quoting your earlier self as I strongly agree with your previous take on the matter:
If you train GPT-3 on a bunch of medical textbooks and prompt it to tell you a cure for Alzheimer’s, it won’t tell you a cure, it will tell you what humans have said about curing Alzheimer’s … It would just tell you a plausible story about a situation related to the prompt about curing Alzheimer’s, based on its training data. Rather than a logical Oracle, this image-captioning-esque scheme would be an intuitive Oracle, telling you things that make sense based on associations already present within the training set.
What am I driving at here, by pointing out that curing Alzheimer’s is hard? It’s that the designs above are missing something, and what they’re missing is search. I’m not saying that getting a neural net to directly output your cure for Alzheimer’s is impossible. But it seems like it requires there to already be a “cure for Alzheimer’s” dimension in your learned model. The more realistic way to find the cure for Alzheimer’s, if you don’t already know it, is going to involve lots of logical steps one after another, slowly moving through a logical space, narrowing down the possibilities more and more, and eventually finding something that fits the bill. In other words, solving a search problem.
So if your AI can tell you how to cure Alzheimer’s, I think either it’s explicitly doing a search for how to cure Alzheimer’s (or worlds that match your verbal prompt the best, or whatever), or it has some internal state that implicitly performs a search.
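To make the "explicitly doing a search" reading concrete, here's a purely illustrative sketch: candidate hypotheses are expanded step by step and scored by a learned model, rather than the model emitting an answer in one shot. The `score` and `expand` functions are hypothetical placeholders, not anyone's actual system.

```python
# Illustrative only: explicit search guided by a learned scorer,
# as opposed to a model that directly outputs an answer.
import heapq

def score(hypothesis: str) -> float:
    """Hypothetical learned model rating how promising a partial hypothesis is."""
    return -len(hypothesis)  # placeholder: prefer shorter hypotheses

def expand(hypothesis: str) -> list[str]:
    """Hypothetical generator of refinements (the 'logical steps')."""
    return [hypothesis + step for step in (" A", " B")]

def best_first_search(start: str, is_solution, max_steps: int = 100):
    """Narrow down a space of hypotheses step by step, best-scored first."""
    frontier = [(-score(start), start)]
    for _ in range(max_steps):
        if not frontier:
            return None
        _, hyp = heapq.heappop(frontier)
        if is_solution(hyp):
            return hyp
        for nxt in expand(hyp):
            heapq.heappush(frontier, (-score(nxt), nxt))
    return None

print(best_first_search("cure:", lambda h: h.count(" ") >= 3))
```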
“Story of our species. Everyone knows it’s coming, but not so soon.”
-Ian Malcolm, Jurassic Park by Michael Crichton.
LaMDA hasn’t been around for long
Yes, in time as perceived by humans.
why has no one corporation taken over the entire economy/business-world
Anti-trust laws?
Without them, this could very well happen.
Yes! Thank you!! :-D
I’ve got uBlock Origin. The hover preview works in private/incognito mode, but not in a regular window, even with uBlock turned off/uninstalled. For what it’s worth, uBlock doesn’t affect hover preview on Less Wrong, just Greater Wrong.
I’m positive the issue is with Firefox, so I’ll continue fiddling with the settings to see if anything helps.
Preview on hover has stopped working for me. Has the feature been removed?
I’m on Firefox/Linux, and I use the Greater Wrong version of the site.
It’s also an interesting example of where consequentialist and Kantian ethics would diverge.
The consequentialist would argue that it’s perfectly reasonable to lie (according to your understanding of reality) if it reduces the number of infants dying and suffering. Kant, as far as I understand, would argue that lying is unacceptable, even in such clear-cut circumstances.
Perhaps a Kantian would say that the consequentialist is actually increasing suffering by playing along with and encouraging a system of belief they know to be false. They may reduce infant mortality in the near-term, but the culture might feel vindicated in their beliefs and proceed to kill more suspected “witches” to speed up the process of healing children.
I think we’ll encounter civilization-ending biological weapons well before we have to worry about superintelligent AGI:
My assumption is that, for people with ASD, modelling human minds that are as far from their own as possible is playing the game on hard-mode. Manage that, and modelling average humans becomes relatively simple.
Williams Syndrome seems to me to be the opposite of paranoia rather than of autism: the individual creates a fictional account of another human’s mental state that’s positive rather than negative.
That’s to say, their ability to infer the mental states of other humans is worse than that of the typical human.
Not likely, but that’s because they’re probably not interested, at least when it comes to language models.
If OpenAI said they were developing some kind of autonomous robo superweapon or something, that would definitely get their attention.