Not least being the military implications. If you have widely available tech that lets you quickly and cheaply accelerate something car-sized to a velocity of Mach Fuck (they’re meant to circle the earth in 4.2 hours, making them 2 or 3 times faster than a rifle bullet), that’s certainly a dual-use technology.
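For what it’s worth, a quick back-of-the-envelope check of that comparison (the circumference and rifle muzzle velocity figures below are my own rough assumptions, not from the original claim):

```python
# Rough sanity check: how does "circle the earth in 4.2 hours" compare to a rifle bullet?
# The circumference and muzzle velocity are approximate assumed figures.
earth_circumference_km = 40_075                      # equatorial circumference, roughly
orbit_time_h = 4.2                                   # claimed time to go around the earth
vehicle_speed_kmh = earth_circumference_km / orbit_time_h   # ~9,500 km/h

rifle_muzzle_velocity_ms = 900                       # typical rifle round, ~900 m/s
rifle_speed_kmh = rifle_muzzle_velocity_ms * 3.6     # ~3,240 km/h

print(vehicle_speed_kmh / rifle_speed_kmh)           # ~2.9, i.e. "2 or 3 times faster"
```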
Covid was a big learning experience for me, but I’d like to think about more than one example. Covid is interesting because, compared to my examples of birth control and animal-free meat, it seems like with Covid humanity smashed the technical problem out of the park, but still overall failed by my lights because of the political situation.
How likely does it seem that we could get full marks on solving alignment but still fail due to politics? I tend to think of building a properly aligned AGI as a straightforward win condition, but that’s not a very deeply considered view. I guess we could solve it on a whiteboard somewhere, but for political reasons it doesn’t get implemented in time?
I think almost all of these are things that I’d only think after I’d already noticed confusion, and most are things I’d never say in my head anyway. A little way into the list I thought, ‘Wait, did he just ask ChatGPT for different ways to say “I’m confused”?’
I expect there are things that pop up in my inner monologue when I’m confused about something, that I wouldn’t notice, and it would be very useful to have a list of such phrases, but your list contains ~none of them.
Edit: Actually the last three are reasonable. Are they human written?
One way of framing the difficulty with the lanternflies thing is that the question straddles the is-ought gap. It decomposes pretty cleanly into two questions: “What states of the universe are likely to result from me killing vs not killing lanternflies?” (about which Bayes’ Rule fully applies and is enormously useful), and “Which states of the universe do I prefer?”, where the only evidence you have will come from things like introspection about your own moral intuitions and values. Your values are also a fact about the universe, because you are part of the universe, so Bayes still applies I guess, but it’s quite a different question to think about.
If you have well-defined values, for example some function from states (or histories) of the universe to real numbers, such that larger numbers represent universe states that you would always prefer over smaller numbers, then every “should I do X or Y” question has an answer in terms of those values. In practice we’ll never have that, but it’s still worth thinking separately about “What are the expected consequences of the proposed policy?” and “What consequences do I want?”, which a ‘should’ question implicitly mixes together.
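To make that separation concrete, here’s a toy sketch (all outcome names, probabilities, and utilities below are invented for illustration): the ‘is’ part is a model of what each action leads to, the ‘ought’ part is a function from universe states to numbers, and the ‘should’ answer just combines the two.

```python
# Toy sketch of separating the "is" question from the "ought" question.
# Every name and number here is a made-up illustration.

# "Is": what states of the universe do I expect from each action?
consequences = {
    "kill_lanternflies": {"fewer_lanternflies": 0.9, "no_change": 0.1},
    "leave_them_alone":  {"fewer_lanternflies": 0.1, "no_change": 0.9},
}

# "Ought": a (toy) function from universe states to real numbers.
utility = {"fewer_lanternflies": 2.0, "no_change": 1.0}

def expected_utility(action: str) -> float:
    """Expected value of the utility function under the predicted outcomes."""
    return sum(p * utility[state] for state, p in consequences[action].items())

# The "should" question, answered in terms of the two parts above.
best_action = max(consequences, key=expected_utility)
print(best_action, expected_utility(best_action))
```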
I’ve always thought of it like this: it doesn’t rely on the universe being computable, just on the universe having a computable approximation. So if the universe is computable, SI does perfectly; if it’s not, SI does as well as any algorithm could hope to.
A slightly surreal experience to read a post saying something I was just tweeting about, written by a username that could plausibly be mine.
Do we even need a whole new term for this? Why not “Sudden Deceptive Alignment”?
I think in some significant subset of such situations, almost everyone present is aware of the problem, so you don’t always have to describe the problem yourself or explicitly propose solutions (which can seem weird from a power dynamics perspective). Sometimes just drawing the group’s attention to the meta level at all, initiating a meta-discussion, is sufficient to allow the group to fix the problem.
This is good and interesting. Various things to address, but I only have time for a couple at random.
I disagree with the idea that true things necessarily have explanations that are both convincing and short. In my experience you can give a short explanation that doesn’t address everyone’s reasonable objections, or a very long one that does, or something in between. If you understand some specific point about cutting-edge research, you should be able to properly explain it to a lay person, but by the time you’re done they won’t be a lay person any more! If you restrict your explanation to “things you can cover before the person you’re explaining to decides this isn’t worth their time and goes away”, many concepts simply cannot ever be explained to most people, because they don’t really want to know.
So the core challenge is staying interesting enough for long enough to actually get across all of the required concepts. On that point, have you seen any of my videos, and do you have thoughts on them? You can search “AI Safety” on YouTube.
Similarly, do you have thoughts on AISafety.info?
Are we not already doing this? I thought we were already doing this. See, for example, this talk I gave in 2018:
https://youtu.be/pYXy-A4siMw?t=35
I guess we can’t be doing it very well though
Structured time boxes seem very suboptimal. Steamrollering is easy enough for a moderator to deal with: “Ok, let’s pause there for X to respond to that point.”
This would make a great YouTube series
Edit: I think I’m going to make this a YouTube series
Other tokens that require modelling more than a human:
- The results sections of scientific papers: predicting these requires modelling whatever the experiment was about. If humans could do this, they wouldn’t have needed to run the experiment.
- Records of stock price movements: in principle, getting zero loss on these requires insanely high levels of capability.
Compare with this from Meditations on Moloch:
Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced. So you shock yourself for eight hours a day, because you know if you don’t everyone else will kill you, because if they don’t, everyone else will kill them, and so on.
Seems to me a key component here, which flows naturally from “punish any deviation from the profile”, is this pattern of ‘punishment of non-punishers’.
The historical trends thing is prone to standard reference class tennis. Arguments like “Every civilisation has collapsed, why would ours be special? Something will destroy civilisation, how likely is it that it’s AI?”. Or “Almost every species has gone extinct. Something will wipe us out, could it be AI?”. Or even “Every species in the genus Homo has been wiped out, and the overwhelmingly most common cause is ‘another species in the genus Homo’, so probably we’ll do it to ourselves. What methods do we have available?”.
These don’t point to AI particularly; they remove the unusual-seemingness of doom in general.
Oh, I missed that! Thanks. I’ll delete it, I guess.
This is an interesting post!
Haha! haaaaa 😢