My birds are singing the same tune.
Prometheus
Going to the moon
Say you’re really, really worried about humans going to the moon. Don’t ask why, but you view it as an existential catastrophe. And you notice people building bigger and bigger airplanes, and warn that one day, someone will build an airplane that’s so big, and so fast, that it veers off course and lands on the moon, spelling doom. Some argue that going to the moon takes intentionality. That you can’t accidentally create something capable of going to the moon. But you say “Look at how big those planes are getting! We’ve gone from small fighter planes, to bombers, to jets in a short amount of time. We’re on a double exponential of plane tech, and it’s just a matter of time before one of them will land on the moon!”
Contra Scheming AIs
There is a lot of attention on mesaoptimizers, deceptive alignment, and inner misalignment. I think a lot of this can fall under the umbrella of “scheming AIs”: AIs that either become dangerous during training and escape, or else play nice until humans make the mistake of deploying them. Many have spoken about the lack of any indication that there’s a “homunculus-in-a-box”, and this is usually met with arguments that we wouldn’t see such things manifest until AIs reach a certain level of capability, at which point it might be too late, with comparisons to owl eggs or baby dragons. My perception is that getting something like a “scheming AI” or “homunculus-in-a-box” isn’t impossible, and we could (and might) develop the means to do so in the future, but that it’s a very, very different kind of thing from current models (even at superhuman level), and that it would take a degree of intentionality.
“To the best of my knowledge, Vernor did not get cryopreserved. He has no chance to see the future he envisioned so boldly and imaginatively. The near-future world of Rainbows End is very nearly here… Part of me is upset with myself for not pushing him to make cryonics arrangements. However, he knew about it and made his choice.”
https://maxmore.substack.com/p/remembering-vernor-vinge
I agree that consequentialist reasoning is an assumption, and am divided about how consequentialist an ASI might be. Training a non-consequentialist ASI seems easier, and the way we train them seems to actually be optimizing against deep consequentialism (they’re rewarded for getting better with each incremental step, not for something that might only be better 100 steps in advance). But, on the other hand, humans don’t seem to have been heavily optimized for this either*, yet we’re capable of forming multi-decade plans (even if sometimes poorly).

*Actually, the Machiavellian Intelligence Hypothesis does seem to involve optimizing consequentialist reasoning (if I attack Person A, how will Person B react, etc.).
This is the kind of political reasoning that I’ve seen poisoning LW discourse lately and gets in the way of having actual discussions. Will posits essentially an impossibility proof (or, in its more humble form, a plausibility proof). I humor this being true, and state why the implications, even then, might not be what Will posits. The premise is based on alignment not being enough, so I operate on the premise of an aligned ASI, since the central claim is that “even if we align ASI it may still go wrong”. The premise grants that the duration of time it is aligned is long enough for the ASI to act in the world (it seems mostly timescale agnostic), so I operate on that premise. My points are not about what is most likely to actually happen, the possibility of less-than-perfect alignment being dangerous, the AI having other goals it might seek over the wellbeing of humans, or how we should act based on the information we have.
I’m not sure who you are debating here, but it doesn’t seem to be me.
First, I mentioned that this was an analogy, and mentioned that I dislike even using them, which I hope implied I was not making any kind of assertion of truth. Second, “works to protect” was not intended to mean “control all relevant outcomes of”. I’m not sure why you would get that idea, but that certainly isn’t what I think of first if someone says a person is “working to protect” something or someone. Soldiers defending a city from raiders are not violating control theory or the laws of physics. Third, the post is on the premise that “even if we created an aligned ASI”, so I was working with the premise that the ASI could be aligned in a way that it deeply cared about humans. Fourth, I did not assert that it would stay aligned over time… the story was all about the ASI not remaining aligned. Fifth, I really don’t think control theory is relevant here. Killing yourself to save a village does not break any laws of physics, and is well within most humans’ control.
My ultimate point, in case it was lost, was that if we as human intelligences could figure out an ASI would not stay aligned, an ASI could also figure it out. If we, as humans, would not want this (and the ASI was aligned with what we want), then the ASI presumably would also not want this. If we would want to shut down an ASI before it became misaligned, the ASI (if it wants what we want) would also want this.
None of this requires disassembling black holes, breaking the laws of physics, or doing anything outside of that entity’s control.
I’ve heard of many such cases of this from EA Funds (including myself). My impression is that they only had one person working full-time managing all three funds (no idea if this has changed since I applied or not).
An incapable man would kill himself to save the village. A more capable man would kill himself to save the village AND ensure no future werewolves are able to bite villagers again.
Though I tend to dislike analogies, I’ll use one, supposing it is actually impossible for an ASI to remain aligned. Suppose a villager cares a whole lot about the people in his village, and routinely works to protect them. Then, one day, he is bitten by a werewolf. He goes to the Shaman, who tells him that when the Full Moon rises again, he will turn into a monster and kill everyone in the village. His friends, his family, everyone. And he will no longer know himself. He is told there is no cure, and that the villagers would be unable to fight him off. He will grow too strong to be caged, and cannot be subdued or controlled once he transforms. What do you think he would do?
MIRI “giving up” on solving the problem was probably a net negative to the community, since it severely demoralized many young, motivated individuals who might have worked toward actually solving the problem. An excellent way to prevent pathways to victory is by convincing people those pathways are not attainable. A positive, I suppose, is that many have stopped looking to Yudkowsky and MIRI for the solutions, since it’s obvious they have none.
I don’t think this is the case. For a while, the post with the highest karma was Paul Christiano explaining all the reasons he thinks Yudkowsky is wrong.
Fair. What would you call a “mainstream ML theory of cognition”, though? Last I checked, they were doing purely empirical tinkering with no overarching theory to speak of (beyond the scaling hypothesis).
It tends not to get talked about much today, but there was the PDP (connectionist) camp of cognition vs. the camp of “everything else” (including ideas such as symbolic reasoning, etc). The connectionist camp created a rough model of how they thought cognition worked, a lot of cognitive scientists scoffed at it, Hinton tried putting it into actual practice, but it took several decades for it to be demonstrated to actually work. I think a lot of people were confused by why the “stack more layers” approach kept working, but under the model of connectionism, this is expected. Connectionism is kind of too general to make great predictions, but it doesn’t seem to allow for FOOM-type scenarios. It also seems to favor agents as local optima satisficers, instead of greedy utility maximizers.
“My position is that there are many widespread phenomena in human cognition that are expected according to my model, and which can only be explained by the more mainstream ML models either if said models are contorted into weird shapes, or if they engage in denialism of said phenomena.”
Such as? I wouldn’t call Shard Theory mainstream, and I’m not saying mainstream models are correct either. On humans trying to be consistent decision-makers, I have some theories about that (some of which are probably wrong). But judging by how bad humans are at it, and how much they struggle to do it, they probably weren’t optimized too strongly biologically to do it. But memetically, developing ideas for consistent decision-making was probably useful, so we have software that makes use of our processing power to be better at this, even if the hardware is very stubborn at times. But even that isn’t optimized too hard toward coherence. Someone might prefer pizza to hot dogs, but they probably won’t always choose pizza over any other food, just because they want their preference ordering of food to be consistent. And, sure, maybe what they “truly” value is something like health, but I imagine even if they didn’t, they still wouldn’t do this.
But all of this is still just one piece on the Jenga tower. And we could debate every piece in the tower, and even get 90% confidence that every piece is correct… but if there are more than 10 pieces on the tower, the whole thing is still probably going to come crashing down. (This is the part where I feel obligated to say, even though I shouldn’t have to, that your tower being wrong doesn’t mean “everything will be fine and we’ll be safe”, since the “everything will be fine” towers are looking pretty Jenga-ish too. I’m not saying we should just shrug our shoulders and embrace uncertainty. What I want is to build non-Jenga-ish towers.)
This isn’t what I mean. It doesn’t mean you’re not using real things to construct your argument, but that doesn’t mean the structure of the argument reflects something real. Like, I kind of imagine it looking something like a rationalist Jenga tower, where if one piece gets moved, it all crashes down. Except, by referencing other blog posts, it becomes a kind of Meta-Jenga: a Jenga tower composed of other Jenga towers. Like “Coherent decisions imply consistent utilities”. This alone I view to be its own mini Jenga tower. This is where I think String Theorists went wrong. It’s not that humans can’t, in theory, form good reasoning based on other reasoning based on other reasoning and actually arrive at the correct answer, it’s just that we tend to be really, really bad at it.
The sort of thing that would change my mind: there’s some widespread phenomenon in machine learning that perplexes most, but is expected according to your model, and any other model either doesn’t predict it as accurately, or is more complex than yours.
I dislike the overuse of analogies in the AI space, but to use your analogy, I guess it’s like you keep assigning a team of engineers to build a car, and two possible things happen. Possibility One: the engineers are actually building car engines, which gives us a lot of relevant information for how to build safe cars (torque, acceleration, speed, other car things), even if we don’t know all the details for how to build a car yet. Possibility Two: they are actually just building soapbox racers, which doesn’t give us much information for building safe cars, but also means that just tweaking how the engineers work won’t suddenly give us real race cars.
If progress in AI is continuous, we should expect record levels of employment. Not the opposite.
My mentality is if progress in AI doesn’t have a sudden, foom-level jump, and if we all don’t die, most of the fears of human unemployment are unfounded… at least for a while. Say we get AIs that can replace 90% of the workforce. The productivity surge from this should dramatically boost the economy, creating more companies, more trading, and more jobs. Since AIs can be copied, they would be cheap, abundant labor. This means anything a human can do that an AI still can’t becomes a scarce, highly valued resource. Companies with thousands or millions of AI instances working for them would likely compete for human labor, because making more humans takes much longer than making more AIs. Then say, after a few years, AIs are able to automate 90% of the remaining 10%. Then that creates even more productivity, more economic growth, and even more jobs. This could continue for even a few decades. Eventually, humans will be rendered completely obsolete, but by that point (most) of them might be so filthy rich that they won’t especially care.
This doesn’t mean it’ll all be smooth sailing or that humans will be totally happy with this shift. Some people probably won’t enjoy having to switch to a new career, only for that new career to be automated away after a few years, and then have to switch again. This will probably be especially true for people who are older, those who have families, those who want a stable and certain future, etc. None of this will be made easier by the fact that it’ll probably be hard to tell when true human obsolescence is on the horizon, so some might be in a state of perpetual anxiety, and others will be in constant denial.
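The compounding in the scenario above can be made concrete with a toy calculation (the numbers are purely illustrative, assuming each automation wave covers 90% of whatever tasks still require humans):

```python
# Toy model of successive automation waves: each wave automates
# 90% of the tasks that still require humans, so the human-only
# share shrinks geometrically (90% -> 99% -> 99.9% automated)
# but every wave still leaves a sliver of scarce human work.
human_share = 1.0
for wave in range(1, 4):
    human_share *= 0.10  # 90% of the remaining tasks get automated
    print(f"After wave {wave}: {human_share:.1%} of tasks still need humans")
```

Under these assumptions, the human-only slice never hits zero in any finite number of waves, which is the sense in which “this could continue for even a few decades” before full obsolescence.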
I think my main problem with this is that it isn’t based on anything. Countless times, you just reference other blog posts, which reference other blog posts, which reference nothing. I fear a whole lot of people thinking about alignment are starting to decouple themselves from reality. It’s starting to turn into the AI version of String Theory. You could be correct, but given the enormous number of assumptions your ideas are stacked on (and that even a few of those assumptions being wrong leads to completely different conclusions), the odds of you even being in the ballpark of correct seem unlikely.
At first I strong-upvoted this, because I thought it made a good point. However, upon reflection, that point is making less and less sense to me. You start by claiming current AIs provide nearly no data for alignment, that they are in a completely different reference class from human-like systems… and then you claim we can get such systems with just a few tweaks? I don’t see how you can go from a system that, you claim, provides almost no data for studying how an AGI would behave, to suddenly having a homunculus-in-a-box that becomes superintelligent and kills everyone. Homunculi seem really, really hard to build. By your characterization of how different actual AGI is from current models, it seems this would have to be fundamentally architecturally different from anything we’ve built so far. Not some kind of thing that would be created by near-accident.
Contra One Critical Try: AIs are all cursed
I don’t feel like making this a whole blog post, but my biggest source of optimism about why we won’t need to one-shot an aligned superintelligence is that anyone who’s trained AI models knows that AIs are unbelievably cursed. What do I mean by this? I mean even the first quasi-superintelligent AI we get will have so many problems and so many exploits that taking over the world will simply not be possible. Take a “superintelligence” that only had to beat humans at the very constrained game of Go, which is far simpler than the real world. Everyone talked about how such systems were unbeatable by humans, until some humans used a much “dumber” AI to find glaring holes in Leela Zero’s strategy. I expect that, in the far more complex “real world”, a superintelligence will have even more holes and even more exploits: a kind of “swiss cheese superintelligence”. You can say “but that’s not REAL superintelligence”, and I don’t care, and the AIs won’t care. But it’s likely the thing we’ll get first. Patching all of those holes, and finding ways to make such an ASI sufficiently not cursed, will also probably mean better understanding of how to stop it from wanting to kill us, if it wanted to kill us in the first place. I think we can probably get AIs that are sufficiently powerful in a lot of human domains, and can probably even self-improve, and still be cursed. The same way we have AIs with natural language understanding, something once thought to be a core component of human intelligence, that are still cursed. A cursed ASI is a danger for exploitation, but it’s also an opportunity.
It probably began training in January and finished around early April. And they’re now doing evals.