I wrote this earlier today. I post it here as a comment because there’s already a top-level post on the same topic.
Vernor Vinge, math professor at San Diego State University, hero of the science fiction community (a fan who eventually retired from his extremely good day job to write novels), science consultant, and major influence over the entire culture of the LW community, died due to Parkinson’s Disease on March 20th, 2024.
David Brin’s memoriam for Vinge is much better than mine, and I encourage you to read it. Vernor and David were colleagues and friends and that is a good place to start.
In 1993, Vernor published the non-fiction essay that coined the word “Singularity”.
In 1992, he published “A Fire Upon The Deep”, which gave us words like “godshatter”, a concept so taken for granted as “the limits of what a god can pack into a pile of atoms shaped like a human” that the linked essay doesn’t even define it.
As late as 2005 (or as early, if you are someone who thinks the current AI hype cycle came out of nowhere), Vernor was giving speeches about the Singularity. My memory is that the timelines had slipped a bit between 1993 and 2005, so that in mid-aughts face-to-face talks he would often include a line that echoed the older text and say:
I’ll be surprised if this event occurs before ~~2005~~ 2012 or after ~~2030~~ 2035.
Here in March 2024, I’d say that I’d be surprised if the event is publicly and visibly known to have happened before June 2024 or after ~2029.
(Von Foerster was more specific. He put the day that the GDP of Earth would theoretically become infinite on Friday, November 13, 2026. Even to me, this seems a bit much.)
Vernor Vinge will be missed with clarity now, but he was already missed by many, including me, because his last major work was Rainbows End in 2006, and by 2014 he had mostly retreated from public engagements.
He sometimes joked that many readers missed the missing apostrophe in the title, which made “Rainbows End” a sad assertion rather than a noun phrase about the place you find treasure. Each rainbow and all rainbows: end. They don’t go forever.
The last time I ever met him was at a Singularity Summit, back before SIAI changed its name to MIRI, and he didn’t recognize me, which I attributed to me simply being way way less important in his life than he was in mine… but I worried back then that maybe the cause was something less comforting than my own unimportance.
In Rainbows End, the protagonist, Robert Gu, awakens from a specific semi-random form of a neuro-degenerative brain disease (a subtype of Alzheimer’s, not of Parkinson’s) that, just before the singularity really takes off, has been cured.
(It turned out, in the novel, that the AI takeoff was quite slow and broad, so that advances in computing sprinkled “treasures” on people just before things really became unpredictable. Also, as might be the case in real life, in the story it was true that neither Alzheimer’s, nor aging in general, was one disease with one cause and one cure, but a complex of things going wrong, where each thing could be fixed, one specialized fix at a time. So Robert Gu awoke to “a fully working brain” (from his unique type of Alzheimer’s being fixed) and also woke up more than 50% of the way to having “aging itself” cured, and so he was in a weird patchwork state of being a sort of “elderly teenager”.)
Then the protagonist headed to High School, and fell into a situation where he helped Save The World, because this was a trope-alicious way for a story to go.
But also, since Vernor was aiming to write hard science fiction, where no cheat codes exist, heading to High School after being partially reborn was almost a sociologically and medically plausible therapy for an imminent-singularity-world to try on someone half-resurrected by technology (after being partially erased by a brain disease).
It makes some sense! That way they can re-integrate with society after waking up into the new and better society that could (from their perspective) reach back in time and “retroactively save them”! :-)
It was an extremely optimistic vision, really.
In that world, medicine was progressing fast, and social systems were cohesive and caring, and most of the elderly patients in America who lucked into having something that was treatable, were treated.
I have no special insight into the artistic choices here, but it wouldn’t surprise me if Vernor was writing about something close to home, already, back then.
I’m planning on re-reading that novel, but I expect it to be a bit heartbreaking in various ways.
I’ll be able to see it from knowing that in 2024 Vernor passed. I’ll be able to see it from learning in 2020 that the American Medical System is deeply broken (possibly irreparably so (where one is tempted to scrap it, and every durable institution causally upstream of it that still endorses what’s broken, so we can start over)). I’ll be able to see it in light of 2016, when History Started Going Off The Rails in the direction of dystopia. And I’ll be able to see Rainbows End in light of the 2024 US Presidential Election, which would be a pointless sideshow if it is not a referendum on the Singularity.
Vernor was an optimist, and I find such optimism more and more needed, lately.
I miss him, and I miss the optimism, and my missing of him blurs into missing optimism in general.
If we want literally everyone to get a happy ending, Parkinson’s Disease is just one tiny part of all the things we must fix, as part of Sir Francis Bacon’s Project aimed at “the effecting of all (good) things (physically) possible”.
Francis, Vernor, David, you (the reader), I (the author of this memoriam), and all the children you know, and all the children of Earth who were born in the last year, and every elderly person who has begun to suspect they know exactly how the reaper will reap them… we are all headed for the same place unless something in general is done (but really unless many specific things are done, one fix at a time...) and so, in my opinion, we’d better get moving.
Since science itself is big, there are lots of ways to help!
Fixing the world is an Olympian project, in more ways than one.
First, there is the obvious: “Citius, Altius, Fortius” is the motto of the Olympics, and human improvement and its celebration is a shared communal goal, celebrated explicitly since 2021 when the motto changed to “Citius, Altius, Fortius – Communiter” or “Faster, Higher, Stronger – Together”. Human excellence will hit a limit, but it is admirable to try to push our human boundaries.
Second, every Olympics starts and ends with a literal torch literally being carried. The torch’s fire is symbolically the light of Prometheus, standing for spirit, knowledge, and life. In each Olympic event the light is carried, by hand, from place to place, across the surface of the Earth, and across the generations. From those in the past, to us in the present, and then to those in the future. Hopefully it never ends. Also, we remember how it started.
Thirdly, the Olympics is a panhuman practice that goes beyond individuals and beyond governments and aims, if it aims for any definite thing, for the top of the mountain itself, though the top of the mountain is hidden in clouds that humans can’t see past, and dangerous to approach. Maybe some of us ascend, but even if not, we can imagine that the Olympians see our striving and admire it and offer us whatever help is truly helpful.
The last substantive talk I ever heard from Vernor was in a classroom on the SDSU campus in roughly 2009, with a bit over a dozen of us in the audience and he talked about trying to see to and through the Singularity, and he had lately become more interested in fantasy tropes that might be amenable to a “hard science fiction” treatment, like demonology (as a proxy for economics?) or some such. He thought that a key thing would be telling the good entities apart from the bad ones. Normally, in theology, this is treated as nearly impossible. Sometimes you get “by their fruits ye shall know them” but that doesn’t help prospectively. Some programmers nowadays advocate building the code from scratch, to do what it says on the tin, and have the label on the tin say “this is good”. In most religious contexts, you hear none of these proposals, but instead hear about leaps of faith and so on.
Vernor suggested a principle: The bad beings nearly always optimize for engagement, for pulling you ever deeper into their influence. They want to make themselves more firmly a part of your OODA loop. The good ones send you out, away from themselves in an open ended way, but better than before.
Vernor back then didn’t cite the Olympics, but as I think about torches being passed, and remember his advice, I still see very little wrong with the idea that a key aspect of benevolence involves sending people who seek your aid away from you, such that they are stronger, higher, faster, and more able to learn and improve the world itself, according to their own vision, using power they now own.
Ceteris paribus, inculcating deepening dependence on oneself, in others, is bad. This isn’t my “alignment” insight, but is something I got from Vernor.
I want the bulk of my words, here, to be about the bright light that was Vernor’s natural life, and his art, and his early and helpful and hopeful vision of a future, and not about the tragedy that took him from this world.
However, I also think it would be good and right to talk about the bad thing that took Vernor from us, and how to fix it, and so I have moved the “effortful tribute part of this essay” (a lit review and update on possible future cures for Parkinson’s Disease) to a separate follow-up post that will be longer and hopefully higher quality.
When I read this part of the letter, the authors seem to be throwing it in the face of the board like it is a damning accusation, but actually, as I read it, it seems very prudent and speaks well for the board.
Maybe I’m missing some context, but wouldn’t it be better for Open AI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor “aligned with humanity” (if we are somehow so objectively bad as to not deserve care by a benevolent powerful and very smart entity).
This reminds me a lot of a blockchain project where I served as the ethicist, which was initially a “project” interested in advancing a “movement” and ended up with a bunch of people whose only real goal was to cash big paychecks for a long time (at which point I handled my residual duties to the best of my ability and resigned, with lots of people expressing extreme confusion and asking why I was acting “foolishly” or “incompetently” (except for a tiny number who got angry at me for not causing a BIGGER explosion than just leaving, to let a normally venal company be normally venal without me)).
In my case, I had very little formal power. I bitterly regretted not having insisted, “as the ethicist”, on having the right to be informed of any board meeting >=36 hours in advance, to attend every one of them, and to speak at them.
(Maybe it is a continuing flaw of “not thinking I need POWER”, to say that I retrospectively should have had a vote on the Board? But I still don’t actually think I needed a vote. Most of my job was to keep saying things like “lying is bad” or “stealing is wrong” or “fairness is hard to calculate but bad to violate if clear violations of it are occurring” or “we shouldn’t proactively serve states that run gulags, we should prepare defenses, such that they respect us enough to explicitly request compliance first”. You know, the obvious stuff, that people only flinch from endorsing because a small part of each one of us, as a human, is a very narrowly selfish coward by default, and it is normal for us, as humans, to need reminders of context sometimes when we get so much tunnel vision during dramatic moments that we might commit regrettable evils through mere negligence.)
No one ever said that it is narrowly selfishly fun or profitable to be in Gethsemane and say “yes to experiencing pain if the other side who I care about doesn’t also press the ‘cooperate’ button”.
But to have “you said that ending up on the cross was consistent with being a moral leader of a moral organization!” flung in one’s face as an accusation suggests to me that the people making the accusation don’t actually understand that sometimes objective de re altruism hurts.
Maturely good people sometimes act altruistically, at personal cost, anyway because they care about strangers.
Clearly not everyone is “maturely good”.
That’s why we don’t select political leaders at random, if we are wise.
Now you might argue that AI is no big deal, and you might say that getting it wrong could never “kill literally everyone”.
Also, it is easy to imagine a lot of normally venal corporate people lying and saying “AI might kill literally everyone” when they don’t believe it, to people who do claim to believe it, if a huge paycheck will be given to them for their moderately skilled work contingent on them saying that...
...but if the stakes are really that big then NOT acting like someone who really DID believe that “AI might kill literally everyone” is much much worse than a lady on the side of the road looking helplessly at her broken car. That’s just one lady! The stakes there are much smaller!
The big things are MORE important to get right. Not LESS important.
To get the “win condition for everyone” would justify taking larger risks and costs than just parking by the side of the road and being late for where-ever you planned on going when you set out on the journey.
Maybe a person could say: “I don’t believe that AI could kill literally everyone, I just think that creating it is an opportunity to make a lot of money and secure power, and use that to survive the near-term liquidation of the proletariat when rambunctious human wage slaves are replaced by properly mind-controlled AI slaves”.
Or you could say something like “I don’t believe that AI is even that big a deal. This is just hype, and the stock valuations are gonna be really big but then they’ll crash and I urgently want to sell into the hype to greater fools because I like money and I don’t mind selling stuff I don’t believe in to other people.”
Whatever. Saying whatever you actually think is one of the three legs of the best definition of integrity that I currently know of.
(The full three criteria: non-impulsiveness, fairness, honesty.)
(Sauce. Italics and bold not in original.)
Compare this again:
The board could just be right about this.
It is an object level question about a fuzzy future conditional event, that ramifies through a lot of choices that a lot of people will make in a lot of different institutional contexts.
If Open AI’s continued existence ensures that artificial intelligence benefits all of humanity then its continued existence would be consistent with the mission.
If not, not.
What is the real fact of the matter here?
It’s hard to say, because it is about the future, but one way to figure out what a group will pursue is to look at what they are proud of, and what they SAY they will pursue.
Look at how the people fleeing into Microsoft argue in defense of themselves:
This is all MERE IMPACT. This is just the Kool-Aid that startup founders want all their employees to pretend to believe is the most important thing, because they want employees who work hard for low pay.
This is all just “stuff you’d put in your promo packet to get promoted at a FAANG in the mid teens when they were hiring like crazy, even if it was only 80% true, that ‘everyone around here’ agrees with (because everyone on your team is ALSO going for promo)”.
Their statement didn’t mention “humanity” even once.
Their statement didn’t mention “ensuring” that “benefits” go to “all of humanity” even once.
Microsoft’s management has made no similar promise about benefiting humanity in the formal text of its founding, and gives every indication of having no particular scruples or principles or goals larger than a stock price and maybe some executive bonuses or stock buy-back deals.
As is valid in a capitalist republic! That kind of culture, and that kind of behavior, does have a place in it for private companies that manufacture and sell private goods to individuals who can freely choose to buy those products.
You don’t have to be very ethical to make and sell hammers or bananas or toys for children.
However, it is baked into the structure of Microsoft’s legal contracts and culture that it will never purposefully make a public good that it knowingly loses a lot of money on SIMPLY because “the benefits to everyone else (even if Microsoft can’t charge for them) are much much larger”.
Open AI has a clear telos and Microsoft has a clear telos as well.
I admire the former more than the latter, especially for something as important as possibly creating a Demon Lord, or a Digital Leviathan, or “a replacement for nearly all human labor performed via arm’s length transactional relations”, or whatever you want to call it.
There are few situations in normal everyday life where the plausible impacts are not just economic, not just political, and not EVEN “just” evolutionary!
This is one of them. Most complex structures in the solar system right now were created, ultimately, by evolution. After AGI, most complex structures will probably be created by algorithms.
Evolution itself is potentially being overturned.
Software is eating the world.
“People” are part of the world. “Things you care about” are part of the world.
There is no special carveout for cute babies, or picnics, or choirs, or waltzing with friends, or 20th wedding anniversaries, or taking ecstasy at a rave, or ANYTHING HUMAN.
All of those things are in the world, and unless something prevents that natural course of normal events from doing so: software will eventually eat them too.
I don’t see Microsoft, or the people fleeing to Microsoft, taking that seriously, with serious language that endorses coherent moral ideals in ways that can be directly related to the structural features of institutional arrangements, so as to cause good outcomes for humanity on purpose.
Maybe there is a deeper wisdom there?
Maybe they are secretly saying petty things, even as they secretly plan to do something really importantly good for all of humanity?
Most humans are quite venal and foolish, and highly skilled impression management is a skill that politicians and leaders would be silly to ignore.
But it seems reasonable to me to take both sides at their word.
One side talks and walks like a group that is self-sacrificingly willing to do what it takes to ensure that artificial general intelligence benefits all of humanity and the other side is just straightforwardly not.