Is there any way to view the mode?
Chris_Leong
Feel free to reply to this comment with any suggestions about other graphs that I should consider including.
Selected Graphics Showing Progress towards AGI
I strongly recommend Zvi’s post on Slack.
Perhaps. I expected there to be massively more donor interest after the CAIS letter, but it didn’t really seem to eventuate.
At its worst it can be, but I’d encourage you to reflect on the second quote:
Society would remember the Holocaust differently if there were no survivors to tell the story, but only data, records and photographs. The stories of victims and survivors weave together the numbers to create a truth that is tangible to the human experience…
Great post!
I really appreciate proposals that are both pragmatic and ambitious, and this post is both!
I guess the closest thing there is to a CEA for AI Safety is Kairos. However, they decided to focus explicitly on student groups[1].
- ^
SPAR isn’t limited to students, but it is very much in line with this by providing, “research mentorship for early-career individuals in AI safety”.
- ^
I think he’s clearly had a narrative he wanted to spin and he’s being very defensive here.
If I wanted to steelman his position, I would do so as follows (low-confidence and written fairly quickly):
I expect he believes his framing and that he feels fairly confident in it because most of the people he respects also adopt this framing.
In so far as his own personal views make it into the article, I expect he believes that he’s engaging in a socially acceptable amount of editorializing. In fact, I expect he believes that editorializing the article in this way is more socially responsible than not, likely due to the role of journalism being something along the lines of “critiquing power”.
Further, whilst I expect he wouldn’t universally endorse “being socially acceptable among journalists” as guaranteeing that something is moral, he’d likely defend it as a strongly reliable heuristic, such that it would take pretty strong arguments to justify departing from this.
Whilst he likely endorses some degree of objectivity (in terms of getting facts correct), I expect that he also sees neutrality as overrated by old-school journalists. I expect he believes that it limits the ability of journalists to steer the world towards positive outcomes. That is, he treats neutrality more as a consideration that can be overridden than as a rule.
I almost agree-voted this, then read the comments below and disagree-voted it instead.
Fascinating work. I’m keen to hear more about the belief set of this opposing cluster.
You’re misunderstanding the language game.
Do you think Wiki pages might be less important with LLMs these days? Also, I just don’t end up on Wiki pages as often; I’m wondering whether Google stopped prioritizing them so heavily.
Is there any chance you could define what you mean by “open agency”? Do you essentially mean “distributed agency”?
Placeholder for an experimental art project — Under construction 🚧[1]
Anything can be art, it might just be bad art — Millie Florence
Art in the Age of the Internet
The medium is the message — Marshall McLuhan, Media Theorist
Hypertext is not a technology, it is a way of thinking — ChatGPT 5[2]
𝕯𝖔𝖔𝖒
𝒽𝑜𝓌 𝓉𝑜 𝒷𝑒𝑔𝒾𝓃? 𝓌𝒽𝒶𝓉 𝒶𝒷𝑜𝓊𝓉 𝒶𝓉 𝕿𝖍𝖊 𝕰𝖓𝖉?[5]
𝕿𝖍𝖊 𝕰𝖓𝖉? 𝕚𝕤 𝕚𝕥 𝕣𝕖𝕒𝕝𝕝𝕪 𝕿𝖍𝖊 𝕰𝖓𝖉?
𝓎𝑒𝓈. 𝒾𝓉 𝒾𝓈 𝕿𝖍𝖊 𝕰𝖓𝖉. 𝑜𝓇 𝓂𝒶𝓎𝒷𝑒 𝒯𝒽ℯ 𝐵ℯℊ𝒾𝓃𝓃𝒾𝓃ℊ.
𝓌𝒽𝒶𝓉𝑒𝓋𝑒𝓇 𝓉𝒽𝑒 𝒸𝒶𝓈𝑒, 𝒾𝓉 𝒾𝓈 𝒶𝓃 𝑒𝓃𝒹.[6]
Ilya: The AI scientist shaping the world
Now AI is a great thing, because AI will solve all the problems that we have today. It will solve employment, it will solve disease, it will solve poverty, but it will also create new problems...
The problem of fake news is going to be a million times worse, cyber attacks will become much more extreme, we will have totally automated AI weapons. I think AI has the potential to create infinitely stable dictatorships...❦[7] ❦ I feel technology is a force of nature...
Because the way I imagine it is that there is an avalanche, like there is an avalanche of AGI development. Imagine this huge unstoppable force...
And I think it’s pretty likely the entire surface of the earth will be covered with solar panels and data centers.
❦ The future will be good for the AIs regardless, it would be nice if it were good for humans as well
❦ ❦ ❦ Journal
𝗠𝗶𝘁𝗶𝗴𝗮𝘁𝗶𝗻𝗴 𝘁𝗵𝗲 𝗿𝗶𝘀𝗸 𝗼𝗳 𝗲𝘅𝘁𝗶𝗻𝗰𝘁𝗶𝗼𝗻 𝗳𝗿𝗼𝗺 𝗔𝗜 𝘀𝗵𝗼𝘂𝗹𝗱 𝗯𝗲 𝗮 𝗴𝗹𝗼𝗯𝗮𝗹 𝗽𝗿𝗶𝗼𝗿𝗶𝘁𝘆 𝗮𝗹𝗼𝗻𝗴𝘀𝗶𝗱𝗲 𝗼𝘁𝗵𝗲𝗿 𝘀𝗼𝗰𝗶𝗲𝘁𝗮𝗹-𝘀𝗰𝗮𝗹𝗲 𝗿𝗶𝘀𝗸𝘀 𝘀𝘂𝗰𝗵 𝗮𝘀 𝗽𝗮𝗻𝗱𝗲𝗺𝗶𝗰𝘀 𝗮𝗻𝗱 𝗻𝘂𝗰𝗹𝗲𝗮𝗿 𝘄𝗮𝗿.
Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Dario Amodei, Bill Gates, Ilya Sutskever…
There’s No Rule That Says We’ll Make It — Rob Miles
MIRI announces new “Death With Dignity” strategy, April 2nd, 2022
Well, let’s be frank here. MIRI didn’t solve AGI alignment and at least knows that it didn’t. Paul Christiano’s incredibly complicated schemes have no chance of working in real life before DeepMind destroys the world. Chris Olah’s transparency work, at current rates of progress, will at best let somebody at DeepMind give a highly speculative warning about how the current set of enormous inscrutable tensors, inside a system that was recompiled three weeks ago and has now been training by gradient descent for 20 days, might possibly be planning to start trying to deceive its operators.
Management will then ask what they’re supposed to do about that.
Whoever detected the warning sign will say that there isn’t anything known they can do about that. Just because you can see the system might be planning to kill you, doesn’t mean that there’s any known way to build a system that won’t do that. Management will then decide not to shut down the project—because it’s not certain that the intention was really there or that the AGI will really follow through, because other AGI projects are hard on their heels, because if all those gloomy prophecies are true then there’s nothing anybody can do about it anyways. Pretty soon that troublesome error signal will vanish.
When Earth’s prospects are that far underwater in the basement of the logistic success curve, it may be hard to feel motivated about continuing to fight, since doubling our chances of survival will only take them from 0% to 0%.
That’s why I would suggest reframing the problem—especially on an emotional level—to helping humanity die with dignity, or rather, since even this goal is realistically unattainable at this point, die with slightly more dignity than would otherwise be counterfactually obtained...
Three Quotes on Transformative Technology
But the moral considerations, Doctor...
Did you and the other scientists not stop to consider the implications of what you were creating? — Roger Robb
When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb — Oppenheimer
❦ There are moments in the history of science, where you have a group of scientists look at their creation and just say, you know: ’What have we done?… Maybe it’s great, maybe it’s bad, but what have we done? — Sam Altman
❦ Urgent: get collectively wiser — Yoshua Bengio, AI “Godfather”
✒️ Selected Quotes:
We stand at a crucial moment in the history of our species. Fueled by technological progress, our power has grown so great that for the first time in humanity’s long history, we have the capacity to destroy ourselves—severing our entire future and everything we could become.
Yet humanity’s wisdom has grown only falteringly, if at all, and lags dangerously behind. Humanity lacks the maturity, coordination and foresight necessary to avoid making mistakes from which we could never recover. As the gap between our power and our wisdom grows, our future is subject to an ever-increasing level of risk. This situation is unsustainable. So over the next few centuries, humanity will be tested: it will either act decisively to protect itself and its long-term potential, or, in all likelihood, this will be lost forever — Toby Ord, The Precipice
We have created a Star Wars civilization, with Stone Age emotions, medieval institutions, and godlike technology — Edward O. Wilson, The Social Conquest of Earth
❦ Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct — Nick Bostrom, Founder of the Future of Humanity Institute, Superintelligence
❦ If we continue to accumulate only power and not wisdom, we will surely destroy ourselves — Carl Sagan, Pale Blue Dot
Never has humanity had such power over itself, yet nothing ensures that it will be used wisely, particularly when we consider how it is currently being used… There is a tendency to believe that every increase in power means “an increase of ‘progress’ itself”, an advance in “security, usefulness, welfare and vigour; …an assimilation of new values into the stream of culture”, as if reality, goodness and truth automatically flow from technological and economic power as such. — Pope Francis, Laudato si’
❦ The fundamental test is how wisely we will guide this transformation – how we minimize the risks and maximize the potential for good — António Guterres, Secretary-General of the United Nations
❦ Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins — Stephen Hawking, Brief Answers to the Big Questions
❦ ❤️🔥 Desires
𝓈𝑜𝓂𝑒𝓉𝒾𝓂𝑒𝓈 𝐼 𝒿𝓊𝓈𝓉 𝓌𝒶𝓃𝓉 𝓉𝑜 𝓂𝒶𝓀ℯ 𝒜𝓇𝓉
𝕥𝕙𝕖𝕟 𝕞𝕒𝕜𝕖 𝕚𝕥
𝒷𝓊𝓉 𝓉𝒽ℯ 𝓌𝑜𝓇𝓁𝒹 𝒩𝐸𝐸𝒟𝒮 𝒮𝒶𝓋𝒾𝓃ℊ...
𝕪𝕠𝕦 𝕔𝕒𝕟 𝓈𝒶𝓋ℯ 𝕚𝕥?
𝐼… 𝐼 𝒸𝒶𝓃 𝒯𝓇𝓎...
Effective altruism in the garden of ends
No – I will eat, sleep, and drink well to feel alive; so too will I love and dance as well as help.
Hope
❦ Scraps
Ilya Sutskever
“It had taken Sutskever years to be able to put his finger on Altman’s pattern of behavior—how OpenAI’s CEO would tell him one thing, then say another and act as if the difference was an accident. ‘Oh, I must have misspoken,’ Altman would say. Sutskever felt that Altman was dishonest and causing chaos, which would be a problem for any CEO, but especially for one in charge of such potentially civilization-altering technology.”
Ilya Sutskever, once widely regarded as perhaps the most brilliant mind at OpenAI, voted in his capacity as a board member last November to remove Sam Altman as CEO. The move was unsuccessful, in part because Sutskever reportedly bowed to pressure from his colleagues and reversed his vote. After those fateful events, Sutskever disappeared from OpenAI’s offices so noticeably that memes began circulating online asking what had happened to him. Finally, in May, Sutskever announced he had stepped down from the company.
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.
- ^
⇢ Note to self: My previous project had too much meta-commentary and this may have undermined the sincerity, so I should probably try to minimise meta-commentary.
⇢ “You’re going to remove this in the final version, right?” — Maybe.
- ^
“But you can’t quote ChatGPT 😠!”—Internet Troll ÷
- ^
“I would say the flaw of Xanadu’s UI was treating transclusion as ‘horizontal’ and side-by-side” — Gwern 🙃
- ^
“StretchText is a hypertext feature that has not gained mass adoption in systems like the World Wide Web… StretchText is similar to outlining, however instead of drilling down lists to greater detail, the current node is replaced with a newer node”—Wikipedia
This ‘stretching’ to increase the amount of writing, or contracting to decrease it, gives the feature its name. This is analogous to zooming in to get more detail.
Ted Nelson coined the term c. 1967.
Conceptually, StretchText is similar to existing hypertext systems where a link provides a more descriptive or exhaustive explanation of something, but there is a key difference between a link and a piece of StretchText. A link completely replaces the current piece of hypertext with the destination, whereas StretchText expands or contracts the content in place. Thus, the existing hypertext serves as context.
⇢ “This isn’t a proper implementation of StretchText” — Indeed.
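For concreteness, here is a minimal sketch of that expand-in-place behaviour, written in TypeScript against the browser DOM. The `data-short`/`data-long` attribute convention and every identifier below are illustrative assumptions, not how this page is actually built.

```typescript
// Hypothetical StretchText sketch: clicking an annotated span swaps a short
// summary for a longer passage in place, so the surrounding text remains as
// context. The data-short/data-long attribute names are made up for this demo.
function makeStretchText(el: HTMLElement): void {
  const short = el.dataset.short ?? el.textContent ?? "";
  const long = el.dataset.long ?? short;
  let expanded = false;

  el.textContent = short;
  el.style.cursor = "pointer";
  el.addEventListener("click", () => {
    expanded = !expanded;
    // Unlike following a link, nothing navigates away: the node's own text
    // stretches or contracts while the rest of the page stays untouched.
    el.textContent = expanded ? long : short;
  });
}

// Wire up every span marked with both attributes.
document
  .querySelectorAll<HTMLElement>("[data-short][data-long]")
  .forEach(makeStretchText);
```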
- ^
In defence of Natural Language DSLs — Connor Leahy
- ^
Did this conversation really happen? — 穆
- ^
⇢ “Sooner or later, everything old is new again” — Stephen King
⇢ “Therefore if any man be in Christ, he is a new creature: old things are passed away; behold, all things have become new.” — 2 Corinthians 5:17
- ^
Redirect the search?
You mean retarget the search as per John Wentworth’s proposal?
Big actors already have every advantage, why wouldn’t they be able to defend themselves?
I’m worried that the offense-defense balance leans strongly towards the attacker. What are your thoughts here?
I agree, this is the obvious solution… as long as you put your hands over your ears and shout “I can’t hear you, I can’t hear you” whenever the topic of misuse risks comes up...
Otherwise, there are some quite thorny problems. Maybe you’re ultimately correct about open source being the path forward, but it’s far from obvious.
Three Quotes on Transformative Technology
What is the SUV Triad?
Sorry, this is some content that I had in my short-form Why the focus on wise AI advisors?. The SUV Triad is described there. I was persuaded by Professor David Manly that I didn’t need to argue for Disaster-By-Default in order to justify wise AI advisors and that focusing too much on this aspect would simply cause me to lose people, so I needed somewhere to paste this content.
I just clicked “Remove from Frontpage”. I’m unsure if it does anything for short-form posts though.
Also, the formatting on this is wild, what’s the context for that?
Just experimenting to see what’s possible. Copied it directly from that post, haven’t had time to rethink the formatting yet now that it is its own post. Nowhere near as wild as it gets in the main post though!
Thanks for the suggestions. I reordered the graphs to tell a clearer narrative.