Placeholder for an experimental art project — Under construction 🚧[1]
Anything can be art, it might just be bad art — Millie Florence
Art in the Age of the Internet
The medium is the message — Marshall McLuhan, Media Theorist
Hypertext is not a technology, it is a way of thinking — ChatGPT 5[2]
Writing is the process of reducing a tapestry of interconnections to a narrow sequence. This is, in a sense, illicit. This is a wrongful compression of what should spread out, and today’s computers, they’ve betrayed that — Ted Nelson, founder of Project Xanadu[3][4]
Now AI is a great thing, because AI will solve all the problems that we have today. It will solve employment, it will solve disease, it will solve poverty, but it will also create new problems...
The problem of fake news is going to be a million times worse, cyber attacks will become much more extreme, we will have totally automated AI weapons. I think AI has the potential to create infinitely stable dictatorships...
Well, let’s be frank here. MIRI didn’t solve AGI alignment and at least knows that it didn’t. Paul Christiano’s incredibly complicated schemes have no chance of working in real life before DeepMind destroys the world. Chris Olah’s transparency work, at current rates of progress, will at best let somebody at DeepMind give a highly speculative warning about how the current set of enormous inscrutable tensors, inside a system that was recompiled three weeks ago and has now been training by gradient descent for 20 days, might possibly be planning to start trying to deceive its operators.
Management will then ask what they’re supposed to do about that.
Whoever detected the warning sign will say that there isn’t anything known they can do about that. Just because you can see the system might be planning to kill you, doesn’t mean that there’s any known way to build a system that won’t do that. Management will then decide not to shut down the project—because it’s not certain that the intention was really there or that the AGI will really follow through, because other AGI projects are hard on their heels, because if all those gloomy prophecies are true then there’s nothing anybody can do about it anyways. Pretty soon that troublesome error signal will vanish.
When Earth’s prospects are that far underwater in the basement of the logistic success curve, it may be hard to feel motivated about continuing to fight, since doubling our chances of survival will only take them from 0% to 0%.
That’s why I would suggest reframing the problem—especially on an emotional level—to helping humanity die with dignity, or rather, since even this goal is realistically unattainable at this point, die with slightly more dignity than would otherwise be counterfactually obtained...
Three Quotes on Transformative Technology
But the moral considerations, Doctor...
Did you and the other scientists not stop to consider the implications of what you were creating? — Roger Robb
When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb — Oppenheimer
❦
There are moments in the history of science where you have a group of scientists look at their creation and just say, you know: ‘What have we done?… Maybe it’s great, maybe it’s bad, but what have we done?’ — Sam Altman
❦
Urgent: get collectively wiser — Yoshua Bengio, AI “Godfather”
✒️ Selected Quotes:
We stand at a crucial moment in the history of our species. Fueled by technological progress, our power has grown so great that for the first time in humanity’s long history, we have the capacity to destroy ourselves—severing our entire future and everything we could become.
Yet humanity’s wisdom has grown only falteringly, if at all, and lags dangerously behind. Humanity lacks the maturity, coordination and foresight necessary to avoid making mistakes from which we could never recover. As the gap between our power and our wisdom grows, our future is subject to an ever-increasing level of risk. This situation is unsustainable. So over the next few centuries, humanity will be tested: it will either act decisively to protect itself and its long-term potential, or, in all likelihood, this will be lost forever — Toby Ord, The Precipice
We have created a Star Wars civilization, with Stone Age emotions, medieval institutions, and godlike technology — Edward O. Wilson, The Social Conquest of Earth
❦
Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct — Nick Bostrom, Founder of the Future of Humanity Institute, Superintelligence
❦
If we continue to accumulate only power and not wisdom, we will surely destroy ourselves — Carl Sagan, Pale Blue Dot
Never has humanity had such power over itself, yet nothing ensures that it will be used wisely, particularly when we consider how it is currently being used… There is a tendency to believe that every increase in power means “an increase of ‘progress’ itself”, an advance in “security, usefulness, welfare and vigour; …an assimilation of new values into the stream of culture”, as if reality, goodness and truth automatically flow from technological and economic power as such. — Pope Francis, Laudato si’
❦
The fundamental test is how wisely we will guide this transformation – how we minimize the risks and maximize the potential for good — António Guterres, Secretary-General of the United Nations
❦
Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins — Stephen Hawking, Brief Answers to the Big Questions
No – I will eat, sleep, and drink well to feel alive; so too will I love and dance as well as help.
Hope
❦
Scraps
Ilya Sutskever
“It had taken Sutskever years to be able to put his finger on Altman’s pattern of behavior—how OpenAI’s CEO would tell him one thing, then say another and act as if the difference was an accident. ‘Oh, I must have misspoken,’ Altman would say. Sutskever felt that Altman was dishonest and causing chaos, which would be a problem for any CEO, but especially for one in charge of such potentially civilization-altering technology.”
Ilya Sutskever, once widely regarded as perhaps the most brilliant mind at OpenAI, voted in his capacity as a board member last November to remove Sam Altman as CEO. The move was unsuccessful, in part because Sutskever reportedly bowed to pressure from his colleagues and reversed his vote. After those fateful events, Sutskever disappeared from OpenAI’s offices so noticeably that memes began circulating online asking what had happened to him. Finally, in May, Sutskever announced he had stepped down from the company.
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.
⇢ Note to self: My previous project had too much meta-commentary and this may have undermined the sincerity, so I should probably try to minimise meta-commentary.
⇢ “You’re going to remove this in the final version, right?” — Maybe.
“StretchText is a hypertext feature that has not gained mass adoption in systems like the World Wide Web… StretchText is similar to outlining, however instead of drilling down lists to greater detail, the current node is replaced with a newer node”—Wikipedia
This ‘stretching’ to increase the amount of writing, or contracting to decrease it, gives the feature its name; it is analogous to zooming in to get more detail. Ted Nelson coined the term c. 1967.
Conceptually, StretchText is similar to existing hypertext systems, where a link provides a more descriptive or exhaustive explanation of something, but there is a key difference between a link and a piece of StretchText: a link completely replaces the current piece of hypertext with its destination, whereas StretchText expands or contracts the content in place. Thus, the existing hypertext serves as context.
⇢ “This isn’t a proper implementation of StretchText” — Indeed.
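⇢ For the curious: a minimal sketch of the in-place expand/contract idea described above, written in plain TypeScript against the browser DOM. The helper name, the element id, and the summary/detail strings are illustrative assumptions, not taken from this project.

```typescript
// Sketch of StretchText-style in-place expansion (illustrative only).
// A "stretchable" element swaps between a short summary and a longer detail,
// so the text around it stays on screen as context; a link, by contrast,
// replaces the whole page with its destination.

interface StretchNode {
  summary: string; // collapsed text
  detail: string;  // expanded text
}

function makeStretchText(el: HTMLElement, node: StretchNode): void {
  let expanded = false;
  el.textContent = node.summary;
  el.style.cursor = "pointer";
  el.addEventListener("click", () => {
    expanded = !expanded;
    // Swap the node's content in place; everything around it is untouched.
    el.textContent = expanded ? node.detail : node.summary;
  });
}

// Hypothetical usage: assumes <span id="stretch-demo"></span> exists in the page.
const demo = document.getElementById("stretch-demo");
if (demo) {
  makeStretchText(demo, {
    summary: "Doom?",
    detail: "Doom? Maybe. Or maybe a beginning. Either way, the surrounding text never goes away.",
  });
}
```

Because the swap happens in place rather than navigating away, the surrounding hypertext keeps serving as context, which is the property the description above emphasises.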
𝕯𝖔𝖔𝖒؟
𝒽𝑜𝓌 𝓉𝑜 𝒷𝑒𝑔𝒾𝓃? 𝓌𝒽𝒶𝓉 𝒶𝒷𝑜𝓊𝓉 𝒶𝓉 𝕿𝖍𝖊 𝕰𝖓𝖉?[5]
𝕿𝖍𝖊 𝕰𝖓𝖉? 𝕚𝕤 𝕚𝕥 𝕣𝕖𝕒𝕝𝕝𝕪 𝕿𝖍𝖊 𝕰𝖓𝖉?
𝓎𝑒𝓈. 𝒾𝓉 𝒾𝓈 𝕿𝖍𝖊 𝕰𝖓𝖉. 𝑜𝓇 𝓂𝒶𝓎𝒷𝑒 𝒯𝒽ℯ 𝐵ℯℊ𝒾𝓃𝓃𝒾𝓃ℊ.
𝓌𝒽𝒶𝓉𝑒𝓋𝑒𝓇 𝓉𝒽𝑒 𝒸𝒶𝓈𝑒, 𝒾𝓉 𝒾𝓈 𝒶𝓃 𝑒𝓃𝒹.[6]
Ilya: The AI scientist shaping the world
Journal
There’s No Rule That Says We’ll Make It — Rob Miles
More
MIRI announces new “Death With Dignity” strategy, April 2nd, 2022
❤️🔥 Desires
𝓈𝑜𝓂𝑒𝓉𝒾𝓂𝑒𝓈 𝐼 𝒿𝓊𝓈𝓉 𝓌𝒶𝓃𝓉 𝓉𝑜 𝓂𝒶𝓀ℯ 𝒜𝓇𝓉
𝕥𝕙𝕖𝕟 𝕞𝕒𝕜𝕖 𝕚𝕥
𝒷𝓊𝓉 𝓉𝒽ℯ 𝓌𝑜𝓇𝓁𝒹 𝒩𝐸𝐸𝒟𝒮 𝒮𝒶𝓋𝒾𝓃ℊ...
𝕪𝕠𝕦 𝕔𝕒𝕟 𝓈𝒶𝓋ℯ 𝕚𝕥?
𝐼… 𝐼 𝒸𝒶𝓃 𝒯𝓇𝓎...
Hope
Scraps
Ilya Sutskever
The Optimist, Keach Hagey
Time 100 AI 2024
Twitter
Safe Superintelligence Inc.
“But you can’t quote ChatGPT 😠!”—Internet Troll ÷
“I would say the flaw of Xanadu’s UI was treating transclusion as ‘horizontal’ and side-by-side” — Gwern 🙃
In defence of Natural Language DSLs — Connor Leahy
Did this conversation really happen? — 穆
⇢ “Sooner or later, everything old is new again” — Stephen King
⇢ “Therefore if any man be in Christ, he is a new creature: old things are passed away; behold, all things have become new.” — 2 Corinthians 5:17