Without big progress, on average within 36-200 years:
P(surviving) = (1 - P(disaster A)) * (1 - P(disaster B)) * (1 - P(disaster C))
In today's international equilibrium, life dies with probability approaching 100%.
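For intuition, here is a minimal Python sketch of that survival formula for independent risks; the per-risk probabilities are made-up placeholders, not estimates:

```python
# Survival against several independent existential risks:
# P(survive all) = product of (1 - p_i) over the risks.
# The probabilities below are illustrative placeholders only.
risks = {
    "AI": 0.10,        # hypothetical chance this risk fires in the period
    "pandemic": 0.05,  # hypothetical
    "nuclear": 0.07,   # hypothetical
}

p_survive = 1.0
for name, p_disaster in risks.items():
    p_survive *= 1.0 - p_disaster  # must survive each risk independently

print(f"P(survive all) = {p_survive:.3f}")                 # ~0.795
print(f"P(at least one disaster) = {1.0 - p_survive:.3f}")  # ~0.205
```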
but ‘DON’T PANIC’
With AI, quantum computers, fusion, "code is law", spaceships, and other technologies, we at least have a chance to jump into new equilibria with some kind of diversification (planets, star systems, etc.)
Thinking of existential risk only in the context of AI is a dangerous egoism, because we have many problems across different domains.
AI safety needs to be built in a systemic manner, together with all the other risks: pandemics, cosmic threats, nuclear, energy, etc.
We don't need a course called "Humanity will not die from AI".
We need something like "sustainable growth is needed for all forms of life", or "rationality with humans in charge", or something similar.
It looks like we need comprehensive models that account for all existential risks, and we need to use every chance we can find.
Could you give the probability of an AI catastrophe in one particular year?
And how does it rise over the years?
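If the per-year probability is roughly constant, it compounds over n years as 1 - (1 - p)^n. A small sketch (the 1% annual rate is a made-up illustration, not an estimate):

```python
# How a constant annual catastrophe probability accumulates over time.
# p_year = 0.01 is an illustrative assumption, not a real estimate.
p_year = 0.01

for years in (1, 10, 50, 100, 200):
    p_cumulative = 1.0 - (1.0 - p_year) ** years  # P(catastrophe within `years`)
    print(f"{years:>3} years: P(catastrophe) = {p_cumulative:.1%}")
```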
If we get AGI (or it gets us), will we be out of danger?
I mean, if we pass the AGI threat, will we face new threats?
Or could a good AGI solve the other problems?
For example, Earth intercepts only about one part in two billion of the energy our star, Sol, emits, and humanity uses only a grain of even that. Why would an AI need the atoms of human bodies when it could take all that energy?
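A rough back-of-the-envelope check with standard textbook values for the Sun and Earth:

```python
import math

# What fraction of the Sun's total output does Earth intercept?
L_SUN = 3.828e26   # solar luminosity, watts
R_EARTH = 6.371e6  # Earth radius, metres
AU = 1.496e11      # Earth-Sun distance, metres

# Earth presents a disc of area pi*R^2 against a sphere of area 4*pi*AU^2.
fraction = (math.pi * R_EARTH**2) / (4.0 * math.pi * AU**2)

print(f"fraction of solar output hitting Earth ~ {fraction:.1e}")  # ~4.5e-10
print(f"power intercepted by Earth ~ {L_SUN * fraction:.1e} W")    # ~1.7e17 W
```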
Good day!
I fully share the views expressed in your article. Indeed, the ideal solution would be to delete many of the existing materials and to reformat the remaining ones into a format understandable to every novice programmer, transhumanist, or even an average person.
As a poker player and a lawyer assisting consumers who have suffered from the consequences of artificial intelligence, as well as someone interested in cryptocurrencies and existential risks, I first invested in Eliezer Yudkowsky's ideas many years ago. At that time, I saw how generative-predictive models easily outplayed poker players, and I wondered whether it was possible to counteract this. Since then, I have not seen a single serious security study conducted by anyone other than the players themselves, nor any independent system that could answer the question of whether such models can even be researched on their own data.
And in the realm of cryptocurrencies, money continues to be stolen with the help of AI, with no recourse or refund in sight.
My prediction is that we have already lost the battle against AGI, but in the next 12 years we have a chance to make the situation a bit better: to create game conditions under which this player, or its precursors (AI users), will have more aligned (lawful good) elements.
It seems that the very intelligent are also very stubborn and see no doubt in their positions; such high IQs are very dangerous. They think they are right about everything and have understood it all, but we are just a few perspectives in a vast, incomprehensible world where we understand almost nothing. We are all wrong.
Yes, you're probably a couple of sigmas smarter than the median person, but it is exactly such a person you need to convince, the median one, or even someone a couple of IQ sigmas lower, not to launch anything. It's not just OpenAI developing AGI;
others are too, doing research and making decisions, and they might not even know who Eliezer Yudkowsky is or what the lesswrong website is. They might visit some copy of the site, see that it's clear we shouldn't let AGI emerge, think about graphics cards, and, since there are many graphics cards in decentralized mining, decide to take control of them.
If we're lucky, their underlings will just steal the cards and use them for mining, and everything will be fine.
But various research, like changing the sign of a function and creating something dangerous, is better removed.
Another strange thing is the super-ethical laws for Europe and the US. There are a lot of jurisdictions. Even the Convention on Cybercrime is not universal, and among cybercrimes of universal jurisdiction there are no crimes concerning existential risks. So many international media laws are just declarations, without real procedures and without any real power.
Many laws aren't adhered to in practice. There are different kinds of people; for some, the criminal code is like a menu, and if you don't even have to pay for what you order from that menu, it's doubly bad.
There are individualists, and among transhumanists I'm sure there are many who would choose their own lives and the lives of a million people close to them over the rest of humanity. That's not good; it's unfair. The system should be for all billions of people.
But there are also those in the world who, if presented with a “shut down server” button, will eventually press it. There are many such buttons in various fields worldwide. If we take predictions for a hundred years, unless something radically changes, the likelihood of “server shutdown” approaches 1.
So it's interesting whether, through open-source AGI or any other framework or model, we could create some universal platform with a rule system that on one hand does universal monitoring of all existential problems, and on the other provides clear, beneficial instructions for the median voter, as well as for the median worker and their masters.
Culture is created by the spoon. Give us a normal, unified system that encourages correct behavior with respect to existential risks, since you've won the genetic and circumstantial lottery and were born with high IQ and social skills.
Usually, the median person is interested in: jobs, a full fridge, rituals, culture, the spread of their opinion leader’s information, dopamine, political and other random and inherited values, life, continuation of life, and the like.
Provide a universal way of obtaining all this and just monitor it calmly. And let it touch on all the existential risks: ecology, physics, pandemics, volcanic activity, space, nanobots, the atom.
The Doomsday Clock stands at 23:55 not only because of AGI risk; what selfishness to think so.
Sometimes it seems that Yudkowsky is the Girolamo Savonarola of our day. And as for the system of procedures that the Future of Life Institute and Eliezer have already invented, it is their execution that matters!
Sadly, for humanity today it's profitable to act first and ask forgiveness later. Many businesses are built the way Binance is now, without responsibility, "don't FUD, just build", and all the powerful AI and other startups work the same way. Many experimental research programs are not 100% sure they are safe for the planet. In the 20th and 21st centuries this became normal. But it shouldn't be.
And these are the real conditions of the problem, the real patterns of life. And in crypto there are many graphics cards, collected into decentralized networks, gathering into large decentralized nodes and clusters that cannot be switched off. Are they a danger?
We need systems of cheap protection, brakes, and incentives for their use! And as with seat belts, we should teach this from childhood. Something even simpler than Khan Academy. HPMOR was great, but do we have anything for the next generations, who haven't seen or don't like Harry Potter? What would it be, to explain the problem?
Laws and rules that are just for show, unenforceable, are only harmful. Since ancient times it has been known that any legal rule consists of three parts: hypothesis, disposition, and sanction. Without powerful procedural law, all these substantive legal norms are worthless, or more precisely, a boon for the malefactor. If we don't procedurally protect people from wrongful AI, introducing soothing, non-working ethical rules will only increase volatility and the advantage and likelihood of wrongful AI, even if we are lucky enough to get its alignment right in principle.
I apologize if there were any offensive remarks in the text, or if it seemed like an unstructured rant expressing incorrect thoughts; that's how my brain works. I hope I'm wrong, so please point out where. Thank you for any comments and for your attention!