P(surviving) = (1 - P(disaster A)) * (1 - P(disaster B)) * (1 - P(disaster C))
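A minimal sketch of that formula in Python, with made-up per-period probabilities for three independent risks (all the numbers here are assumptions for illustration, not estimates):

```python
# Survival requires surviving every independent risk,
# so the survival probabilities multiply.
# All values are hypothetical.
p_disaster = {"A": 0.10, "B": 0.05, "C": 0.02}

p_surviving = 1.0
for p in p_disaster.values():
    p_surviving *= 1.0 - p

print(f"P(surviving all) = {p_surviving:.4f}")  # 0.8379
print(f"P(at least one disaster) = {1 - p_surviving:.4f}")
```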
In today’s international equilibrium, life will die with 100% probability.
But ‘DON’T PANIC’.
With AI, quantum computers, fusion, ‘code is law’, spaceships, and other advances, we have at least a chance to jump into new equilibria with some kind of diversification (planets, star systems, etc.).
Thinking about existential risk in the context of AI alone is a great and dangerous egoism, because we have many problems on many different fronts.
AI safety needs to be built in a systemic manner, together with all the other risks: pandemics, cosmic threats, nuclear war, energy, etc.
We don’t need a course like ‘Humanity will not die from AI’.
We need something like ‘sustainable growth for all forms of life’, ‘rationality with humans in charge’, or something similar.
It looks like we need complex models that cover all existential risks, and we need to use every chance we can find.
Could you give the probability of an AI threat in one particular year?
And how does it rise over the years?
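To make that concrete, here is a sketch assuming a constant, purely hypothetical 1% annual probability of an AI disaster; even a small yearly risk compounds toward near-certainty:

```python
# Cumulative risk from a constant annual probability.
# The 1% figure is an assumption for illustration only.
p_annual = 0.01

for years in (1, 10, 36, 100, 200):
    p_cumulative = 1.0 - (1.0 - p_annual) ** years
    print(f"{years:4d} years: P(disaster) = {p_cumulative:.1%}")
```

With these assumed numbers, the risk grows from 1% in one year to about 30% over 36 years and about 87% over 200 years.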
If we get AGI (or it gets us), will we be out of danger?
I mean, if we pass the AGI threat, will we face new threats?
Or could a good AGI solve the other problems?
For example, we use only a grain of the energy the Sun gives to Earth, which is itself roughly a billionth of the Sun’s total output. Why would an AI need the atoms of human bodies when it could capture all that energy?
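A back-of-the-envelope check on that claim (constants are rounded; only the orders of magnitude matter):

```python
# Rough solar-energy arithmetic with approximate constants.
L_sun = 3.8e26       # solar luminosity, W
R_earth = 6.4e6      # Earth radius, m
AU = 1.5e11          # Earth-Sun distance, m
P_humanity = 2e13    # world energy use, W (~600 EJ/year)

# Fraction of the Sun's output intercepted by Earth's disc:
# pi * R_earth^2 / (4 * pi * AU^2)
frac_earth = R_earth ** 2 / (4 * AU ** 2)
P_at_earth = L_sun * frac_earth

print(f"Earth intercepts ~{frac_earth:.1e} of the Sun's output")
print(f"Power reaching Earth: ~{P_at_earth:.1e} W")
print(f"Humanity uses ~1/{P_at_earth / P_humanity:,.0f} of that")
```

So Earth catches roughly one two-billionth of the Sun’s output, and humanity uses roughly one ten-thousandth of what reaches Earth.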
Without big progress, that happens on average within 36-200 years.