Version 1 (adopted):
Thank you, shminux, for bringing up this important topic, and to all the other members of this forum for their contributions.
I hope that our discussions here will help raise awareness about the potential risks of AI and prevent any negative outcomes. It’s crucial to recognize that the human brain’s positivity bias may not always serve us well when it comes to handling powerful AI technologies.
Based on your comments, it seems like some AI projects could be perceived as potentially dangerous, similar to how snakes or spiders are instinctively seen as threats due to our primate nature. Perhaps, implementing warning systems or detection-behavior mechanisms in AI projects could be beneficial to ensure safety.
In addition to discussing risks, it’s also important to focus on positive projects that can contribute to a better future for humanity. Are there any lesser-known projects, such as improved AI behavior systems or initiatives like ZeroGPT, that we should explore?
Furthermore, what can individuals do to increase the likelihood of positive outcomes for mankind? Should we consider creating closed island ecosystems with the best minds in AI, as Eliezer has suggested? If so, what would be the requirements and implications of such places, including the need for special legislation?
I’m eager to hear your thoughts and insights on these matters. Let’s work together to strive for a future that benefits all of humanity. Thank you for your input!
Version 0:
Thank you, shminux, for this topic, and thanks to the other members of this forum!
I hope I won't die to AI in some absurd manner after this comment. The human brain needs to stay positive; without that, it can't work well.
According to your text, it looks like the buttons of any open AI project could be made to look like a SNAKE or a SPIDER, to warn the user at a genetic level that there is something dangerous inside.
You already know many things about primate nature, so all you need is to use that knowledge to get what you want.
This is the last mental journey of humankind's brains: win a GOOD future or lose it!
What other GOOD projects could we focus on?
What projects have already been done that no one knows about? Better AI behavior-detection systems? ZeroGPT?
What should people do to raise the probability of good scenarios for mankind?
Should we create closed island ecosystems with the best minds in AI, as Eliezer said in the Bankless YouTube video, or not?
What are the requirements for such places? We would then need to create special legislation for such semi-independent territories. It's possible, but talking with governments is hard work. Do we REALLY need it, or were those just emotional words from Eliezer?
Thank you for your answers!
Without big progress, on average within 36–200 years:
P(surviving) = (1 − P(disaster A)) × (1 − P(disaster B)) × (1 − P(disaster C))
In today's international equilibrium, life will die with 100% probability.
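The survival formula can be sketched numerically, assuming the disasters are independent. The per-risk probabilities below are made-up placeholders for illustration, not estimates from this post:

```python
# Sketch: survival probability under several independent existential risks.
# The individual probabilities are hypothetical illustrative numbers.
risks = {
    "disaster A": 0.10,
    "disaster B": 0.05,
    "disaster C": 0.02,
}

p_survive = 1.0
for p_disaster in risks.values():
    p_survive *= 1.0 - p_disaster   # must avoid every single disaster

p_any_disaster = 1.0 - p_survive

print(f"P(survive all)  = {p_survive:.4f}")       # 0.90 * 0.95 * 0.98 = 0.8379
print(f"P(any disaster) = {p_any_disaster:.4f}")
```

Note that even modest individual risks multiply into a substantial total risk once you consider them all together.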
but ‘DON’T PANIC’
With AI, quantum computers, fusion, code-is-law, spaceships, and other technologies, we have at least a chance to jump into new equilibria with some kind of diversification (planets, star systems, etc.).
Thinking of existential risk only in the context of AI is dangerous egoism, because we have many problems in many different areas.
AI safety needs to be built in a systematic manner together with all the other risks: pandemics, cosmic threats, nuclear, energy, etc.
We don't need a course called "Humanity will not die from AI".
We need something like "sustainable growth is needed for all forms of life", or "rationality with humans in charge", or something similar.
It looks like we need comprehensive models that account for all existential risks, and we need to use every chance we can find.
Could you give the probability of an AI catastrophe in one particular year?
And how does it rise over the years?
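To illustrate how even a small constant annual risk compounds over time, here is a minimal sketch. The 1% per-year figure is a hypothetical placeholder, not an estimate:

```python
# Compounding of a constant annual catastrophe probability.
# p_yearly is a made-up placeholder value, not a real-world estimate.
p_yearly = 0.01

for years in (10, 50, 100, 200):
    p_survive = (1.0 - p_yearly) ** years
    print(f"{years:>3} years: P(at least one catastrophe) = {1.0 - p_survive:.2f}")
```

Under this assumption the cumulative risk grows steadily: roughly 10% after a decade, and well over half after a century, which is why the per-year number matters so much.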
If we get AGI (or it gets us), will we be out of danger?
I mean, if we pass the AGI threat, will we face new threats?
Could a good AGI solve the other problems?
For example, we use only a grain of the energy — one part in billions — that the star Sol radiates toward Earth. Why would an AI need atoms from human bodies when it could capture all of that energy?
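A rough back-of-envelope check of that energy claim, using standard public figures (solar luminosity ≈ 3.8×10²⁶ W, solar constant ≈ 1361 W/m², world primary energy use ≈ 600 EJ/yr ≈ 1.9×10¹³ W — all approximate, rounded values):

```python
import math

SOLAR_LUMINOSITY_W = 3.8e26    # total power output of the Sun, watts (approx.)
SOLAR_CONSTANT_W_M2 = 1361.0   # solar irradiance at Earth's distance, W/m^2
EARTH_RADIUS_M = 6.371e6       # mean Earth radius, metres
HUMAN_POWER_W = 1.9e13         # rough world primary energy use (~600 EJ/yr)

# Earth intercepts sunlight over its cross-sectional disc, pi * R^2.
earth_intercepted_w = SOLAR_CONSTANT_W_M2 * math.pi * EARTH_RADIUS_M ** 2

fraction_of_sun = earth_intercepted_w / SOLAR_LUMINOSITY_W
fraction_humans_use = HUMAN_POWER_W / earth_intercepted_w

print(f"Earth intercepts {earth_intercepted_w:.2e} W of sunlight")
print(f"...which is about {fraction_of_sun:.1e} of the Sun's total output")
print(f"Humanity uses about {fraction_humans_use:.1e} of what Earth intercepts")
```

So Earth itself catches only about one part in two billion of the Sun's output, and humanity uses about a ten-thousandth of what reaches Earth — the available energy headroom is enormous either way.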