Cryonics is an underappreciated path in the EA/rationalist communities, I think. Since a) we don’t know everything about the human body, b) we cannot predict how future technologies will work, and c) we believe AI will rapidly enhance biology, nobody can rule out cryonics having a > 0% chance of working. And since there is the option of insurance, which makes the total cost roughly 1.5k per year, a negligible amount, why isn’t it more popular? As someone put it, if you know the plane is going down and you are handed either a sketchy parachute or no parachute at all, you choose the former.
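To make the implicit expected-value arithmetic concrete (a minimal sketch; the probability $p$, the payoff $V$, and the 40-year horizon are illustrative assumptions, only the ~1.5k/yr cost comes from the comment itself):

$$
p \cdot V > C_{\text{total}}, \qquad C_{\text{total}} \approx 40\,\text{yr} \times \$1{,}500/\text{yr} = \$60{,}000
$$

Even $p = 0.001$ makes signing up positive in expectation whenever you value revival at more than the equivalent of \$60M, which is the “sketchy parachute beats no parachute” point in numbers.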
arisAlexis
The third option in alignment
I think many people silently read this and think: “I wish I had a circle or were in a place like this, but in my hometown, among the people that life brought me to meet, I am alone in the evening reading LessWrong posts.” I wonder how many of us think like that.
I frequently get accused, online and offline, of using LLMs to write. I am not, and I struggle to understand the meaning of this critique. I am used to writing passage titles, conclusions, etc. Does it mean my writing is dry? Too logical? That it sounds cheesy?
Are they an intelligent species with a will of their own?
Stop overthinking AI risk. People, here included, get lost in mental loops and complexity.
An easy guide, where every statement below is a fact:
We DO have evidence that scaling works and models are getting better
We do NOT have evidence that scaling will stall or reach a limit
We DO have evidence that models are becoming smarter in all human ways
We do NOT have evidence of a limit in intelligence that can be reached
We DO have evidence that smarter agents/beings can dominate other agents/beings in nature/history/evolution
We do NOT have evidence that a smarter agent/being was ever controlled by a less intelligent agent/being.
Given these easy-to-understand data points, there is only one conclusion: AI risk is real, and AI risk is NOW.
How can you know if it’s exaggerated? It’s like an earthquake: the fact that it hasn’t happened yet doesn’t mean it won’t be destructive when it does. The superintelligence slope doesn’t pause somewhere for us to evaluate it, nor do we have any kind of signal that the more time passes, the more improbable it becomes.
Let’s discuss for now, and then check in about it in 31 months.
I really don’t like these kinds of statements because they’re like a null bet: either the world has gone to hell and nobody cares about this article, or the author gets “I was correct, told ya” rights. I think statements like these should not be made in the context of existential risk.
My criticism is that the article is written as if it were categorically “correcting a faulty model” from an outsider’s position. Yes, you can of course suggest corrections if there is a blatant mistake. But the assumptions are the most important part of these models, and assumptions are best made by people who have worked at and contributed to the top AI labs.
Although I don’t like comments starting with “your logic slipped”, because they give off passive-aggressive “you are stupid” vibes, I will reply.
So what you are saying is: yes, this time is different, just not today. It will definitely happen, and all the doomerism is correct, but not on a short timeline, because ____ (insert reasoning that differs from what the top AI minds are saying today).
This is actually, and very blatantly, a self-preserving mechanism called “normalcy bias”, which is very well documented in our species.
Another data point is that there are virtually no marketing ads showing a white man with a black woman as a couple. Even when racial diversity needs to be shown, even in LGBT-friendly or racially inclusive groups, brochures, etc., it’s always a black man with a white woman and never vice versa. I guess it’s a chicken-and-egg problem.
But you need to frame this not like any other argument but as: “for the first time in the history of life on Earth, a species has created a new, superior species.” I think all these rebuttals are missing this specific point. This time is different.
I have some familiarity with AI but I am certainly no expert.
Can you explain how you reconcile this with a sweeping critique of some top AI experts? What if you are not a domain expert? How dangerous is it to go down a false path? Would a similar comment, “I have some familiarity with physics but I am no expert,” be acceptable in a critique of a fluid dynamics paper?
I think having a huge p(doom) versus a much smaller one would change this article substantially. If your p(doom) is 20-30% or even 50%, you can still be positive; in all other cases it sounds like a terminal illness. But since the number is subjective, living your life as if you know you are right is certainly wrong. So I take most of your article and apply it to my daily life, and the closest fit is being a Stoic. But by no means do I believe it would take a miracle for our civilization to survive. Our chances are better than that, and that distinction is important.
No. 2 is much more important than academic ML researchers, who make up the majority of survey respondents. When someone delivers a product, is the only one building it, and tells you X, you should believe X unless there is a super strong argument to the contrary, and there just isn’t.
I am kind of baffled about why this effort would end up at 0 votes or even downvoted. Is the message, “the world needs to know, or we should at least discuss it,” wrong, or is it the styling of the document?
Good point. I do not personally think that knowing there is a possibility you will die, without being able to do anything to reverse course, adds any value, unless you mean a worldwide social revolt against all nations to stop the AI labs?
But how do we get this message across? This reinforces my article’s point that not enough is being done, and that the discussion happens only in obscure LW forums.
To know or not to know
But you are comparing epochs before and after the Turing test was passed. Isn’t that relevant? The Turing test was (and is) unanimously regarded as an inflection point, and arguably most experts think we already passed it in 2023.
The fundamental difference in the bad-actor scenario is that the original actor is someone who wants to rule, whereas the researchers want to be ruled by their AI god.