I’m lazy, so I’ll just copy the content of my earlier comment:
IMO nobody so far has managed to propose an FAI approach that wouldn’t be riddled with serious problems. Almost none of them work if we have a hard takeoff, and a soft takeoff might not be any better, since it allows lots of different AGIs to compete, leading to [various evolutionary scenarios in which it seems highly unlikely that humans will come out on top]. If there’s a hard takeoff, you need to devote a lot of time and effort to making the design safe and also be the first one to have your AGI undergo a hard takeoff, two mutually incompatible goals. That’s assuming you even have a clue of what kind of design would be safe. Something CEV-like could qualify as safe, but it currently remains so vaguely specified that it reads more like a list of applause lights than an actual design, and even getting to the point where we could call it a design seems to require solving numerous difficult problems, some of which have remained unsolved for thousands of years, while our remaining time might be counted in tens of years rather than thousands or even hundreds… and so on and so on.
Not saying it’s impossible, but there are far more failure scenarios than successful ones, and an amazing number of things would all have to go right for us to succeed.
Disagree.