Given that the process of scientific research has many AGI traits (opaque, self-improving, amoral as a whole), I wonder how rational it is for laypersons to trust it. I suspect the answer is: not very. Primarily because, just as with an AGI improving itself, no one, not even insiders in the process, can actually guarantee that the process won't, over its endless iterations, produce an X-risk. And indeed, that process is the only plausible source of man-made X-risk. This is basically the technological black-ball thought experiment from Bostrom's "The Vulnerable World Hypothesis". But Bostrom's proposed solution is to double down, with his high-tech panopticon.
I have an intuition that such instances of doubling down are indications that the scientific research process itself is misaligned.
I started an AI X-risk awareness Twitter account. Introducing @GoodVibesNoAI. It's about collating reasons to believe civilization will collapse before it gets to spawn a rogue superintelligence that consumes all matter in the Laniakea Supercluster. A good outcome, all things considered.
What do you think about it? Any particular people I should follow? I'm also considering doing a weekly roundup of the articles I post there and turning that into a newsletter.