I came here from The Social Alignment Problem, where the author wrote "A petition was floated to shut down Bing, which we downvoted into oblivion." I jumped in, saw the title, and had the best laugh of my life. Then I read "Bing Chat is blatantly, aggressively misaligned" and laughed even more. Love it.
Chris バルス
Would you like to make a case for why you believe that, say, DeepMind would not produce an AI that poses an x-risk, but a smaller lab would? It's not intuitive to me why this would be the default case. Is it because we expect smaller labs to have fewer, or zero, guardrails in place?
Is political polarization the thing that risks making this negative EV? Is the topic of AI x-risk easily polarized? If biological weapons could be developed today (rather than back when the world was different, e.g. slower information transfer, lower polarization) and we wanted to prevent that, would it become polarized if we put our bio expert on this show? What about nuclear non-proliferation? The underlying dynamic there is somewhat similar: 1. short timelines, 2. risk of extinction.
Surely we can imagine the less conservative side wanting AI for the short-term piles of gold, so there is a risk of polarization from some parties being more risk-accepting and pro-innovation.
A mote of light in an ocean of despair. Just what I needed. Thanks, Akash.
It makes sense that we shouldn’t use GDP, a lagging indicator, as a proxy for intelligence, a leading indicator...