[Question] What have been the major “triumphs” in the field of AI over the last ten years?

Contrary to what seems to be the experience of others, when I'm talking to normies about AI safety, the most common dissenting reaction I get isn't that they think AI will be controllable or safe. Convincing them that computers with human-level intelligence won't have their best interests at heart by default tends to be rather easy.

More often the issue is that AGI seems very far away, and so they don't think AI safety is particularly important. Even when they say that's not their sticking point, alerting them to the existence of tools like GPT-3 tends to impart a sense of urgency and "realness" to the problem that makes them take it a bit more seriously. There's a significant qualitative difference in the discussion before and after I show them all of the crazy things OpenAI and DeepMind have built.

I have a general sense that progress has been speeding up, but I’d like to compile a list of relevant highlights. Anybody here willing to help?