Questions for old LW members: how have discussions about AI changed compared to 10+ years ago?

I wasn’t on LessWrong 10 years ago, during the pre-ChatGPT era, so I’d like to hear from people who were here back in the day. I want to know how AI-related discussions have changed and how people’s opinions have changed.

  1. Are you leaning more towards “alignment is easy” or towards “alignment is hard”, compared to many years ago? Is the same true for the community as a whole?

  2. Which ideas and concepts (“pivotal act”, CEV, etc.) that were discussed years ago do you think are still relevant, and which ones do you think are obsolete?
    Is there some topic that was popular 10+ years ago that barely anybody talks about these days?

  3. Are you leaning more towards “creating AGI is easy” or towards “creating AGI is hard”, compared to many years ago? Is the same true for the community as a whole?

  4. What are you most surprised about? To put it another way, is there something about modern AI that you could not have predicted back in 2015?
    I am surprised that LLMs can write code yet cannot reliably count the number of words in a short text. Recently, I asked Gemini 2.5 Pro to write a text with precisely 269 words (and even specified that spaces and punctuation don’t count as words), and it gave me a text with 401 words. Of course, there are lots of other examples where LLMs fail in surprising ways, but this is probably the most salient example for me.
    If in 2015 I had tried to predict what AI would look like in 2025, I definitely would not have been able to predict that AI would be able to write Python code yet unable to count to a few hundred.
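For what it's worth, checking an LLM's word count programmatically is trivial, which makes the failure above all the more striking. A minimal Python sketch (the function name and the rule that punctuation-only tokens don't count are my own choices, matching the request I gave Gemini):

```python
import string

def count_words(text: str) -> int:
    """Count whitespace-separated tokens that contain at least one
    non-punctuation character, so a stray dash or ellipsis is not
    counted as a word."""
    return sum(
        1 for token in text.split()
        if token.strip(string.punctuation)
    )

sample = "Hello, world - this is a test."
print(count_words(sample))  # the lone dash is not counted: 6
```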