[Question] How would the Scaling Hypothesis change things?

The Scaling Hypothesis roughly says that current Deep Learning techniques, given ever more computing power, data, and perhaps some relatively minor improvements, will scale all the way to human-level AI and beyond. Let’s suppose for the sake of argument that the Scaling Hypothesis is correct. How would that change your forecasts or perspectives on anything related to the future of AI?

  • Would your forecasts for AI timelines shorten significantly?

  • Would your forecasts change for the probability of AI-caused global catastrophic/existential risks?

  • Would your focus of research or interests change at all?

  • Would it change your general perspective on the current state and/or future of AI?

  • Would it change any of your forecasts or perspectives in areas outside AI that might nonetheless be affected by it?

  • Would it perhaps even change your perspective on life?