No need to overthink AI risk. People, including here, get lost in mental loops and complexity.
Here is an easy guide, where every point is a fact:
We DO have evidence that scaling works and models are getting better
We do NOT have evidence that scaling will stall or reach a limit
We DO have evidence that models are becoming smarter across all human abilities
We do NOT have evidence of any upper limit to intelligence that could be reached
We DO have evidence that smarter agents/beings can dominate other agents/beings in nature/history/evolution
We do NOT have evidence that a smarter agent/being was ever controlled by a less intelligent one.
Given these easy-to-understand data points, there is only one conclusion: AI risk is real, and AI risk is NOW.
Are they an intelligent species with a will of their own?