Additional factors:
- handicaps and komi: the way these are built into the culture means it's a lot easier to have a balanced game with pretty much anyone, without worrying about the weaker player complaining that it isn't fair (rough sketch of the convention after this list)
- go can be a lot less aggressive, in that the aim is more to neutralise the opponent than to kill them; I find that nicer in some sense
- gobans are prettier than chess boards; fight me :D
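To make the handicap convention concrete, here's a rough sketch in Python. The numbers are the usual rules of thumb (one stone per rank of difference, capped at 9; exact komi varies by ruleset), and the function name is mine:

```python
def even_game_setup(rank_gap: int) -> tuple[int, float]:
    """Rough sketch of go's conventional handicap system.

    rank_gap: rank difference between the players (e.g. 6 kyu vs 2 kyu -> 4).
    Returns (handicap_stones, komi). Rule of thumb: one handicap stone per
    rank of difference, usually capped at 9; with any handicap, komi drops
    to 0.5 so the game can't end in a draw.
    """
    if rank_gap <= 0:
        return 0, 6.5  # even game; 6.5 is a common (Japanese-rules) komi
    return min(rank_gap, 9), 0.5

print(even_game_setup(4))  # a 4-rank gap -> (4, 0.5)
```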
I'm in the same situation as you re education status. That being said, my understanding of your 5th point is that "nanotechnology" doesn't necessarily mean literal nanotechnology; it's more of a placeholder for generic magic technology that can't be specifically foreseen, the way gunpowder or the internet once were. It seems like this is obvious to you; I just wanted to make sure.
Gunpowder took a few centuries to totally transform the battlefield, the internet a few decades. Looking at history, revolutionary inventions seem to arrive more and more often and to take less and less time to be developed. So it seems safer to be pessimistic and assume that a new disruptive technology could be invented on really short timescales, e.g. some super-bacterium made via CRISPR. These inventions benefit from centuries of prior research (standing on shoulders, etc.), and there's also the fruitfulness of combining domains.
Next, there seems to be an assumption that research ability scales somehow with intelligence. Maybe not linearly, but still. This seems somewhat valid: humans have invented a lot more than killer whales, who in turn have invented a lot more than marmots. So if you manage to create something a lot more intelligent (or even just twice as intelligent, whatever that means), it seems reasonable to assume it could get a corresponding speed-up in research ability. This could of course be invalidated by your 6th point.
Also, a limiting factor in research can be that you have to run lots of experiments to see whether things work out. Simulations can help a lot with this, and they don't even have to be very precise to be useful. So you could imagine an AI that wants to find a way to kill off humans and looks for something poisonous: it could make a model that classifies molecules by toxicity, search for something [maximally toxic](https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx), and then only test the top ten candidates in the real world.
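A minimal sketch of that screen-then-test pattern, with a stand-in scoring function where a real pipeline would plug in a trained model (all names here are made up for illustration):

```python
import random

def predicted_score(candidate: str) -> float:
    """Stand-in for a learned property predictor (e.g. a toxicity
    classifier). Here it just returns noise; a real pipeline would
    call a trained model instead."""
    return random.random()

def top_k_for_testing(candidates: list[str], k: int = 10) -> list[str]:
    """Screen-then-test: cheaply score every candidate with the
    surrogate model, then hand only the k best to (expensive, slow)
    real-world experiments."""
    scored = [(predicted_score(c), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:k]]

# Scoring a million candidates in simulation takes seconds;
# physically testing ten of them is what actually takes time.
candidates = [f"molecule-{i}" for i in range(1_000_000)]
print(top_k_for_testing(candidates))
```

The point isn't the code, it's the asymmetry: the simulation step is embarrassingly cheap next to the lab step, so even a mediocre model collapses the experimental bottleneck.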
It's not a given that any of these assumptions would hold. But if they did, then Bad Things would happen Fast, which seems like something worth worrying about a lot. I also have the feeling that it depends on what kind of AI is posited:
- If it's just a better Einstein, then it's unlikely that it'll manage to kill everyone off too quickly.
- If it's a better Einstein that also thinks 1000 times faster (human brains don't actually run all that fast), then we're in trouble (see the quick arithmetic after this list).
- If it's properly superhumanly intelligent (i.e. >400 IQ? dunno?), then who knows what it could come up with. And that's before considering how fast it thinks.
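To put the 1000× figure in perspective: 365 × 24 / 1000 ≈ 8.8, so a mind running a thousand times faster would get a subjective year of thinking done every nine hours or so of wall-clock time, i.e. a century of Einstein-years roughly every five weeks.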