[Question] Did AI pioneers not worry much about AI risks?

It seems noteworthy just how little the AI pioneers of the 1940s-80s appeared to care about AI risk. There is no obvious reason why a book like “Superintelligence” couldn’t have been written in the 1950s, yet for some reason that didn’t happen.

I can think of three possible reasons for this:

1. They actually DID care and published extensively about AI risk, but I’m simply not well enough schooled in the history of AI research to know about it.

2. Deep down, the people involved in early AI research knew that they were still a long, long way from achieving significantly powerful AI, despite the optimistic public proclamations they made at the time.

3. AI risks are highly counterintuitive, and it simply took another 60 years of thinking to understand them.

Anyone have any thoughts on this question?