This was also my impression, but I didn’t read much of the actual text of the plan so I figured Zvi knew better than me. But now my Aumann-updating-toward-Zvi and Aumann-updating-toward-Habryka have canceled out and I am back to my initial belief that the plan is bad.
I am also confused by the praise from Dean Ball who apparently worked on this plan. I thought he was pretty x-risk-pilled?
Dean Ball isn’t that x-risk-pilled (indeed, his engagement with this community has mostly been to argue against concern about x-risk). He does appear to be a pretty reasonable guy who buys a bunch of arguments that AI could be dangerous in the future. See this post of his, which I feel gives a decent flavor of his perspective: https://www.hyperdimensional.co/p/where-we-are-headed