Thanks Boaz and Parv for writing these. I think there are a few important details that didn’t get past the information bottleneck that is natural language.
Note: Parv (author of this post) and I are close friends in real life. We work on AIS field building and research together, so my context with him may skew my interpretation of his post and this discussion.
What does being ok mean? I can infer roughly two definitions from the discussion.
(1) Being ok means “doing well for yourself”, which includes financial security, not being in the hypothesized permanent underclass, and living a fulfilling life in general.
(2) Being ok means (1) AND not seeing catastrophic risk materialize (even if it doesn’t impact you as much), which some of us assign intrinsic value to. I think this is what Parv meant by “I did not want the world with these things to end”.
Boaz, I think you’re referring to definition (1) when you say the quote below, right? We likely won’t be okay under definition (2), which may be why the emotions imparted by Parv’s piece resonated with so many readers. (Unsure; inviting Parv to comment himself.)
“I believe that you will most likely be OK, and in any case should spend most of your time acting under this assumption.”
However, under either definition, I agree that it is productive to act under the belief “I will be okay if I try my hardest to improve the outcome of AI”.