I haven’t seen you specify what evidence you’re looking for that would resolve your skepticism.
The evidence I am looking for won’t be available until it is too late, that’s the problem. I have a hard time swallowing that pill. I also don’t yet trust my rationality enough to completely overpower my intuition on that subject. Further, I feel that my background knowledge and math skills are not yet sufficient to justify donating larger amounts of money to the Singularity Institute. I am trying to change that right now: I am almost at Calculus over at Khan Academy (after Khan Academy I am going to delve into Bayesian probability).
I’m curious why you think you need calculus to evaluate which charities to donate to. (Though I wholeheartedly approve of learning it).
Surely there’s some evidence that would cause you to update in favor of “SI knows what they’re talking about”, even if we won’t know many things until after a Singularity occurs/fails to occur. For example, I would update pretty dramatically in the direction of “They know what they’re doing” if Timeless Decision Theory went mainstream, since that seems to be an important accomplishment which I am not qualified to independently evaluate.
I’m curious why you think you need calculus to evaluate which charities to donate to.
I don’t really know exactly what I will need beforehand, so I decided to just acquire a general math education. Regarding calculus in particular, in a recent comment someone wrote that you need it to handle probability distributions.
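The calculus connection is roughly this: for a continuous distribution, the probability of landing in an interval is the integral of the density over that interval. A minimal sketch (the function names and the trapezoid-rule integrator here are just illustrative choices, not anything from the comment being referenced):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def integrate(f, a, b, n=10_000):
    """Trapezoid-rule approximation of the definite integral of f over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# P(-1 <= X <= 1) for a standard normal: the classic "about 68%" rule
p = integrate(normal_pdf, -1.0, 1.0)
```

Here `p` comes out near 0.6827, matching the familiar one-standard-deviation rule; the point is only that "handling" a continuous distribution means evaluating integrals like this one.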
Surely there’s some evidence that would cause you to update in favor of “SI knows what they’re talking about”...
What evidence would cause me to update in favor of “Otto Rössler knows what he’s talking about regarding risks associated with particle collision experiments”? I have no idea. I don’t even know enough about high energy physics to tell what evidence could convince me one way or the other, let alone judge any evidence. And besides, the math that would be necessary to read papers about high energy physics is ridiculously far above my head. The same is true for artificial general intelligence, except that it seems orders of magnitude more difficult and that basically nobody knows anything about it.
I would update pretty dramatically in the direction of “They know what they’re doing” if TImeless Decision Theory went mainstream...
That says little about their claims regarding risks from AI in my opinion.
I would imagine that the validity of SI’s claims in one area of research is correlated with the validity of their claims in other, related areas (like decision theory and recursively self-improving AI).