The underlying idea — that Artificial Intelligences are likely to come from and work in corporate and financial settings, leading to a very specific kind of mild friendliness/unfriendliness (i.e. pursuing goals that benefit some humans and are very unlikely to end the world, but which are not the "good altruistic goals" one would naively associate with a positive singularity) — is something I have thought about for some time and agree with wholeheartedly.
I can’t speak to your views on, for example, quantitative easing. I don’t understand the subject.
I do notice some grammatical errors in the post ("more smarter"), and the formatting is a little odd and less polished than most posts I see.
I think that, had this post started out with an abstract in the discussion section, it would have received several upvotes and been worth promoting to the main page. As it is it feels in need of a rewrite or two.