Nick Bostrom’s paper says that in the long run we should expect extinction, stagnation, posthumanity, or oscillation. But he classifies a global state that uses social control technologies (ubiquitous surveillance, lie detectors, advanced education techniques) to maintain a steady technology level, even without generally superintelligent machines, as falling into the radical change category.
What strong bioconservatism needs in order to work is a global order/singleton that can maintain its values, not necessarily superintelligent software entities.
Also, while I like the post, I wonder if it would work better for your own blog than Less Wrong, since it doesn’t really draw on or develop rationality ideas very much.
This is an interesting suggestion, but I would claim that if we reach the stage where a point whose understanding and refinement is crucial to attaining our goals (that is, to winning) is ruled unsuitable because it isn’t about “rationality”, then we have moved away from the true spirit of instrumental rationality.
It seems that what is getting voted up at the moment is mainly generic rationality stuff, not future/planning-oriented stuff (not that I ever expected my stuff to get voted up).
Generic rationality is maybe the only thing we share, and worrying about the future is perhaps only a minority pursuit on LW now.
I think it is appropriate to have (non-promoted) articles on side topics of interest to large segments of the Less Wrong community.
Good point about the option on promotion.