As a result, Less Wrong is well positioned to find and correct errors in the public discourse.
Risky. We could perhaps survive some discussion of public policy without damage, but once a threshold was crossed we would just start fracturing into blue and green teams.
What we need to do, however, is analyze how many recruits we would lose by taking a stand on a given issue (and, even more damaging, how many non-rationalists on “our” team we would attract), then compare the utility lost to future less-rational behavior against what is gained.
Wherever large amounts of utility depend on clear and accurate information, that information is not already prevalent, and we have the ability to produce or properly filter it, then we ought to do so; lots of utility depends on it.
You are confusing having information with convincing other people to believe your analysis.
Even if it’s incompatible with status signaling, or off topic, or otherwise incompatible with non-vital social norms.
No. Just no.
The only time you can ignore signaling is when your positions just happen to match good signals. You can afford an occasional action that sends bad signals, but things like policy, ideology, or even mere positions need to be couched in nice signals.
Also, there is no consensus on what counts as a vital social norm. I think we can agree that frequent random killing sprees would fall into that category, but even that, if surrounded by the right memetic scaffolding, could be made to work (perhaps a “kill the guy you hate day” could work).
Social stability and prosperity are mostly polygenic.
Any talk of social norms is always a compromise between people of different values. The exact point of compromise depends heavily on its costs and benefits, but a debate about this can never be had without bringing values into the discussion. And people have an incentive to propose bad policy, to the point of outright deception, when it accords with their values.
Kill likely-to-succeed AGI creators who haven’t created a sane goal system (when no other means will work to stop them). Although I know Tim doesn’t accept even that exception.
Me? Yes, those who go on a programmer-killing spree are unlikely to be viewed favourably by me. I don’t think all murder is impossible to justify, but prospective killers would need to be really, really convincing about their motivation in order to avoid being blackballed by the rest of society.
“I have this paranoid fantasy about their mechanical offspring taking over the world” is not the type of thing that would normally be regarded as adequate justification for killing people.
How many, over the decades, have fallen under “likely to succeed”? e.g. according to scientists/”experts”, investors, project leaders, etc. Whose estimate gets used, anyway?
How many, over the decades, have fallen under “likely to succeed”?
None.
e.g. according to scientists/”experts”, investors, project leaders, etc. Whose estimate gets used, anyway?
Whoever is making the decision. That’s how decisions work. That person would use whatever information is relevant to them, then decide whether they need to act to prevent the destruction of all things good and light, or whether they instead need to act to stop someone who, they believe, intends to kill out of paranoia.