an endorsement of that ‘negative reinforcement’ strategy by this community?
Only socially, I imagine—via downvotes yes, bombs no.
I’d guess it’s mostly about the belief that blackmail was involved, but there’s only one way to test that.
If, for example, this person is employed at a marketing agency that took work from a client who sells nicotine products, his manager will make a strong appeal to his selfishness (‘so what have you been working on?’).
I imagine people react differently to “my work has bad incentives in place, it’s a shame I’m not paid for not doing X” than to “I’m looking for a job which doesn’t encourage/involve doing bad things.” (Yes, people demand ‘altruism’ of others.)
‘a magic formula that will double the number of smokers in the next generation’ … [is] a dangerous idea.
The question is, can this be reversed? Can a formula for reducing the number of smokers be devised instead? Or is the thing you describe just the reverse of this (work on how to reduce harm turned into work on how to increase harm)?
To use the zombie-words example I raised in a previous comment:
Imagine a “human shellcode compiler”, which requires a large amount of processing power and can generate a phrase that anyone who hears it will instantly obey, with no countermeasure available other than ‘not hearing the phrase’. Theoretically, this could have good applications if very carefully controlled (“stop using heroin!”).
Now imagine someone runs this to compile a command like ‘devour all the living human flesh you can find’. The compiler is salvageable; this particular compiled command is not.
I believe my idea is closer to the second example than the first, though not nearly at the same level of harm. Based on the qualia computing post linked elsewhere, my most ethical option is to ‘be quiet about this one and hope I find a better idea to sell’.