Really great to see this type of discussion show up here! I’ve been looking forward to folks posting more in this category.
The usual argument I see is “well, yes, you are absolutely stealing lifeblood from the people you exploit, but it’s worth it to end the horrors of ancient earth faster.” My question for people who agree with that statement: why are you confident that exploiting the market vulnerability of these underpaid folks trades off well enough to justify opting out of the FDT group of “people who only make trades that bring all parties up to baseline human non-suffering”? It seems to me that this “worth it” argument exhibits a kind of failure that typifies an unsafe AI: the exploiting org has a system-local causal impact on people. The vulnerability of the exploited market does make it possible to avoid fixing the negative parts of that impact, and there may be the possibility of converting the gains into aid for more people later; but wouldn’t it still be worthwhile to move the needle on the ratio of gains-from-trade, giving the folks you hire enough of the gains that they reach a baseline non-suffering human experience?
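To make the “move the needle” point concrete, here’s a minimal sketch of the condition I have in mind (all symbols here are my own illustrative notation, not anything from the original post). Let $G$ be the total gains from a trade, $\sigma \in [0,1]$ the share of those gains passed to the hired party, $w_0$ their outside-option income, and $b$ the baseline non-suffering threshold. The coalition’s constraint is just

$$w_0 + \sigma G \ge b \quad\Longleftrightarrow\quad \sigma \ge \frac{b - w_0}{G}.$$

So whenever $G \ge b - w_0$, there exists a split that clears the baseline for the hired party while still leaving the org positive surplus; the “worth it” trade-off is only genuinely forced when the total surplus is smaller than the gap to baseline.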