What exactly I would advise doing depends on the scale of the money. I am assuming we are talking here about a few million dollars of exposure, not $50M+:
- Diversify enough away from AI that you really genuinely know you will be personally fine even if all the AI stuff goes to zero (e.g. probably something like $2M-$3M)
- Cultivate at least a few people you talk to about big career decisions who seem multiple steps removed from similarly strong incentives
- Make public statements to the effect of being opposed to AI advancing rapidly. This has a few positive effects, I think:
  - It makes it easier for you to talk about this later, when you might end up in a more pressured position (e.g. one where you could take actions that more seriously affect overall AI progress, such as via work on regulation)
  - It reduces the degree to which you end up in relationships based on false premises, e.g. because people assumed you would be in favor of rapid AI progress given your exposure (if you e.g. hold substantial stock in an AI company)
  - (To be clear, holding public positions like this isn’t everyone’s jam, and many people prefer holding no positions strongly in public)
- See whether you can use your wealth to set up incentives for people to argue with you, or to observe people arguing about issues you care about. I like a lot about the way the S-Process is structured here.
  - It’s easy to do this in a way that ends up pretty sycophantic. I think Jaan’s stuff has generally not felt very sycophantic, in part for process reasons, and in part because he has selected for non-sycophancy.
- I haven’t thought that hard about it, but I wonder whether you could also get some exposure to worlds where AI progress gets relatively suddenly halted as a result of regulation or other forms of public pressure. I can’t immediately think of a great trade here, since trading on events like this is often surprisingly hard to do well, but I can imagine there being something good in this space.
- Related to the second bullet point, a thing a few of my friends do is hold semi-regular “career panels”, where they meet with people they trust and who seem to them like very independent thinkers to discuss their careers, including high-level concerns about whether what they are doing might turn out badly for the world (as well as other failure modes). This seems pretty good to me, just as a basic social institution.