For the most part, if you have a reason to share some information, you should share it. For the most part, trying to make a bunch of information boundaries will cripple your ability to do anything useful, and doesn’t avert much bad stuff. Your amazing strategic insights about how we’re all swimming in a sea of hyperstitious memetic warfare and therefore we can control the future by blah blah are usually false, and not actually that big if true because in general things are more in equilibrium than they seem and more driven by forces you’re not controlling than they seem. The more open I am about things I thought I should be cagey about, the more I find no one cares. Unless you’ve got a lot of attention for some reason, roughly no one cares about what you think enough to do much of anything in response to what you think.
There are obvious exceptions, like not sharing other people’s personal info in public or not sharing your garage nuke technology.
Distinguish [trust to not harm you, e.g. by misusing info you’ve shared] from [trust to meet your efforts toward a shared goal]. The latter is generally more important than the former, because lifeforce is a pretty limited resource, so you have to know where to invest yours.
Your amazing strategic insights about how we’re all swimming in a sea of hyperstitious memetic warfare and therefore we can control the future by blah blah are usually false
Thank you for taking the time to reply. Is this referring to my insights in particular, or to something similar somebody else said? My views are more nuanced.
I believe strategic actors often have lots of free choices to make that are not forced by the situation, and that actors are often unaware of the full action space. But if you manage to get their attention you could get them to copy your preferred choices instead.
Examples of things that narrow your action space: competitors, legality and cultural norms, physical and engineering limitations
(A lot of this unawareness is rational: often their attention, money, and time are genuinely best spent elsewhere, so they will copy the default choices. For instance, if you’re running a tech SaaS startup with 2 years of runway, you’re probably wasting your time if you spend 6 months studying and innovating in UI/UX. But someone like me could go work on some obscure GitHub project for 6 months, and suddenly all the pay-to-win Android games implement payments using one specific UX and not another.)
I also believe a bunch of the stuff on the Extropians mailing list is high-leverage information. The first really successful BCI startup or gene drive startup will have a lot of free choices to make that will affect everyone else.
Unless you’ve got a lot of attention for some reason
I feel getting people’s attention is hard but not necessarily the hardest part of executing something, if you’re agentic and spend time on it. I’ve already had conversations with multiple billionaires. If tomorrow I decided I wanted to get 5 minutes of Bill Gates’ attention and were willing to spend a year on this, I’d have a good chance of success? I’m unsure of my own opinions here though, and this seems like a crux.
Distinguish [trust to not harm you, e.g. by misusing info you’ve shared] from [trust to meet your efforts toward a shared goal]. The latter is generally more important than the former, because lifeforce is a pretty limited resource, so you have to know where to invest yours.
Thank you, this seems important. One of the problems with a lot of EA discussions is that very small things can move a person from one bracket to another.
Small changes in actions, small changes in epistemic and moral assumptions. (Plus this gets blown up to large scales when dealing with lots of power and attention: two EA billionaires who are 99% the same still wouldn’t necessarily get along.)
That’s why I often wonder whether I would even be able to coordinate with my clones, plus or minus some epsilon changes.
Example: I’m starting an AGI startup; yay, I’m your friend. I’m not investing in safety; whoops, enemy again. I start investing a lot in safety; friend again. I invest in a different branch of alignment research than the one you prefer; neutral. Turns out my bet was right and a lot of useful alignment work is produced; friend again. You later have some experiences that turn you into a negative utilitarian, and I maintain my consistent view of not caring about this; whoops, I’m again kinda neutral to you.
I don’t think this problem is specific to AGI; it’s common to all the really powerful futuristic technologies discussed in EA/rat/transhumanist spaces. I’m curious what you think about all this.
Is this referring to my insights in particular, or to something similar somebody else said?
It’s meant to gesture at a category of thinking, a given instance of which may or may not be worthwhile or interesting, but which leads people to be very overly worried about the consequences of spreading the ideas involved, compared to how bad the consequences actually are. For example, sometimes [people who take hypothetical possibilities very seriously] newly think of something, such as the potential of BCIs or the potential of thinking in such-and-such unconventional way or whatever. Then they implicitly reason like this: There’s a bunch of potential here; previously I hadn’t thought of this idea; previously I hadn’t pursued efforts related to this idea; now I’ve thought of this idea; the fact that I just now thought of the idea and hadn’t previously explains away the fact that I haven’t previously pursued related efforts; so probably my straightforward inside view of why there’s potential here is correct or at least a good rough draft guess; which means there are huge implications here; and the reason others aren’t pursuing related efforts is probably that they didn’t think of the idea; and since the idea is powerful, I shouldn’t share it.
Usually some but not all of these inferences are correct. Often the neglectedness is mainly because others don’t believe in hypothetical possibilities, not because no one has thought of it. Rarely does the final inference go through.
I’ve already had conversations with multiple billionaires.
I would think the problem here would be failing at transferring the relevant info, not transferring too much info!
But if you manage to get their attention you could get them to copy your preferred choices instead.
The only morally acceptable thing to copy in this way is an orientation against making decisions this way.
Hmm, I get what you’re saying, but my whole claim is that yes, a good researcher can get the whole inference to go through at least some of the time.
Maybe we need to discuss actual examples.
I would think the problem here would be failing at transferring the relevant info, not transferring too much info!
I agree the first problem is hard. My bigger worry is the second problem—transferring wrong info rather than too much.
For instance, I might write an article titled “3 types of BCIs and 50 cool things you can do with them”. Three years later I realise “holy shit, some of those things I thought were cool could actually hurt lots of people (but provide gain to the investor/founder)”, but by then it’s too late, because some founder of a BCI startup has already read my article and been inspired by it.
The only morally acceptable thing to copy in this way is an orientation against making decisions this way.
This seems weirdly adversarial; maybe I didn’t communicate my point well. You use a toothbrush somebody else designed, you live in a home someone else designed, you make telephone calls and use telephone numbers built on social technology someone else figured out, you work a 40-hour work week because someone decided creating a law against overwork was a good idea, etc.
I could go talk to a toothbrush manufacturer and show them a cheaper polymer or a better design, and it could affect which brush you use, for example. I might not even have to talk to the same manufacturer you buy from, since manufacturers will all copy each other once one of them has something cool.
This also applies to thoughts: if I find a superior (or even just different) way of thinking about economics or market research or life philosophy or how best to tie your shoelaces, you might start thinking in patterns similar to mine once lots of people copy my thought pattern.
The examples in this comment are about “oops I had an idea that sounds good but is accidentally bad”. That’s a reasonable thing to worry about but doesn’t seem like the thing you were actually asking about. You wrote:
I don’t expect to be particularly good at coordinating with my perfect clones for example. I’m sure if you put me in a room with my perfect clone and a source of massive power (such as a controllable ASI), we’d beat each other half to death fighting for it.
This seems much more central, and indicates a major problem.
You are right. I’ve been confused about why I find it so hard to trust people, and this discussion has made me a little less confused. There seem to be multiple reasons. Thank you for discussing so far.
I agree that that seems to be the biggest problem: even if someone shared all my beliefs and values, I would struggle to coordinate with them right now.
I am also dealing with a bunch of painful personal shit right now that might be affecting my ability to trust people or lead a happy/meaningful life. I don’t want to share too much about that on a public forum. (It could actually fuck up my life if I did.)
I know the standard advice is to go fix my personal shit before I think about the future of the world, but at some point I do need to figure out who to trust or not, and it’s going to have implications for both my personal and professional life; I can’t just cleanly separate the two.