I agree with this if you do not have any significant power or influence. As of now I am mostly worried about making wrong choices that will really hurt me (and the world) years from now, in the best-case scenario where I do have a lot of impact.
For instance, Nick Bostrom's email, which leaked years later. Or the many moral and epistemic assumptions made by the original EA and Bay Area crowd that still cause people to sometimes make the world a more uncertain place. (I'm not saying uncertainty is bad, but it's very often the case, imo, that well-intentioned people cause harm inside EA.)
The actual dangerous idea turned out to be that AGI was a dangerous idea, not any specific thing.
I agree popularizing the idea of AGI itself was dangerous; it's possible that if Yudkowsky had kept quiet on the Extropians mailing list and disappeared to the woods instead, DeepMind and OpenAI would not exist today.
My worry is that this also applies to literally every other technology on the Extropians mailing list or anything similar, be it sulfur geoengineering or nanotech or gene drives or engineered microbiomes or nonlethal incapacitation agents or at least ten other things. I see designing good culture and institutions for all of this as massively unsolved.
If you are planning something radical enough to actually get people’s attention (e.g. breaking laws, using violence, fraud of various kinds, etc) then you would want to be a lot more careful who you tell, but also—don’t do that?
Would you consider it breaking the law to train GPT-4 on copyrighted text? (Or any of the N GPT-4 startups also crawling the web right now?) What about Satoshi creating Bitcoin? What about GPT-4 email spam? What about writing a doxxing tool? What about starting a dating app to collect user databases and curry favour with authoritarian governments? What about a better translation tool that actually, permanently erases all language divides on Earth and alters geopolitics as a result? What about working on improving lie detection? What about distributing libertarian ideologies and ham radios and gunpowder in countries that currently do not allow them?
What about starting a gene drive startup that massively hypes the upside and steamrolls safety people the way Sama did for AGI? Like, it is obvious to me that if I wanted to become one of the ten most powerful people in history, this is the only move, and the only real excuse I have for not doing it is that I am not willing to work that insanely hard. (Plus vague "vibes" that if I don't maintain complete control, and someone else gets control or there's a geopolitical arms race to stockpile gene drives, the world could end up worse for me than if I had never started the startup at all.)
These are just a bunch of live threads in my mind right now. Bullet645 having researched any one of these ideas for two years looks more dangerous than Bullet645 just having mentioned them in passing, like I did just now.
If you've read my comments on this post and still think it's basically fine for me to discuss whatever I want, low-filter, with a close circle, I would like to know your reasoning. I'm happy to discuss over email as well.