(Disclaimer: My statements about SIAI are based upon my own views, and should in no way be interpreted as representing their stated or actual viewpoints on the subject matter. I am talking about my personal thoughts, feelings, and justifications, no one else’s. For official information, please check the SIAI website.)
Although this may not answer your questions, here are my reasons for supporting SIAI:
I want what they’re selling. I want to understand morality, intelligence, and consciousness. I want a true moral agent outside of my own thoughts, something that can help solve that awful, plaguing question, “Why?” I want something smarter than me that can understand and explain the universe, providing access to all the niches I might want to explore. I want something that will save me from death and pain and find a better way to live.
It’s the most logical next step. In the evolution of mankind, intelligence has been a driving force, so “more intelligent” seems like an incredibly good idea, a force multiplier of the highest order. No other solution captures my view of a proper future like friendly AI, not even “...in space!”
No one else cares about the big picture. (Nick Bostrom and the FHI excepted; if they came out against SIAI, I might change my view.) Every other organization seems to focus on the ‘generic now’, leaving unintended consequences to crush their efforts in the long run, or avoiding the true horrors of the world (pain, age, poverty) because they don’t even realize those are solvable. The ability to predict the future, through knowledge, understanding, and computational power, is the key to making that future a truly good place. The utility calculations are staggeringly in support of the longest view, such as that provided by SIAI.
It’s the simplest of the ‘good outcome’ possibilities. Everything else seems to depend on magical hand-waving, or on an overly simplistic view of how the world works, of what a single advance would mean in isolation rather than how it interacts with all the diverse improvements that happen alongside it, and of how real humans would react to them. Friendly AI provides ‘intelligence-waving’ that seems far more likely to work in a coherent fashion.
I don’t see anything else to give me hope. What else solves all potential problems at the same time, rather than leaving every advancement to be destroyed by that one failure mode you didn’t think of? Of course! Something that can think of those failure modes for you, and avoid them before you even knew they existed.
It’s cheap and easy to do so on a meaningful scale. It’s very easy to make up a large percentage of their budget; I personally provided more than 3 percent of their annual operating costs for this year, and I’m only upper middle class. They also have an extremely low barrier to entry (any amount of US dollars and a stamp, or a credit card, or PayPal).
They’re thinking about the same things I am. They’re providing a tribe like LessWrong, and they’re pushing, trying to expand human knowledge in the ways I think are most important, such as existential risk, humanity’s future, rationality, effective and realistic reversal of pain and suffering, etc.
I don’t think we have much time. The best predictions aren’t very good, but human power has increased to the point where there’s a real threat we’ll destroy ourselves within the next 100 years through nuclear, biological, or nanotech means, AI, wireheading, or ‘nerfing’ the world. Sitting on money and hoping for a better deal, or donating now to institutions that will compound into advancements generations in the future, seems like too little, too late.
I still put more money into savings accounts than I give to SIAI. I’m investing in myself and my own knowledge more than the purported future of humanity as they envision. I think it’s very likely SIAI will fail in their mission in every way. They’re just what’s left after a long process of elimination. Give me a better path and I’ll switch my donations. But I don’t see any other group that comes close.
No one else cares about the big picture.
I accept this, although I’m not sure the big picture should be a top priority right now. And as I wrote, I’m unable to survey the utility calculations at this point.
It’s the simplest of the ‘good outcome’ possibilities.
So you replace a simple, evidence-based view with one that may or may not rest on really shaky ideas such as an intelligence explosion.
I don’t see anything else to give me hope.
I think you overestimate the friendliness of friendly AI. Too bad Roko’s posts have been censored.
It’s cheap and easy to do so on a meaningful scale.
I want to believe.
They’re thinking about the same things I am.
Beware of those who agree with you?
I don’t think we have much time.
Maybe we do have enough time regarding AI and the kind of threats depicted on this site. Maybe we don’t have enough time regarding other kinds of threats.
I think it’s very likely SIAI will fail in their mission in every way. They’re just what’s left after a long process of elimination. Give me a better path and I’ll switch my donations. But I don’t see any other group that comes close.
I can accept that. But I’m unable to follow the process of elimination yet.
Who else is working directly on creating smarter-than-human intelligence with non-commercial goals? And if there are any, are they self-reflective enough to recognize its potential failure modes?
No one else cares about the big picture.
I accept this, although I’m not sure the big picture should be a top priority right now. And as I wrote, I’m unable to survey the utility calculations at this point.
I used something I developed, which I call Point-In-Time Utility, to guide my thinking on this matter. It basically boils down to ‘the longest view wins’, and I don’t see anyone else talking about potentially real pan-galactic empires.
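For a sense of why ‘the longest view wins’ in this kind of accounting, here is a toy expected-utility comparison. Every number in it is invented for illustration, and it is not a formalization of Point-In-Time Utility itself:

```python
# Toy comparison: a near-term intervention vs. a tiny reduction in
# existential risk. All figures are hypothetical placeholders.

near_term_lives = 1_000_000   # lives improved by a present-day charity (made up)
future_lives = 10 ** 16       # potential future people under a long view (made up)
xrisk_reduction = 1e-6        # assumed probability shift from one donation

near_term_value = near_term_lives
long_view_value = future_lives * xrisk_reduction

print(f"near term: {near_term_value:.1e}")  # near term: 1.0e+06
print(f"long view: {long_view_value:.1e}")  # long view: 1.0e+10
```

Under assumptions like these the long view dominates by orders of magnitude, which is also why the conclusion is so sensitive to how seriously one takes the astronomical numbers.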
It’s the simplest of the ‘good outcome’ possibilities.
So you replace a simple, evidence-based view with one that may or may not rest on really shaky ideas such as an intelligence explosion.
I don’t think it has to be an explosion at all, just smarter-than-human. I’m willing to take things one step at a time, if necessary. Though it seems unlikely we could build a smarter-than-human intelligence without understanding what intelligence is, and thus knowing where to tweak, even if only retroactively. That said, I consider intelligence tweaking itself to be a shaky idea, though I view the alternatives as failure modes.
I don’t see anything else to give me hope.
I think you overestimate the friendliness of friendly AI. Too bad Roko’s posts have been censored.
I think you overestimate my estimation of the friendliness of friendly AI. Note that at the end of my post I said it is very likely SIAI will fail. My hope total is fairly small. Roko deleted his own posts, and I was able to read the article Eliezer deleted since it was still in my RSS feed. It didn’t change my thinking on the matter; I’d heard arguments like it before.
They’re thinking about the same things I am.
Beware of those who agree with you?
Hi. I’m human. At least, last I checked. I didn’t say all my reasons were purely rational. This one is dangerous (reinforcement), but I do a lot of reading of opposing opinions as well, and there’s still a lot I disagree with regarding SIAI’s positions.
I don’t think we have much time.
Maybe we do have enough time regarding AI and the kind of threats depicted on this site. Maybe we don’t have enough time regarding other kinds of threats.
The latter is what I’m worried about. I see all of these threats as being developed simultaneously, in a race to see which one passes the threshold into reality first. I’m hoping that Friendly AI beats them.
I think it’s very likely SIAI will fail in their mission in every way. They’re just what’s left after a long process of elimination. Give me a better path and I’ll switch my donations. But I don’t see any other group that comes close.
I can accept that. But I’m unable to follow the process of elimination yet.
I haven’t seen you name any other organization you’re donating to, or any that might compete with SIAI. Aside from the Future of Humanity Institute or the Lifeboat Foundation, both of which seem more like theoretical study groups than action-takers, people just don’t seem to be working on these problems. Even the Methuselah Foundation is working on a very narrow portion which, although very useful and awesome if it succeeds, doesn’t guard against the threats we’re facing.
I don’t think it has to be an explosion at all, just smarter-than-human.
I feel there are too many assumptions in what you state to come up with estimations like a 1% probability of uFAI turning everything into paperclips.
I think you overestimate my estimation of the friendliness of friendly AI.
You are right, never mind what I said.
I see all of these threats as being developed simultaneously...
Yeah and how is their combined probability less worrying than that of AI? That doesn’t speak against the effectiveness of donating all to the SIAI of course. Creating your own God to fix the problems the imagined one can’t is indeed a promising and appealing idea, given it is feasible.
I haven’t seen you name any other organization you’re donating to, or any that might compete with SIAI.
I’m mainly concerned about my own well-being. If I were threatened by something near-term within Germany, that would be my top priority. So the matter is more complicated for me than for people who are merely concerned about the well-being of all beings.
As I said before, it is not my intention to discredit the SIAI but to steer some critical discussion for us non-expert, uneducated but concerned people.
That there are no others does not mean we shouldn’t be keen to create them, to establish competition. Nor does it mean we should do it at all at this point.
Absolutely agreed. Though I’m barely motivated enough to click on a PayPal link, so there isn’t much hope of my contributing to that effort. And I’d hope they’d be created in such a way as to expand total funding, rather than cannibalizing SIAI’s efforts.
I’m not sure about this.
Certainly there are other ways to look at value / utility / whatever and how to measure it. That’s why I mentioned I had a particular theory I was applying. I wouldn’t expect you to come to the same conclusions, since I haven’t fully outlined how it works. Sorry.
I feel there are too many assumptions in what you state to come up with estimations like a 1% probability of uFAI turning everything into paperclips.
I’m not sure what this is saying. I think UFAI is far more likely than FAI, and I also think that donating to SIAI contributes somewhat to UFAI, though I think it contributes more to FAI, such that in the race I was talking about, FAI should come out ahead. At least, that’s the theory. There may be no way to save us.
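A minimal sketch of that ‘race’ reasoning, with every probability invented for illustration: a donation can raise the chances of both FAI and UFAI and still be net positive, so long as it shifts the race toward FAI:

```python
# Toy model of the FAI/UFAI race. All probabilities are hypothetical.

p_fai, p_ufai = 0.05, 0.50            # baseline chances (made up)
fai_boost, ufai_boost = 0.02, 0.005   # assumed effects of a donation (made up)

def fai_wins_share(p_fai: float, p_ufai: float) -> float:
    """Chance FAI arrives first, given that one of the two happens."""
    return p_fai / (p_fai + p_ufai)

before = fai_wins_share(p_fai, p_ufai)
after = fai_wins_share(p_fai + fai_boost, p_ufai + ufai_boost)

print(round(before, 3))  # 0.091
print(round(after, 3))   # 0.122
```

The point of the sketch is only structural: even if UFAI stays far more likely overall, a donation that differentially helps FAI improves its share of the race.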
Yeah and how is their combined probability less worrying than that of AI?
AI is one of the things on the list racing against FAI. I think AI is actually the most dangerous of them, and from what I’ve read, so does Eliezer, which is why he’s working on that problem instead of, say, nanotech.
I’m mainly concerned about my own well-being.
I’ve mentioned before that I’m somewhat depressed, so I consider my philanthropy to be a good portion ‘lack of caring about self’ more than ‘being concerned about the well-being of all beings’. Again, a subtractive process.
As I said before, it is [...] my intention [...] to steer some critical discussion for us non-expert, uneducated but concerned people.
Thanks! I think that’s probably a good idea, though I would also appreciate more critical discussion from experts and educated people, a sort of technically minded anti-Summit without all the useless politics of the IEET and the like.
It’s more likely that the Klingon warbird can overpower the USS Enterprise.
I think AI is actually the most dangerous of them...
Why? Because EY told you? I’m not trying to make snide remarks here, but how people arrived at this conclusion is what I have been inquiring about in the first place.
...though I would also appreciate more critical discussion from experts and educated people...
Me too, but I was the only one around willing to start one at this point. That’s the sorry state of critical examination.
It’s more likely that the Klingon warbird can overpower the USS Enterprise.
To pick my own metaphor, it’s more likely that randomly chosen matter will form clumps of useless crap than a shiny new laptop. As defined, UFAI is likely the default state for AGI, which is one reason I put such low hope on our future. I call myself an optimistic pessimist: I think we’re going to create wonderful, cunning, incredibly powerful technology, and I think we’re going to misuse it to destroy ourselves.
Why [is AI the most dangerous threat]?
Because intelligent beings are the most awesome and scary things I’ve ever seen. The History Channel is a far better guide than Eliezer in that respect. And with all our intelligence and technology, I can’t see us holding back from trying to tweak intelligence itself. I view it as inevitable.
Me too [I also would appreciate more critical discussion from experts]
I’m hoping that the Visiting Fellows program and the papers written with the money from the latest Challenge will provide peer review in other respected venues.
What I was trying to show you with the Star Trek metaphor is that you are making estimations within a framework of ideas that I’m not convinced rests on firm ground.
Good, informative comment.
Yeah, that’s why I’m donating as well.
Sure, but why the SIAI?
I’m not a very good convincer. I’d suggest reading the original material.
Can we get some links up in here? I’m not putting the burden on you in particular, but I think more linkage would be helpful in this discussion.
This thread has Eliezer’s request for specific links, which appear in replies.