With this level of funding, we would be able to begin building an entirely new AI alignment research team working in parallel to our current team, working on different problems and taking a different approach. Our current technical agenda is not the only way to approach the challenges that lie ahead, and we would be thrilled to get the opportunity to spark a second research group.
Hi Nate, can you briefly describe this second approach? (Not that I have $6M, but I’m curious what other FAI approach MIRI considers promising.)
On another note, do you know anything about Elon Musk possibly having changed his mind about the threat of AI and how that might affect future funding of work in this area? From this report of a panel discussion at ICML 2015:
Apparently Hassabis of DeepMind has been at the core of recent AI fear from prominent figures such as Elon Musk, Stephen Hawking and Bill Gates. Hassabis introduced AI to Musk, which may have alarmed him. However, in recent months, Hassabis has convinced Musk, and also had a three-hour-long chat with Hawking about this. According to him, Hawking is less worried now. However, he emphasized that we must be ready, not fear, for the future.
Hi Nate, can you briefly describe this second approach?
Yep! This is a question we’ve gotten a few times already, and the answer will likely appear in a blog post later in the fundraiser. In the interim, the short version is that there are a few different promising candidates for a second approach, and we haven’t settled yet on exactly which would be next in line. (This is one of the reasons why our plans extend beyond $6M.) I can say that the new candidates would still be aimed at ensuring that the creation of human-programmed AGI goes well—the other pathways (whole-brain emulation, etc.) are very important, but they aren’t within our purview. It’s not clear yet whether we’d focus on new direct approaches to the technical problems (such as Paul Christiano’s “can we reduce this problem to reliable predictions about human behavior” approach) or whether we’d focus on projects that would be particularly exciting to modern AI professionals or modern security professionals, in an attempt to build stronger bridges to academia.
In fact, I’d actually be quite curious about which approaches you think are the most promising before deciding.
On another note, do you know anything about Elon Musk possibly having changed his mind about the threat of AI and how that might affect future funding of work in this area?
I wasn’t at the ICML workshop, so I can’t say much about how that summary was meant to be interpreted. That said, I wouldn’t read too much into it: “Hassabis has convinced Musk” doesn’t tell us much about what Demis claimed. Best I can guess from the context is that he said he convinced Elon that overhyping concern about AI could be harmful, but it’s hard to be sure.
I can say, however, that I’m in contact with both Elon and Demis, and that I’m not currently worried about Elon disappearing into the mist :-)
In fact, I’d actually be quite curious about which approaches you think are the most promising before deciding.
Thanks for asking! Like several other LWers, I think that with the rapid advances in ANN-based AI, it looks like pretty clear sailing for artificial neural networks to become the first form of AGI. With the recent FLI grants there are now a number of teams working on machine learning of human values and on making neural networks safer, but nobody so far is taking the long view and asking what happens when an ANN-based AGI becomes very powerful but doesn’t exactly share our values or philosophical views. It would be great if there were a team working on those longer-term problems, like how to deal with the inevitable differences between the values an AGI has learned and our actual values, and understanding metaphilosophy well enough to be able to teach an ANN-based AGI how to “do philosophy”.
I can say, however, that I’m in contact with both Elon and Demis, and that I’m not currently worried about Elon disappearing into the mist :-)
That’s good. :) BTW, are you familiar with Demis’s views? From various news articles quoting him, he comes across as quite complacent but I wonder if that’s a mistaken impression or if he has different private views.
Thanks for asking! Like several other LWers, I think that with the rapid advances in ANN-based AI, it looks like pretty clear sailing for artificial neural networks to become the first form of AGI.
That paragraph almost makes sense, but it seems to be missing a key sentence or two. Hassabis is “at the core of recent AI fear” and introduced AI to Musk, but then Hassabis changed his mind and proceeded to undo his previous influence? It’s hard to imagine how those talks went: “Oh yeah, you know this whole AI risk thing I got you worried about? I was wrong, it’s no big deal now.”
It seems more likely to me that Hassabis said something like “with things as they stand now, a bad end seems most likely.” Musk and the others start to take the fear seriously, act on it, and then talk to Hassabis again, and he says “with things as they stand now, a bad end seems likely to be avoided.”
In particular, we seem to have moved from a state where AI risk needed more publicity to a state where AI risk has the correct amount of publicity, and more might be actively harmful.