As a participant, I probably don’t fit the “typical” AISC profile: I’m a writer, not a researcher (even though I’ve got a Ph.D. in symbolic AI), and I’m at the end of my career, not the beginning (I’m 61). I’m part of AISC only because this time, the camp’s agenda included a “non-serious” topic: designing an alignment tabletop role-playing game (based on an idea by Daniel Kokotajlo). Is this a good thing?
For me, it certainly was. I came to AISC mostly to learn and get connections into the AI alignment community, and this worked very well. I feel like I know a lot less about alignment than I thought I knew at the start of the camp, which is a sure sign that I learned a lot. And I made a lot of great and inspiring contacts, even friendships, some of which I think will stay long after the camp is over. So I’m extremely happy and grateful that I had the opportunity to participate.
But what use am I to AI alignment? Well, together with another participant, Jan Kirchner, I did try to contribute an idea, but I’m not sure how helpful that is. However, one thing I can do: as a writer, I can try to raise awareness of the problem. That is the reason I participated in the first place. I see a huge gap between the importance and urgency of AI alignment and the attention it gets outside the community, among people who could probably do something about it, e.g. politicians and “established” scientists. For example, in Germany we have the “Institut für Technikfolgenabschätzung” (ITAS, Institute for Technology Assessment), which claims on its website to be the leading institute for technology assessment. I asked them whether they are working on AI alignment. Apparently, they aren’t even aware that there IS a problem. The same seems to be true for the scientific establishment in the rest of Germany and the EU.
You may question how helpful it is to get people like them to work on alignment. But I think that if we hope to solve the problem in time, we need as much attention on it as possible. There are some smart people at ITAS and elsewhere, and it would be great to get them to work on the problem, even if it seems a bit late. Maybe we need just one brilliant idea, and the more people are searching for it, the more likely it is to be found. It could also be that there is no solution, in which case it is even more important that as many people as possible agree on that; the more established and accepted they are, the better. If we need regulation, or want to implement a global ban or freeze on AGI research, we need as much support as possible.
So that’s what I’m trying to do, with my limited outreach outside of the AI alignment community. My participation in AISC taught me many things and helped me get my message straight. A lot of it will probably find its way into my next novel. And maybe our tabletop RPG will also help spread the message. All in all, I think it was a good idea to broaden the scope of AISC a bit, and I recommend doing it again. Thank you very much, Remmelt, Daniel, and all the others for taking me in!
I think it’s great that you’re thinking about how you can use your writing skills to further alignment. If you’re thinking about contacting politicians or people who are famous, I’d suggest reaching out to CEA’s community health team first for advice on how to ensure this goes well.
Thank you, I will!