So what are our mid-term goals?

I know that Rationalism is a fairly diverse ideology, but, thankfully, very few rationalists seem to subscribe to the popular belief that allowing someone you can't see to suffer through your inaction is somehow different from walking on while some kid you don't know bleeds out on the pavement. So, if most of us are consequentialist altruists, what precisely should we be doing? SIAI is working on the Silver-Bullet project, for example, but what about the rest of us? Giles is attempting to form an altruist community, and I'm sure there are a number of other scattered projects that members are working on independently. (I searched for any previous discussions, but didn't find any. If there are some I missed, please send me the links.)

However, a lot of the community's mid-term plans seem to be riding on the success of the SIAI project, and although I am not qualified to judge its probability of success, having no other plans in the event it fails seems overly hopeful, especially when most of us don't have any skills that would contribute to the Friendly AI project anyway. There are of course several short-term projects, the Rationalist Boot-Camps for example, but they don't currently seem to be a main focus.

I suppose what I'm trying to ask, without stepping on too many SIAI researchers' toes, is this: what should non-researchers who want to help be doing in case the project doesn't work, and why?