The only real answers at this point seem like mass public advocacy
If AI timelines are short, then I wouldn’t focus on public advocacy, but on decision-makers. Public opinion changes slowly, and even if advocacy succeeds, it may interfere with the ability of experts to make decisions.
I would also suggest that someone should focus on within-EA advocacy too (whilst being open about any possible limitations or uncertainties in their understanding).
Sorry, I got tricked:
petrov_day_admin_account September 26, 2020 11:26 AM
You are part of a smaller group of 30 users who has been selected for the second part of this experiment. In order for the website not to go down, at least 5 of these selected users must enter their codes within 30 minutes of receiving this message, and at least 20 of these users must enter their codes within 6 hours of receiving the message. To keep the site up, please enter your codes as soon as possible. You will be asked to complete a short survey afterwards.
I am very sorry to hear about your experiences. I hope you’ve found peace and that the organizations can take your experiences on board.
On one hand you seem to want there to be more open discussion around mental health, whilst on the other you are criticising MIRI and CFAR for having people with mental health issues in their orbit. These positions seem somewhat in tension with each other.
I think one of the factors is that the mission itself is stressful. For example, air traffic control and policing are high-stress careers, yet we need both.
Another issue is that rationality is in some ways more welcoming of (at least some subset of) people whom society would consider weird, especially since certain conditions can be paired with great insight or drive. It seems like the less a community appreciates the silver linings of mental health issues, the better it would score according to your metric.
Regarding secrecy, I’d prefer AI groups to err too far towards maintaining precautions around info-hazards rather than too little. (I’m only referring to technical research, not misbehaviour.) I think it’s perfectly valid for donors to decide that they aren’t going to give money without transparency, but there should also be respect for people who are willing to trust and make that leap of faith.
It’s not the environment for everyone, but then again, the same could be said about the famously secretive Apple or, say, working in national security. That seems less like an organizational problem than one of personal fit (other alignment organizations and grant opportunities seem less secretive). One question: would your position have changed if you felt that they had been more upfront about the stresses of the job?
Another point: I put the probability of us being screwed without Eliezer very high. This is completely different from putting him on a pedestal and pretending he’s perfect or that no-one can ever exceed his talents. Rather, my model is that without him it likely would have taken several more years for AI safety research to really kick off, and in this game even a few years can really matter. Other people will have a more detailed model of the history than I do and may come to different conclusions, but it’s not really a claim that’s very far out there. Of course, I don’t know who said it or the context, so it’s hard for me to update on this.
Regarding whether people today are better philosophers than Kant, we have to take into account that we have access to much more information. Basic scientific knowledge and an understanding of computers resolve many of the disputes that philosophers used to argue over. Further, even if someone hasn’t read Kant directly, they may very well have read people who have read Kant. So I actually think there would be a huge number of people floating around who are better philosophers than Kant, given these unfair advantages. Of course, it’s an entirely different question if we gave Kant access to today’s knowledge, but then again, given population growth, it wouldn’t surprise me if a significant number of people would still surpass him.
One point I strongly agree with you on is that rationalists should pay more attention to philosophy. In fact, my most recent theory of logical counterfactuals was partly influenced by Kant, although I haven’t had time to read his Critique of Pure Reason yet.
Look at the sidebar here. Is it anywhere near optimal? I don’t think so. Surely it should be encouraging people to undertake logical first steps towards becoming involved in alignment (e.g. the AGI Safety Fundamentals course, 80,000 Hours coaching, or booking a call with AI Safety Support).
In a few weeks, I’ll probably spend a few hours setting up a website for AI Safety Australia and NZ (a prospective org to do local movement-building). Lots of people have web-development skills, but you don’t even need them with tools like WordPress. I’ve also been going through recent threads and encouraging people who’ve expressed interest in doing something about this, but are unsure what to do, to consider a few logical next steps.
Or maybe just reading about safety and answering questions on the Stampy Wiki (https://stampy.ai)?
Or, failing everything else, just do some local EA movement building and make sure to run a few safety events.
I don’t know, it just seems like there’s low-hanging fruit all over the place. I’m not claiming these are huge impacts, but they beat doing nothing.