As a data point for why this might be occurring: I may be an outlier, but I’ve not had much luck getting replies or useful dialogue from X-risk-related organisations in response to my attempts at communication.
My current expectation is that if I apply I won’t get a response and I will have wasted my time composing an application. I won’t get any more information than I previously had.
If this isn’t just me, you might want to encourage organisations to be more communicative.
My view is more or less the one Eliezer points to here:
The big big problem is, “Nobody knows how to make the nice AI.” You ask people how to do it, they either don’t give you any answers or they give you answers that I can shoot down in 30 seconds as a result of having worked in this field for longer than five minutes.
There are probably no fire alarms for “nice AI designs” either, just like there are no fire alarms for AI in general.
Why should we expect people to share “nice AI designs”?
For longer time frames where there might be visible development, the public needs to trust the political regulators of AI to have their interests at heart. Otherwise they may try to make it a party-political issue, which I think would be terrible for sane global regulation.
I’ve come across pretty strong emotion when talking about AGI, even when talking about safety, and I suspect it will come bubbling to the fore more as time goes by.
It may also help the morale of the thoughtful people trying to make safe AI.
I think part of the problem is that corporations are the main source of innovation, and they have incentives to insert themselves into the things they invent so that they can extract tolls and sustain their business.
Compare email and Facebook Messenger as two different types of invention with different abilities to extract tolls. However, if you can’t extract a toll, it is unlikely you can build a business around innovation in an area.
I had been thinking about metrics for measuring progress towards shared, agreed outcomes as a method of coordination between potentially competitive powers, to avoid arms races.
I passed the draft around to a couple of the usual suspects in AI metrics/risk mitigation in hopes of finding collaborators, but no joy. I learnt that Jack Clark of OpenAI is looking at that kind of thing as well and is a lot better positioned to act on it, so I have hopes around that.
Moving on from that, I’m thinking that we might need a broad base of support from people (depending upon the scenario), so being able to explain how people could still have meaningful lives post-AI is important for building that support. So I’ve been thinking about that.
To me, closed-loop living is impossible not due to taxes but due to the desired technology level. I could probably go buy a plot of land and try to recreate Iron Age technology, but most likely I would injure myself, need medical attention and have to re-enter society.
Taxes aren’t an impediment to closed-loop living either, as long as the waste from the tax is returned. If you have land with a surplus of sunlight or other energy, you can take in waste and create useful things with it (food, etc.). The greater loop of taxes has to be closed as well as the lesser loop.
From an infosec point of view, you tend to rely on responsible disclosure. That is, you tell the people who will be most affected or who can solve the problem for others; they create countermeasures; and then you release those countermeasures to everyone else (which gives away the vulnerability as well), who should be in a position to quickly update/patch.
Otherwise you are relying on security through obscurity. People may be vulnerable and not know it.
There doesn’t seem to be a similar pipeline for non-computer security threats.
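To make that pipeline concrete, here is a minimal sketch of the stages described above as a simple ordered state machine. The stage names are my own illustrative labels, not any official standard:

```python
from enum import Enum, auto

class DisclosureStage(Enum):
    """Stages of a responsible-disclosure pipeline (labels are illustrative)."""
    PRIVATE_REPORT = auto()       # tell those most affected / able to fix it
    COUNTERMEASURE_DEV = auto()   # they develop a patch or mitigation
    COORDINATED_RELEASE = auto()  # countermeasure shipped to everyone else
    PUBLIC_DISCLOSURE = auto()    # details go public; everyone should patch quickly

# The pipeline is strictly ordered: publishing details before a
# countermeasure exists just hands attackers a free vulnerability.
PIPELINE = [
    DisclosureStage.PRIVATE_REPORT,
    DisclosureStage.COUNTERMEASURE_DEV,
    DisclosureStage.COORDINATED_RELEASE,
    DisclosureStage.PUBLIC_DISCLOSURE,
]
```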
Similarly, it is not irrational to want to form a cartel or political in-group. Quite the opposite: it’s like the concept of an economic moat, but for humans.
And so you get the patriarchy, and the reaction to it, feminism. This leads to the culture wars that we have today. So it is locally optimal but leads to problems in the greater system.
How do we escape this kind of trap?
I’m reminded of the quote by George Bernard Shaw:
“The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”
I think it would be interesting to look at the reasons and occasions not to follow “standard” incentives.
I’ve been re-reading a sci-fi book with an interesting existential-risk scenario: most people are going to die, but some may survive.
If you are a person on Earth in the book, you have the choice of helping out and definitely dying, or desperately trying to be one of the ones who survive (even if you personally might not be the best person to help humanity survive).
In that situation I would definitely be in the “helping people better suited to surviving” camp: following orders, because the situation is too complex to keep in one person’s head. Danger is fine, because you are literally a dead person walking.
It becomes harder when the danger isn’t so clear and present. I’ll think about it a bit more.
The title of the book is frirarirf (rot13)
rm -f double-post
On Facebook, she asked my advice on how to do creative work on AI safety. I gave her advice as best I could.
She seemed earnest and nice. I am sorry for your loss.
Dulce et Decorum Est Pro Humanitate Mori?
As you might be able to tell from the paraphrased quote, I’ve been taught about some of the bad things that can happen when this is taken too far.
Therefore the important thing is how we, personally, would engage with that decision if it came from outside.
For me it depends on my opinion of the people on the outside. There are four things I weigh:
Epistemic rigour. With lots of crucial considerations around existential risk, do I believe that the outside has good views on the state of the world? If they do not, they/I may be doing more harm than good.
Better equilibria. Are they trying to move to better equilibria? Do they believe in winner-take-all, or are they trying to plausibly pre-commit to sharing the winnings (with other people who are trying to plausibly pre-commit to sharing the winnings)? Are they trying to avoid the race to the bottom? It doesn’t matter if they can’t, but not trying at all means that they may miss out on better outcomes.
Feedback mechanisms. How is the outside trying to make itself better? It may not be good enough on the first two items, but does it have feedback mechanisms to improve them?
Moral uncertainty. What is their opinion on moral theory? They/I may do some truly terrible things if they are too sure of themselves.
My likelihood of helping humanity when following orders stems from those considerations. It is a weighty decision.
I’m interested in seeing where you go from here. With the old LessWrong demographic, I would predict you would struggle, because cryonics/life extension is core to many people’s identities.
I’m not so sure about current LW, though. The fraction of the EA crowd that is total utilitarian probably won’t be receptive.
I’m curious what it is that your intuitions do value highly. It might be better to start with that.
Has anyone done work on an AI readiness index? This could track many things, like the state of AI safety research and the roll-out of policy across the globe. It might have to be a bit Doomsday Clock-ish (going backwards and forwards as we understand more), but it might help to have a central place to collect the knowledge.
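As a rough illustration of what I have in mind, such an index could just be a weighted composite of a few tracked quantities, revised up or down as assessments change. The component names, scores and weights below are entirely made-up placeholders, not real assessments:

```python
# A minimal sketch of a composite "AI readiness" index.
# All component names, scores and weights are hypothetical.

# Each component is scored 0-1 by whoever maintains the index.
scores = {
    "safety_research_progress": 0.3,   # state of technical AI safety work
    "policy_rollout": 0.2,             # how widely sane policy has spread
    "coordination_mechanisms": 0.1,    # treaties, monitoring, agreed metrics
    "public_understanding": 0.4,       # quality of the public conversation
}

weights = {
    "safety_research_progress": 0.4,
    "policy_rollout": 0.3,
    "coordination_mechanisms": 0.2,
    "public_understanding": 0.1,
}

def readiness_index(scores, weights):
    """Weighted average of component scores; moves backwards and
    forwards as our understanding changes, Doomsday Clock style."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[k] * weights[k] for k in scores)

print(f"readiness: {readiness_index(scores, weights):.2f}")
```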
Out of curiosity what is the upper bound on impact?
Do you think AI-assisted humanity is in a worse situation than humanity is today?
Lots of people involved in thinking about AI seem to be in a zero-sum, winner-take-all mode. E.g. Macron.
I think there will be significant founder effects from the strategies of the people who create AGI: the development of AGI will be used as an example of what types of strategies win during future technological development. Deliberation may tell people that there are better equilibria, but empiricism may tell them that those equilibria are too hard to reach.
Currently the positive-sum norm of free exchange of scientific knowledge is being tested. For good reasons, perhaps? But I worry for the world if not sharing knowledge gets cemented as the new norm. It will lead to more arms races and make coordination on the important problems harder. So if the creation of AI leads to the destruction of science as we know it, I think we might be in a worse position.
I, perhaps naively, don’t think it has to be that way.
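As a toy illustration of why the sharing norm is fragile, the situation resembles a prisoner’s dilemma: mutual sharing is positive-sum, but each side is individually tempted to hoard. The payoff numbers here are invented for the example:

```python
# A toy payoff matrix for two research powers deciding whether to
# share scientific knowledge. Numbers are invented for illustration.
# Payoffs are (row player, column player).

payoffs = {
    ("share", "share"): (3, 3),  # positive-sum: both build on each other
    ("share", "hoard"): (0, 4),  # the hoarder free-rides on the sharer
    ("hoard", "share"): (4, 0),
    ("hoard", "hoard"): (1, 1),  # arms race: duplicated, secret effort
}

def best_response(my_options, their_choice):
    """What a narrowly self-interested player picks, given the other's move."""
    return max(my_options, key=lambda mine: payoffs[(mine, their_choice)][0])

# Hoarding is dominant for a myopic player, even though (share, share)
# beats (hoard, hoard) for both -- hence the value of credible
# pre-commitments to keep the positive-sum norm alive.
for theirs in ("share", "hoard"):
    print(f"if they {theirs}, best response: {best_response(('share', 'hoard'), theirs)}")
```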
Interesting. I didn’t know Russia’s defences had degraded so much.
I’m curious what type of nuclear advantage you think America has. It is still bound by MAD, due to nukes on submarines.
I think the US didn’t have a sufficient intelligence capability to know where to inspect. Take Israel as an example.
The CIA was saying in 1968 that “...Israel might undertake a nuclear weapons program in the next several years”, when Israel had already built a bomb in 1966.
While I think the US could have threatened the Soviets into not producing nuclear weapons at that point in time, I have trouble seeing how the US could have put in place the requisite controls/espionage to prevent India, China, the UK, etc. from developing nuclear weapons later on.