Thanks for writing! I don’t see an actual answer to the question asked in the beginning—“Given the ongoing history of continually increasing compute power, what is the maximum compute power that might be available to train ML models in the coming years?” Did I miss it?
Does anybody know what is the best mask that they’ll allow you to wear on an airplane? Has anyone worn a P100 with the exhale valve covered, or do they not allow that?
And would a doctor’s note make any difference toward them allowing you to wear something like a Versaflo or other PAPR?
hopefully realize it’s a bad idea or don’t have a morality that allows this
To expand on this: https://www.nickbostrom.com/papers/unilateralist.pdf
What do you mean by “immune erosion”? Is this different from “immune evasion” and “immune escape”? I can’t find any explanation on Google—is this a standard term?
If it’s a normal distribution, what’s the standard deviation?
For software development, rewriting the code from scratch is typically a bad idea. It may be helpful to see how well the arguments in that article apply to your domain.
Context for anyone who’s not aware:
Nerd sniping is a slang term that describes a particularly interesting problem that is presented to a nerd, often a physicist, tech geek or mathematician. The nerd stops all activity to devote attention to solving the problem, often at his or her own peril.
If MIRI hasn’t already, it seems to me like it’d be a good idea to try reaching out. It also seems worth being at least a little bit strategic about it as opposed to, say, a cold email.
+1 especially to this—surely MIRI or a similar x-risk org could get a warm introduction to potential top researchers through their network, via someone willing to vouch for them.
On one hand, meditation—when done without all the baggage, hypothetically—seems like a useful tool. On the other hand, it tends to invite all that baggage anyway, because the baggage is embedded in the books, the practicing communities, etc.
I think meditation should be treated similarly to psychedelics—even for meditators who don’t think of it in terms of anything supernatural, it can still have very large and unpredictable effects on the mind. The more extreme the style of meditation (e.g. silent retreats), the more likely this sort of thing is.
Any subgroups heavily using meditation seem likely to have the same problems as the ones Eliezer identified for psychedelics/woo/supernaturalism.
Possible small correction: GPT-2 to GPT-3 was 16 months, not 6. The GPT-2 paper was published in February 2019 and the GPT-3 paper was published in June 2020.
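For anyone who wants to sanity-check the month count, here is a minimal sketch in Python using the dates stated above (the days of the month are placeholders; they don’t affect a whole-month count):

```python
from datetime import date

# Dates as stated in the comment above: GPT-2 paper (Feb 2019),
# GPT-3 paper (June 2020). Day-of-month is a placeholder.
gpt2 = date(2019, 2, 1)
gpt3 = date(2020, 6, 1)

# Whole-month difference, ignoring days.
months = (gpt3.year - gpt2.year) * 12 + (gpt3.month - gpt2.month)
print(months)  # -> 16
```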
I can’t tell from the descriptions, but it seems like these programs have been run before—is that right? Are there any reviews or other writeups about participants’ experiences anywhere?
That would make a good monthly open thread.
If compute is the main bottleneck to AI progress, then one goalpost to watch for is when AI is able to significantly increase the pace of chip design and manufacturing. After writing the above, I searched for work being done in this area and found this article. If these approaches can actually speed up certain steps in the process from weeks to days, will that increase the pace of Moore’s law? Or is Moore’s law mainly bottlenecked by problems that will be particularly hard to apply AI to?
Do you have some examples? I’ve noticed that rationalists tend to ascribe good faith to outside criticism too often, to the extent that obviously bad-faith criticisms are treated as invitations for discussion. For example, there was an article about SSC in the New Yorker that came out after Scott deleted SSC but before the NYT article. Many rationalists failed to recognize the New Yorker article as a hit piece, which I believe it clearly was (even more clearly now that the NYT article has come out).
Yeah, my main takeaway from that question was that a change in the slope of the abilities graph is what would convince him of an imminent fast takeoff. Presumably the x-axis of the graph is either time (i.e. the date) or compute, but I’m not sure what he’d put on the y-axis, and there wasn’t enough time to ask a follow-up question.
Even without having a higher IQ than a peak human, an AGI that merely ran 1000x faster would be transformative.
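To put rough numbers on that claim, here is a back-of-the-envelope sketch, assuming “1000x faster” refers to serial thinking speed:

```python
# Back-of-the-envelope: wall-clock time for one subjective year of work,
# assuming a mind running 1000x faster than a human (serial speedup only).
SPEEDUP = 1000
HOURS_PER_YEAR = 365 * 24  # 8760

hours_needed = HOURS_PER_YEAR / SPEEDUP
print(f"One subjective year every {hours_needed:.1f} wall-clock hours")
# -> One subjective year every 8.8 wall-clock hours
```

That comes out to roughly a thousand subjective years of work per calendar year, with no intelligence advantage assumed.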
Of the bottlenecks I listed above, I am going to mostly ignore talent. IMO, talented people aren’t the bottleneck right now, and the other problems we have are more interesting.
Can you clarify what you mean by this? I see two main possibilities for what you might mean:
1. There are many talented people who want to work on AI alignment, but are doing something else instead.
2. There are many talented people working on AI alignment, but they’re not very productive.
If you mean the first one, I think it would be worth it to survey people who are interested in AI alignment but are currently doing something else—ask each of them, why aren’t they working on AI alignment? Have they ever applied for a grant or job in the area? If not, why not? Is money a big concern, such that if it were more freely available they’d start working on AI alignment independently? Or is it that they’d want to join an existing org, but open positions are too scarce?
Did your previous experiences with VR involve something where your in-game movement wasn’t one-to-one with your actual movement (e.g. where you could move your character by pushing forward on an analog stick, rather than by walking)? It’s pretty rare for VR games with one-to-one movement (like Beat Saber and TotF) to cause motion sickness, so if your previous sickness was in a non-one-to-one game it may be worth giving VR another shot with a more comfortable game.