There are a lot of different ways you can talk about “efficiency” here. The main thing I am thinking about, with regard to the key question “how much FLOP would we expect transformative AI to require?”, is whether, when using a neural net anchor (not evolution), we should add a 1-3 OOM penalty to FLOP needs because 2022-AI systems are less sample efficient than humans (requiring more data to produce the same capabilities), with this penalty decreasing over time given expected algorithmic progress. The next question would be how much more efficient potential AI (e.g., 2100-AI, not 2022-AI) could be given the fundamentals of silicon vs. neurons, so we might know how far algorithmic progress could take this.
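To make the arithmetic concrete, here is a minimal sketch of how an OOM penalty and its decay would combine, using entirely hypothetical numbers (the base anchor estimate, the 2 OOM penalty, and the decade halving time are all illustrative assumptions, not figures from any actual report):

```python
# Illustrative sketch only: how a sample-efficiency penalty of a few OOMs
# shifts a neural-net-anchor FLOP estimate, and how algorithmic progress
# could shrink that penalty over time. All numbers below are assumptions.

BASE_FLOP = 1e30          # hypothetical neural-net-anchor estimate
PENALTY_OOMS = 2.0        # assumed penalty in 2022 (within the 1-3 OOM range)
HALVING_YEARS = 10        # assumed: algorithmic progress halves the penalty per decade

def flop_needed(year, base=BASE_FLOP, penalty=PENALTY_OOMS,
                start_year=2022, halving=HALVING_YEARS):
    """FLOP estimate after decaying the OOM penalty from start_year onward."""
    remaining_ooms = penalty * 0.5 ** ((year - start_year) / halving)
    return base * 10 ** remaining_ooms

print(f"{flop_needed(2022):.1e}")  # full 2-OOM penalty applied: 100x the base
print(f"{flop_needed(2042):.1e}")  # penalty halved twice, 0.5 OOM remains
```

The point of the sketch is just that the penalty multiplies the estimate by 10^(OOMs), so whether the penalty is 1 or 3 OOMs, and how fast it decays, dominates the headline number.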
I think it is pretty clear right now that 2022-AI is less sample efficient than humans. I think other forms of efficiency (e.g., power efficiency, efficiency of SGD vs. evolution) are less relevant to this.
Yeah ok 80%. I also do concede this is a very trivial thing, not like some “gotcha look at what stupid LMs can’t do no AGI until 2400”.
This is admittedly pretty trivial, but I am 90% sure that if you prompt GPT-4 with “Q: What is today’s date?” it will not answer correctly. I think something like this would literally be the least impressive thing that GPT-4 won’t be able to do.
Is it ironic that the link to “All the posts I will never write” goes to a 404 page?
Does it get better at Metaculus forecasting?
This sounds like something that could be done as an organization creating a job for it, which could help with mentorship/connections/motivation/job security relative to expecting people to apply to EAIF/LTFF. My organization (Rethink Priorities) is currently hiring for research assistants and research fellows (among other roles), and some of their responsibilities will include distillation.
These conversations are great and I really admire the transparency. It’s really nice to see discussions that normally happen in private happen instead in public, where everyone can reflect, give feedback, and improve their own thoughts. On the other hand, the conversations add up to a decent-sized novel—LW says 198,846 words! Is anyone considering investing heavily in summarizing the content, so people can get involved without having to read all of it?
I don’t recall the specific claim, just that EY’s probability mass for the claim was in the 95-99% range. The person argued that because EY disagrees with some other thoughtful people on that question, he shouldn’t have such confidence.
I think people conflate the very reasonable “I am not going to adopt your 95-99% range because other thoughtful people disagree and I have no particular reason to trust you massively more than I trust other people” with the different claim “the fact that other thoughtful people disagree means there’s no way you could arrive at 95-99% confidence,” which is false. I think thoughtful people disagreeing with you is decent evidence you are wrong, but that evidence can still be outweighed.
So it looks like we survived? (Yay)
I will be on the lookout for false alarms.
I can see whether the site is down or not. Seems pretty clear.
Attention LessWrong—I am a chosen user of EA Forum and I have the codes needed to destroy LessWrong. I hereby make a no first use pledge and I will not enter my codes for any reason, even if asked to do so. I also hereby pledge to second strike—if the EA Forum is taken down, I will retaliate.
Seems like “the right prompt” is doing a lot of work here. How do we know if we have given it “the right prompt”?
Do you think GPT-4 could do my taxes?
1.) I think the core problem is that, honestly, no one (except 80K) has actually invested significant effort in growing the EA community since 2015 (especially compared to the pre-2015 effort, and especially as a percentage of total EA resources)
2.) Some of these examples are suspect. The GiveWell numbers definitely look to be increasing beyond 2015, especially when OpenPhil’s understandably constant funding is removed—and this increase seems to line up with GiveWell’s increased investment in their outreach. The OpenPhil numbers also just look to be sensitive to a few dominant eight-figure grants, which understandably are not annual events. (Also, my understanding is that OpenPhil is starting off slowly intentionally but will aim to ramp up significantly in the near future.)
3.) As I capture in “Is EA Growing? EA Growth Metrics for 2018”, many relevant EA growth statistics have peaked after 2015 or haven’t peaked yet.
4.) There are still a lot of ways EA is growing other than what is captured in these graphs. For example, I bet something like total budget of EA orgs has been growing a lot even since 2015.
5.) Contrary to the “EA is inert” hypothesis, EA Survey data has shown that many people have been “convinced” of EA. Furthermore, our general population surveys show that the vast majority of people (>95% of US university students) have still not heard of EA.
FWIW, I put together “Is EA Growing? EA Growth Metrics for 2018” and I’m looking forward to doing 2019 and 2020 soon.
Mr. Money Mustache has a lot of really good advice that I get a lot of value from. However, I think he underestimates the ease and impact of opportunities to grow income relative to cutting spending—especially if you’re in (or can be in) a high-earning field like tech. Doubling your income will put you on a much faster path than cutting your spending a further 5%.
PredictionBook is really great for lightweight, private predictions and does everything you’re looking for. Metaculus is great for more fully-featured predicting and I believe also supports private questions, but may be a bit of overkill for your use case. A spreadsheet also seems more than sufficient, as others have mentioned.
Thanks. I’ll definitely aim to produce them more quickly… this one got away from me.