I’ve been loving your optimization posts so far; thanks for writing them. I’ve been feeling confused about this topic for a while and feel like “being able to answer any question about optimization” would be hugely valuable for me.
maxnadeau
Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley
I was thinking yesterday that I’m surprised more EAs don’t hunt, or eat lots of mail-ordered hunted meat, like e.g. this. Regardless of whether you think nature should exist in the long term, as it stands the average deer, for example, has a pretty harsh life and death. Studies like this one on American white-tailed deer enumerate the alternative modes of death, which I find universally unappealing: predation (which, surprisingly to me, is the number one cause of death for fawns), car accidents, disease, and starvation. These all seem orders of magnitude worse than being killed by a hunter with a good shot.
I’d assume human hunting basically trades off against predation and starvation, so the overall quantity of deer and deer consciousness isn’t affected much by hunting. The more humans kill, the fewer coyotes kill.
Edit: So it seems to me that buying hunted meat/encouraging hunting might have a better animal-welfare profile than veganism, while also satisfying Richard’s concerns about nutrition and satisfying meat cravings. That being said, it is not really scalable in the way veg*nism is.
Confusion:
You write “Only PaLM looks better than Chinchilla here, mostly because it trained on 780B tokens instead of 300B or fewer, plus a small (!) boost from its larger size.”
But earlier you write:
“Chinchilla is a model with the same training compute cost as Gopher, allocated more evenly between the two terms in the equation.
It’s 70B params, trained on 1.4T tokens of data”
300B vs. 1.4T. Is this an error?
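For what it’s worth, under the standard C ≈ 6ND approximation (training FLOPs ≈ 6 × parameters × tokens), the 1.4T-token figure is at least self-consistent with the “same training compute as Gopher” claim. A quick sanity check (Gopher’s 280B parameter count is my assumption from the Chinchilla paper, not something stated above):

```python
# Rough training-compute comparison using C ≈ 6 * N * D.
# Gopher: 280B params on 300B tokens; Chinchilla: 70B params on 1.4T tokens.
def train_flops(params: float, tokens: float) -> float:
    """Approximate training FLOPs for a dense transformer."""
    return 6 * params * tokens

gopher = train_flops(280e9, 300e9)      # ~5.04e23 FLOPs
chinchilla = train_flops(70e9, 1.4e12)  # ~5.88e23 FLOPs

print(f"Gopher:     {gopher:.2e} FLOPs")
print(f"Chinchilla: {chinchilla:.2e} FLOPs")
print(f"ratio:      {chinchilla / gopher:.2f}")
```

So the two budgets come out within ~20% of each other, which suggests the 300B figure in the PaLM sentence refers to the other models being compared, not to Chinchilla itself.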
The video link in the PDF doesn’t work.
What are the considerations around whether to structure the debate to permit the judge to abstain (as Michael et al. do, by allowing the judge to end the round with low credence) versus forcing the judge to pick an answer each time? Are there pros/cons to each approach? Are there arguments that one or the other is more similar to the real AI debates that might be held in the future?
It’s possible I’m misremembering/misunderstanding the protocols used for the debate here/in that other paper.
I appreciate you transcribing these interviews, William!
Did/will this happen?
Thanks, fixed
We’re expecting familiarity with PyTorch, unlike MLAB. The level of Python background expected is otherwise similar. The bar will vary somewhat depending on each applicant’s other strengths, e.g. mathematical and empirical-science backgrounds.
30 min, 45 min, 20-30 min (respectively)
I agree with your description of the hassle of eating veg when away from home. The point I was trying to make is that buying hunted meat seems possibly preferable to veganism on animal-welfare grounds, would address Richard’s nutritional concerns, and would also satisfy meat cravings.
Of course, this only works if you condition on the brutality of nature as the counterfactual. But for the time being, that won’t change.
“Follow the right people on twitter” is probably the best option. People will often post twitter threads explaining new papers they put out. There’s also stuff like:
News put together by CAIS: https://newsletter.mlsafety.org/ and https://newsletter.safe.ai/ and https://twitter.com/topofmlsafety
News put together by Daniel Paleka: https://newsletter.danielpaleka.com/ and twitter summaries like https://twitter.com/dpaleka/status/1664617835178631170