I wouldn’t be surprised if Chinese had no irregularities in the tense system – it’s a very isolating language. But here’s one irregularity: the negation of 有 is 没有 (“to not have/possess”), but the simple negation of every other verb is 不 + verb. You can negate other verbs with 没, but then it’s implied to be 没有 + verb, which gives the verb something like perfect aspect. E.g., 没吃 = “to have not eaten”.
[Question] How to quantify uncertainty about a probability estimate?
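One standard way to make this concrete is to treat the probability itself as a random variable, with a distribution whose spread encodes your uncertainty; for the probability of a binary event, a Beta distribution is the usual choice. A minimal sketch in Python (assuming scipy is available; the counts are made up):

```python
from scipy import stats

# Made-up data: 3 successes in 10 trials, so the point estimate is p ~ 0.3.
# With a uniform Beta(1, 1) prior, the posterior over p is Beta(3+1, 7+1).
posterior = stats.beta(3 + 1, 7 + 1)

print(posterior.mean())         # point estimate of p (~0.33)
print(posterior.std())          # spread, i.e., uncertainty about the estimate
print(posterior.interval(0.9))  # 90% credible interval for p

# More data with the same ratio -> same point estimate, less uncertainty.
print(stats.beta(30 + 1, 70 + 1).std())
```

The posterior’s standard deviation (or a credible interval) is one direct answer to “how uncertain am I about this probability?”: it shrinks as evidence accumulates even while the point estimate stays put.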
It looks like there’s already another linkpost to this Medium article which got some more engagement: Zoe Curzi’s Experience with Leverage Research—LessWrong
What are your or Vassar’s arguments against EA or AI alignment? This is only tangential to your point, but if EA and AI alignment aren’t actually important, I’d like to know about it.
What’s the weekly time commitment of this study group?
Anthropic says that they’re looking for experienced engineers who are able to dive into an unfamiliar codebase and solve nasty bugs, and/or handle interesting problems with distributed systems and parallel processing. I was personally surprised to get an internship offer from CHAI; I had expected the bar for getting an AI safety role to be much higher. I’d guess that the average person able to get a software engineering job at Facebook, Microsoft, Google, etc. (not that I’ve ever received an offer from any of those companies), or perhaps a broader category of people, could do useful direct work, especially if they committed time to gaining relevant skills where necessary. But I might be wrong. (This all assumes that Anthropic, Redwood, CHAI, etc. are doing useful alignment work.)
Ask AI companies what they are doing for AI safety?
My best guess would be free bootcamp-style training for value-aligned people who are promising researchers but lack specific relevant skills. For example, ML engineering training or formal mathematics education for junior AIS researchers who would plausibly be competitive hires if that part of their background were strengthened.
The low-effort version of this would be, instead of spinning up your own bootcamp, having value-aligned people apply to the Long-Term Future Fund for a grant to attend an existing bootcamp.
AI governance student hackathon on Saturday, April 23: register now!
I’m glad you’re so thoughtful about how you should speak with children!
Sooner or later, though, your child will have to learn that language is non-literal and that some questions, like “do you want to wait to eat your messy candy?”, are actually requests. For young kids, it seems helpful to be clearer about whether you’re offering a real choice or not.
to maximise my chance to die with dignity I should quit my job, take out a bunch of loans, and try to turbo through an advanced degree in machine learning
This is probably pretty tangential to the overall point of your post, but you definitely don’t need to take loans for this, since you could apply for funding from Open Philanthropy’s early-career funding for individuals interested in improving the long-term future or the Long-Term Future Fund.
You don’t need a degree in machine learning. Besides machine learning engineering or research, there are plenty of other ways to help reduce existential risk from AI, such as:
software engineering at Redwood Research or Anthropic
operations for Redwood Research, Encultured AI, Stanford Existential Risks Initiative, etc.
community-building work for a local AI safety group (e.g., at MIT or Oxford)
or something part-time, like participating in the EA Cambridge AGI Safety Fundamentals program and then facilitating for it
Personally, my estimate of the probability of doom is much lower than Eliezer’s, but in any case, I think it’s worthwhile to carefully consider how to maximize your positive impact on the world, whether that involves reducing existential risk from AI or not.
I’d second the recommendation to apply for career advising from 80,000 Hours or to schedule a call with AI Safety Support if you’re open to working on AI safety.
Or other AI alignment organizations like Anthropic, the Fund for Alignment Research, or Aligned AI.
What’s the EA UCLA AI Timelines Workshop? Might be interested in running something similar at Georgia Tech.
Lots of other positions at Jobs in AI safety & policy – 80,000 Hours too! E.g., from the Fund for Alignment Research and Aligned AI. But note that the 80,000 Hours jobs board also lists positions from OpenAI, DeepMind, Baidu, etc. that aren’t actually alignment-related.
Ah, EA UCLA just wrote a post about it: We Ran an AI Timelines Retreat—EA Forum (effectivealtruism.org)
AI safety university groups: a promising opportunity to reduce existential risk
Roughly how many hours do you expect it to take to complete the course?
What does Loom refer to? Not Loom.com, the service for recording video snippets of your screen, right?
Has EA invested much in banning gain-of-function research? I’ve heard about Alvea and 1DaySooner, but not about any EA projects aimed at banning gain-of-function research. Perhaps the relevant efforts aren’t publicly known, but I wouldn’t be shocked if more person-hours have been invested in EA community building in the past two years (for example) than in banning gain-of-function research.
“How are we counting Chinese versus non-Chinese papers? Because often, it seems to be just doing it via, ‘Is their last name Chinese?’ Which seems like it really is going to miscount.” seems unreasonably skeptical. It’s not too much harder to just look up the country of the university or organization that published the paper.
I’m not sure what the source is for the claim that “China publishes more papers on deep learning than the US”, but in their 2018 report, the AI Index describes their country-affiliation methodology as follows: “An author’s country affiliation is determined based on his or her primary organization, which is provided by authors of the papers. Global organizations will use the headquarters’ country affiliation as a default, unless the author is specific in his/her organization description. For example, an author who inputs ‘Google’ as their organization will be affiliated with the United States, one that inputs ‘Google Zurich’ will be affiliated with Europe. Papers are double counted when authors from multiple geographies collaborate. For example, a paper with authors at Harvard and Oxford will be counted once for the U.S. and once for Europe.”
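For what it’s worth, that double-counting rule is simple to implement. A hypothetical sketch in Python (the organization-to-region table and papers below are made up for illustration, not AI Index data):

```python
from collections import Counter

# Made-up lookup from an author's primary organization to a region;
# the real pipeline defaults global orgs to their headquarters' country.
REGION = {
    "Harvard": "US",
    "Oxford": "Europe",
    "Google": "US",
    "Google Zurich": "Europe",
    "Tsinghua": "China",
}

# Each paper is a list of its authors' primary organizations.
papers = [
    ["Harvard", "Oxford"],          # counted once for US, once for Europe
    ["Google Zurich", "Tsinghua"],  # counted once for Europe, once for China
]

counts = Counter()
for affiliations in papers:
    # Deduplicate regions within a paper so it counts at most once per region.
    for region in {REGION[org] for org in affiliations}:
        counts[region] += 1

print(counts)  # e.g., Counter({'Europe': 2, 'US': 1, 'China': 1})
```

Under this scheme the per-region totals sum to more than the number of papers, which is worth keeping in mind when comparing headline China-vs.-US counts.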