# strawberry calm

Karma: 341
• I agree that Pinker’s advice is moderate — e.g. he doesn’t prohibit authors from self-reference.

But this isn’t because classic style is moderate — actually classic style is very strict — e.g. it does prohibit authors from self-reference.

Rather, Pinker’s advice is moderate because he only weakly endorses classic style. His advice is “use classic style, except in rare situations where this would be bad on these other metrics.”

If I’ve read him correctly, then he might agree with all the limitations of classic style I’ve mentioned.

(But maybe I’ve misread Pinker. Maybe he endorses classic style absolutely but uses “classic style” to refer to a moderate set of rules.)

• 24 Nov 2022 17:14 UTC
3 points
1 ∶ 0

When reading an academic paper, you don’t find it useful when the author points out their contributions? I definitely do. I like to know whether the author asserts a claim because it’s the consensus in the field, or because it’s the conclusion of their data. If I later encounter strong evidence against the claim, then this difference matters: it determines whether I update against that particular author or against the whole field.
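A toy sketch of why the distinction matters. The `trust` bookkeeping and all numbers here are invented for illustration, not anything from the comment:

```python
# Toy bookkeeping: when a claim turns out false, we downgrade whichever
# source actually backed it. All numbers are invented.
trust = {"author": 0.9, "field": 0.9}

def observe_false_claim(backer: str, penalty: float = 0.2) -> None:
    """Downgrade the source that backed the now-falsified claim."""
    trust[backer] = max(0.0, trust[backer] - penalty)

# The author flagged the claim as their own contribution, so the
# falsification reflects on the author, not on the field's consensus.
observe_false_claim("author")
```

If the author had instead presented the claim as field consensus, the same observation would call `observe_false_claim("field")`, and the whole field’s track record would take the hit.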

• 24 Nov 2022 16:42 UTC
2 points
1 ∶ 0
in reply to: Rosencrantz’s comment

Writing can definitely be overly “self-aware” sometimes (trust me, I know!), but “classic style” is waaaayyy too restrictive.

My rule of thumb would be:

If you know that X, and you expect that the reader’s beliefs about the subject matter would significantly change if they also knew X, then write that X.

This will include sentences about the document and the author — rather than just the subject.

• 23 Nov 2022 23:39 UTC
4 points
2 ∶ 0

I think classic style is bad in all the situations for which Pinker endorses it:

• Non-fiction books

• Textbooks

• Blog posts

• Manuals

This is because I can’t think of any situations where the five limitations I mention would be appropriate.

• Quick remarks and questions:

1. AI developers have been competing to solve purely-adversarial / zero-sum games, like Chess or Go. Diplomacy, in contrast, is semi-cooperative. Will it be safer if AGI emerges from semi-cooperative games than from purely-adversarial games?

2. Is it safer if AGI can be negotiated with?

3. No-Press Diplomacy was solved by DeepMind in 2020. Meta AI has just solved Full-Press Diplomacy. The difference is that in No-Press Diplomacy the players can’t communicate, whereas in Full-Press Diplomacy the players can chat for 5 minutes between rounds.

Is Full-Press Diplomacy more difficult than No-Press Diplomacy in any way other than the skill of communicating one’s intentions?

Full-Press Diplomacy requires a recursive theory of mind — does No-Press Diplomacy also?

4. CICERO consists of a planning engine and a dialogue engine. How much of the “intelligence” is the dialogue engine?

Maybe the planning engine is doing all the work, and the dialogue engine is just converting plans into natural language, without doing anything more impressive than that.

Alternatively, it might be that the dialogue engine (which is a large language model) contains latent knowledge and skills.

5. Could an architecture like this actually be used in international diplomacy and corporate negotiations? Will it be?

6. There’s hope among the AI Safety community that competent-but-not-yet-dangerous AI might assist them in alignment research. Maybe this Diplomacy result will boost hope in the AI Governance community that competent-but-not-yet-dangerous AI might assist them in governance. Would this hope be reasonable?

• EA is constrained by the following formula:

Number of Donors × Average Donation = Number of Grants × Average Grant

If we lose a big donor, there are four things EA can do:

1. Increase the number of donors:

1. Outreach. Community growth. Might be difficult right now for reputational reasons, though fortunately EA was very quick to denounce SBF.

2. Maybe lobby the government for cash?

3. Maybe lobby OpenAI, DeepMind, etc for cash?

2. Increase average donation:

1. Get another billionaire donor. Presumably, this is hard because otherwise EA would’ve done it already, but there might be factors that are hidden from me.

2. 80K could begin pushing earning-to-give again. They shifted their recommendations a few years ago to promoting direct-impact careers. This made sense when EA was less funding-constrained.

3. Get existing donors to ramp up their donations. In the good ol’ days, EA used to be a club for people donating 60% of their income to anti-malaria bednets. Maybe EA will return to that frugal ascetic lifestyle.

3. Reduce the number of grants:

1. FTX was funding a number of projects. Some of these were higher priorities than others. Hopefully the high-priority projects retain their funding, whereas low-priority projects are paused.

2. EA has been engaged in a “hit-or-miss” approach to grant-making. This makes sense when you have more cash than sure-thing ideas. But now that we have less cash, we should focus on sure-thing ideas.

3. The problem with the “sure-thing” approach to grant-making is that it biases funding toward certain causes (e.g. global health & development) over others (e.g. x-risk). I think that would be a mistake. Someone needs to think about how to correct for this bias.

Here’s a tentative idea: EA needs more prizes and other forms of retroactive funding. This shifts risk from the grant-maker to the researcher, which might be good because the researcher is better informed about the likelihood of success than the grant-maker.

4. Reduce average grant:

1. Maybe EA needs to focus on cheaper projects.

2. For example, in AI safety there has been a recent shift away from theoretic work (like MIRI’s decision theory) towards experimental work. This experimental work is very expensive because it involves (say) training large language models. This shift should be at least somewhat reversed.

3. Academics are very cheap! And they often already have funding. EA (especially AI safety) needs to do more outreach to established academics, such as top philosophers, mathematicians, economists, computer scientists, etc.
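The four levers can be checked against the funding identity with a toy calculation. All the figures below are invented for illustration; none of them are real EA data:

```python
# Toy check of the identity: Donors * AvgDonation = Grants * AvgGrant.
# All figures are invented for illustration.
donors, avg_donation = 10_000, 5_000   # $50M flowing in
grants, avg_grant = 500, 100_000       # $50M flowing out
assert donors * avg_donation == grants * avg_grant

# Losing a (hypothetical) $10M donor shrinks the left-hand side; each
# numbered lever above restores balance by moving one of the four factors.
inflow_after_loss = donors * avg_donation - 10_000_000
grants_after = inflow_after_loss / avg_grant   # lever 3: fewer grants
```

Here lever 3 absorbs the whole shock by cutting the number of grants from 500 to 400; in practice the adjustment would be spread across all four factors.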

(Cross-post from EA forum)

• Are you saying that it’s too early to claim “SBF committed fraud”, or “SBF did something unethical”, or “if SBF committed fraud, then he did something unethical”?

I think we have enough evidence to assert all three.

• 6 Nov 2022 18:11 UTC
2 points
0 ∶ 0

> There is a spectrum between two types of people, K-types and T-types.

and

> I’ve tried to include views I endorse in both columns, however most of my own views are right-hand column because I am more K-type than T-type.

You’re correct that this is a spectrum rather than a strict binary. I should’ve clarified this. But I think it’s quite common to describe spectra by their extrema, for example:

• 4 Nov 2022 8:09 UTC
4 points
2 ∶ 0

You could still be doing perfect Bayesian reasoning regardless of your prior credences. Bayesian reasoning (at least as I’ve seen the term used) is agnostic about the prior, so there’s nothing defective about assigning a low prior to programs with high time complexity.
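A minimal sketch of the point: Bayes’ rule is the same mechanical update whatever prior you start from. The priors and likelihoods below are made up:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H | E) by Bayes' rule; valid for any prior in (0, 1)."""
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_evidence

# Two reasoners see the same evidence (likelihood ratio 9:1) but start
# from very different priors; both updates are equally "Bayesian".
posterior_skeptic = bayes_update(0.01, 0.9, 0.1)    # low prior on H
posterior_credulous = bayes_update(0.60, 0.9, 0.1)  # high prior on H
```

Both posteriors are computed by the identical rule; the prior is an input to Bayesian reasoning, not something the reasoning itself adjudicates.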

• 4 Nov 2022 8:01 UTC
2 points
0 ∶ 0

When translating between proof theory and computer science:

(computer program, computational steps, output) maps to (axioms, deductive steps, theorems) respectively.

Kolmogorov complexity maps to “total length of the axioms”, and time complexity maps to “number of deductive steps”.
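The mapping, written out explicitly. This is the comment’s informal analogy recorded as data, not a claim of a formal isomorphism:

```python
# Informal dictionary of the proof-theory <-> computation analogy above.
correspondence = {
    "computer program": "axioms",
    "computational step": "deductive step",
    "output": "theorem",
    "Kolmogorov complexity (program length)": "total length of the axioms",
    "time complexity (number of steps)": "number of deductive steps",
}
```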

• 3 Nov 2022 23:43 UTC
4 points
0 ∶ 0

What do you mean by “the Solomonoff prior is correct”? Do you mean that you assign high prior likelihood to theories with low Kolmogorov complexity?

This post claims: many people assign high prior likelihood to theories with low time complexity, and this is somewhat rational for them to do if they think they would otherwise be susceptible to fallacious reasoning.
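To make the contrast concrete, here is a sketch with two made-up programs. The “time-complexity prior” below uses a Levin-style K + log T penalty, which is one common formalization and not necessarily what the post has in mind:

```python
import math

def length_prior(k_bits: int) -> float:
    """Unnormalized Solomonoff-style weight 2^-K (K = program length in bits)."""
    return 2.0 ** -k_bits

def speed_prior(k_bits: int, t_steps: int) -> float:
    """Levin-style weight 2^-(K + log2 T): also penalizes runtime T."""
    return 2.0 ** -(k_bits + math.log2(t_steps))

# A short-but-slow theory vs a longer-but-fast theory (both hypothetical):
short_slow = (10, 2**30)   # 10-bit program, ~1e9 steps
long_fast = (24, 2**2)     # 24-bit program, 4 steps

# The length-only prior favors the short program; the runtime-penalizing
# prior favors the fast one, which is the disagreement at issue.
```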

• > To me it seems that it might just as well make timelines longer to depend on algorithmic innovations as opposed to the improvements in compute that would help increase parameters.

I’ll give you an analogy:

Suppose your friend is running a marathon. You hear that at the halfway point she has a time of 1 hour 30 minutes. You think “okay I estimate she’ll finish the race in 4 hours”. Now you hear she has been running with her shoelaces untied. Should you increase or decrease your estimate?

Well, decrease. The time of 1:30 is more impressive if you learn her shoelaces were untied! It’s plausible your friend will notice and tie up her shoelaces.

But note that if you didn’t condition on the 1:30 information, then your estimate would increase if you learned her shoelaces were untied for the first half.

Now for Large Language Models:

Believing Kaplan’s scaling laws, we figure that the performance of LLMs depends chiefly on the number of parameters N. On that view there is little room for improvement in parameter-efficiency: LLMs aren’t much more parameter-inefficient than the human brain, which is our only reference point for general intelligence. So we expect little algorithmic innovation; LLMs will only improve as N grows.

On the other hand, believing Hoffmann’s scaling laws, we figure that the performance of LLMs depends chiefly on the number of datapoints D. Here there is likely room for improvement in data-efficiency: the human brain is far more data-efficient than LLMs. So LLMs have been metaphorically running with their shoelaces untied. There is room for improvement, and we should be less surprised by algorithmic innovation. LLMs will still improve as D grows, but that isn’t the only path.

So Hoffmann’s scaling laws shorten our timeline estimates.

This is an important observation to grok. If you’re already impressed by how an algorithm performs, and you learn that the algorithm has a flaw which would disadvantage it, then you should increase your estimate of future performance.
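The data-side headroom can be illustrated with the parametric loss that Hoffmann et al. (2022) fit. The coefficients below are the approximate published values from that paper and should be treated as illustrative, not exact:

```python
# Approximate Chinchilla (Hoffmann et al. 2022) parametric loss fit:
# L(N, D) = E + A / N^alpha + B / D^beta. Coefficients are approximate.
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Fix a Gopher-scale parameter count and vary only the data budget:
undertrained = predicted_loss(280e9, 300e9)   # roughly Gopher's token count
data_rich = predicted_loss(280e9, 5.6e12)     # a Chinchilla-style token count

# More data at the same N lowers predicted loss: that gap is the
# "shoelaces untied" headroom the comment describes.
```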

• 15 Sep 2022 22:09 UTC
3 points
0 ∶ 0
Example 1: Google Brain spent approximately $12M to train PaLM, and $9M of that was wasted on suboptimal training because DeepMind didn’t share the Hoffmann et al. (2022) results with them.