
Karma: 341

When AI solves a game, focus on the game’s mechanics, not its theme.

23 Nov 2022 19:16 UTC
81 points
7 comments · 2 min read · LW link

K-types vs T-types — what priors do you have?

3 Nov 2022 11:29 UTC
63 points
24 comments · 7 min read · LW link

Against “Classic Style”

23 Nov 2022 22:10 UTC
63 points
29 comments · 4 min read · LW link

Human-level Full-Press Diplomacy (some bare facts).

22 Nov 2022 20:59 UTC
50 points
7 comments · 3 min read · LW link

How should DeepMind’s Chinchilla revise our AI forecasts?

15 Sep 2022 17:54 UTC
34 points
12 comments · 13 min read · LW link

[Question] EA (& AI Safety) has overestimated its projected funding — which decisions must be revised?

11 Nov 2022 13:50 UTC
22 points
7 comments · 1 min read · LW link
(forum.effectivealtruism.org)
• Quick remarks and questions:

1. AI developers have been competing to solve purely-adversarial / zero-sum games, like Chess or Go. But Diplomacy, in contrast, is semi-cooperative. Will it be safer if AGI emerges from semi-cooperative games rather than from purely-adversarial games?

2. Is it safer if AGI can be negotiated with?

3. No-Press Diplomacy was solved by DeepMind in 2020. Meta AI has just solved Full-Press Diplomacy. The difference is that in No-Press Diplomacy the players can’t communicate, whereas in Full-Press Diplomacy the players can chat for 5 minutes between rounds.

Is Full-Press more difficult than No-Press Diplomacy, other than the skill of communicating one’s intentions?

Full-Press Diplomacy requires a recursive theory of mind — does No-Press Diplomacy also?

4. CICERO consists of a planning engine and a dialogue engine. How much of the “intelligence” is the dialogue engine?

Maybe the planning engine is doing all the work, and the dialogue engine is just converting plans into natural language, without doing anything more impressive than that.

Alternatively, it might be that the dialogue engine (which is a large language model) contains latent knowledge and skills (see the sketch after this list).

5. Could an architecture like this actually be used in international diplomacy and corporate negotiations? Will it be?

6. There’s hope among the AI Safety community that competent-but-not-yet-dangerous AI might assist them in alignment research. Maybe this Diplomacy result will boost hope in the AI Governance community that competent-but-not-yet-dangerous AI might assist them in governance. Would this hope be reasonable?
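Here is a minimal sketch of the two-engine split asked about in (4). The class and method names are my own invention for illustration; they are not Meta AI’s actual CICERO interfaces.

```python
# Hypothetical sketch of a CICERO-like two-engine agent (illustrative only;
# these names and interfaces are not Meta AI's actual code).
from dataclasses import dataclass, field


@dataclass
class Plan:
    """A strategic intent for the next round: moves plus proposed deals."""
    moves: list = field(default_factory=list)
    proposals: dict = field(default_factory=dict)  # recipient -> proposed deal


class PlanningEngine:
    """Chooses what to do; the strategic search would live here."""
    def choose_plan(self, board_state, message_history) -> Plan:
        return Plan(moves=["A Par - Bur"], proposals={"England": "a DMZ in the Channel"})


class DialogueEngine:
    """Turns plans into messages. The open question above is whether this
    component merely verbalises the plan, or contributes strategy of its own."""
    def render(self, plan: Plan, recipient: str) -> str:
        return f"To {recipient}: I propose {plan.proposals.get(recipient, 'nothing')}."


plan = PlanningEngine().choose_plan(board_state=None, message_history=[])
print(DialogueEngine().render(plan, "England"))
```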

• 23 Nov 2022 23:39 UTC
4 points
2 ∶ 0
in reply to: lise’s comment

I think classic style is bad for all the situations that Pinker endorses it for:

• Academic papers

• Non-fiction books

• Textbooks

• Blog posts

• Manuals

This is because I can’t think of any situations where the five limitations I mention would be appropriate.

• 4 Nov 2022 8:09 UTC
4 points
2 ∶ 0
in reply to: jacob_cannell’s comment

You could still be doing perfect Bayesian reasoning regardless of your prior credences. Bayesian reasoning (at least as I’ve seen the term used) is agnostic about the prior, so there’s nothing defective about assigning a low prior to programs with high time-complexity.

• 3 Nov 2022 23:43 UTC
4 points
0 ∶ 0
in reply to: jacob_cannell’s comment

What do you mean by “the Solomonoff prior is correct”? Do you mean that you assign high prior likelihood to theories with low Kolmogorov complexity?

This post claims: many people assign high prior likelihood to theories with low time complexity, and this is somewhat rational for them to do if they think that they would otherwise be susceptible to fallacious reasoning.
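To make the K-type / T-type contrast concrete, here is a toy sketch (my own illustration, not from the post): two priors over the same candidate theories, one discounting by description length and one discounting by runtime. The example programs and their complexities are made up.

```python
import math

# Toy illustration: a prior over candidate "theories" (programs), where
# K-types discount by description length and T-types discount by runtime.
candidates = [
    {"name": "short but slow", "length_bits": 40, "runtime_steps": 10**9},
    {"name": "long but fast",  "length_bits": 200, "runtime_steps": 10**3},
]

def k_prior(c):
    # Solomonoff-style: weight 2^(-description length in bits)
    return 2.0 ** (-c["length_bits"])

def t_prior(c):
    # Speed-prior-style: weight inversely proportional to runtime
    return 1.0 / c["runtime_steps"]

def normalise(weights):
    total = sum(weights)
    return [w / total for w in weights]

for label, prior in [("K-type prior", k_prior), ("T-type prior", t_prior)]:
    probs = normalise([prior(c) for c in candidates])
    print(label, {c["name"]: round(p, 6) for c, p in zip(candidates, probs)})
```

The K-type prior puts nearly all its mass on the short-but-slow theory; the T-type prior puts nearly all its mass on the long-but-fast one.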

• To me it seems that it might just as well make timelines longer to depend on algorithmic innovations as opposed to the improvements in compute that would help increase parameters.

I’ll give you an analogy:

Suppose your friend is running a marathon. You hear that at the halfway point she has a time of 1 hour 30 minutes. You think “okay I estimate she’ll finish the race in 4 hours”. Now you hear she has been running with her shoelaces untied. Should you increase or decrease your estimate?

Well, decrease. The time of 1:30 is more impressive if you learn her shoelaces were untied! It’s plausible your friend will notice and tie up her shoelaces.

But note that if you didn’t condition on the 1:30 information, then your estimate would increase if you learned her shoelaces were untied for the first half.

Now for Large Language Models:

Believing Kaplan’s scaling laws, we figure that the performance of LLMs depends on the number of parameters. But maybe there’s no room for improvement in parameter-efficiency. LLMs aren’t much more parameter-inefficient than the human brain, which is our only reference-point for general intelligence. So we expect little algorithmic innovation. LLMs will only improve because the parameter count grows.

On the other hand, believing Hoffmann’s scaling laws, we figure that the performance of LLMs depends on the number of datapoints. But there is likely room for improvement in data-efficiency: LLMs are far more data-inefficient than the brain. So LLMs have been metaphorically running with their shoes untied. There is room for improvement. So we’re less surprised by algorithmic innovation. LLMs will still improve because the dataset grows, but this isn’t the only path.

So Hoffmann’s scaling laws shorten our timeline estimates.

This is an important observation to grok. If you’re already impressed by how an algorithm performs, and you learn that the algorithm has a flaw which would disadvantage it, then you should increase your estimate of future performance.
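For concreteness, here is a rough sketch of the Hoffmann et al. (Chinchilla) parametric loss fit, which is what makes the data term loom so large. The constants are the approximate published values; the code is only illustrative.

```python
# Rough sketch of the Hoffmann et al. (2022) "Chinchilla" loss fit,
# L(N, D) = E + A / N^alpha + B / D^beta, with approximate published constants.
# Under this fit, loss depends heavily on data D, so improving data-efficiency
# (or just gathering more tokens) matters as much as growing parameters N.

E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

# e.g. roughly Chinchilla itself: 70B parameters trained on 1.4T tokens
print(chinchilla_loss(70e9, 1.4e12))
```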

Is GPT-N bounded by human capacities? No.

17 Oct 2022 23:26 UTC
3 points
3 comments · 2 min read · LW link
• 24 Nov 2022 17:14 UTC
3 points
1 ∶ 0
in reply to: Ben’s comment

When reading an academic paper, you don’t find it useful when the author points out their contributions? I definitely do. I like to know whether the author asserts a claim because it’s the consensus in the field, or because it’s the conclusion of their data. If I later encounter strong evidence against that claim, then this difference matters: it determines whether I update against that particular author or against the whole field.

• 15 Sep 2022 22:09 UTC
3 points
0 ∶ 0
in reply to: ESRogs’s comment

Google owns DeepMind, but it seems that there is little flow of information back and forth.

Example 1: Google Brain spent approximately $12M to train PaLM, and $9M of that was wasted on suboptimal training because DeepMind didn’t share the Hoffmann et al. 2022 results with them.

Example 2: I’m not a lawyer, but I think it would be illegal for Google to share any of its non-public data with DeepMind.

• 24 Nov 2022 16:42 UTC
2 points
1 ∶ 0
in reply to: Rosencrantz ’s comment

Writing can definitely be overly “self-aware” sometimes (trust me I know!) but “classic style” is waaaayyy too restrictive.

My rule of thumb would be:

Write sentences that are maximally informative to your reader.

If you know something, and you expect that the reader’s beliefs about the subject matter would significantly change if they also knew it, then write it.

This will include sentences about the document and the author — rather than just the subject.

• 6 Nov 2022 18:11 UTC
2 points
0 ∶ 0
in reply to: Ansel’s comment

Thanks for the comments. I’ve made two edits:

There is a spectrum between two types of people, K-types and T-types.

and

I’ve tried to include views I endorse in both columns; however, most of my own views are in the right-hand column because I am more K-type than T-type.

You’re correct that this is a spectrum rather than a strict binary. I should’ve clarified this. But I think it’s quite common to describe spectra by their extrema, for example:

• 4 Nov 2022 8:01 UTC
2 points
0 ∶ 0
in reply to: GregK’s comment

When translating between proof theory and computer science:

(computer program, computational steps, output) is mapped to (axioms, deductive steps, theorems) respectively.

Kolmogorov complexity maps to “total length of the axioms” and time complexity maps to “number of deductive steps”.
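A toy illustration of the two measures (my own example, not from the comment): a program’s source length stands in for “total length of the axioms”, and its counted steps stand in for “number of deductive steps”.

```python
import inspect

# Toy illustration of the two complexity measures discussed above:
# source length ~ "total length of the axioms" (Kolmogorov-style),
# counted steps ~ "number of deductive steps" (time complexity).

def power_by_repeated_multiplication(base: int, exp: int):
    """A short description that takes many steps: returns (result, steps taken)."""
    result, steps = 1, 0
    for _ in range(exp):
        result *= base
        steps += 1
    return result, steps

source_length = len(inspect.getsource(power_by_repeated_multiplication))
result, steps = power_by_repeated_multiplication(2, 1000)
print(f"description length: {source_length} chars, deductive steps: {steps}")
```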