
Symbol Grounding

Last edit: 14 Nov 2021 8:28 UTC by Yoav Ravid

Related Pages: Truth, Semantics, & Meaning; Philosophy of Language

Has the Symbol Grounding Problem just gone away?

RussellThor · 4 May 2023 7:46 UTC
12 points
3 comments · 1 min read · LW link

What does GPT-3 understand? Symbol grounding and Chinese rooms

Stuart_Armstrong · 3 Aug 2021 13:14 UTC
40 points
15 comments · 12 min read · LW link

A test for symbol grounding methods: true zero-sum games

Stuart_Armstrong · 26 Nov 2019 14:15 UTC
22 points
2 comments · 2 min read · LW link

Fundamental Uncertainty: Chapter 7 - Why is truth useful?

Gordon Seidoh Worley · 30 Apr 2023 16:48 UTC
10 points
3 comments · 10 min read · LW link

Classical symbol grounding and causal graphs

Stuart_Armstrong · 14 Oct 2021 18:04 UTC
22 points
2 comments · 5 min read · LW link

Human instincts, symbol grounding, and the blank-slate neocortex

Steven Byrnes · 2 Oct 2019 12:06 UTC
60 points
23 comments · 11 min read · LW link

Syntax, semantics, and symbol grounding, simplified

Stuart_Armstrong · 23 Nov 2020 16:12 UTC
30 points
4 comments · 9 min read · LW link

DALL-E does symbol grounding

p.b. · 17 Jan 2021 21:20 UTC
6 points
0 comments · 1 min read · LW link

Thoughts on the frame problem and moral symbol grounding

Stuart_Armstrong · 11 Mar 2013 16:18 UTC
3 points
9 comments · 2 min read · LW link

Connecting the good regulator theorem with semantics and symbol grounding

Stuart_Armstrong · 4 Mar 2021 14:35 UTC
13 points
0 comments · 2 min read · LW link

Early Thoughts on Ontology/Grounding Problems

johnswentworth · 14 Nov 2020 23:19 UTC
32 points
5 comments · 5 min read · LW link

[Intro to brain-like-AGI safety] 13. Symbol grounding & human social instincts

Steven Byrnes · 27 Apr 2022 13:30 UTC
69 points
15 comments · 14 min read · LW link

Teleosemantics!

abramdemski · 23 Feb 2023 23:26 UTC
80 points
26 comments · 6 min read · LW link

Miriam Yevick on why both symbols and networks are necessary for artificial minds

Bill Benzon · 6 Jun 2022 8:34 UTC
1 point
0 comments · 4 min read · LW link

Representational Tethers: Tying AI Latents To Human Ones

Paul Bricman · 16 Sep 2022 14:45 UTC
30 points
0 comments · 16 min read · LW link

Aligning an H-JEPA agent via training on the outputs of an LLM-based “exemplary actor”

Roman Leventov · 29 May 2023 11:08 UTC
12 points
10 comments · 30 min read · LW link

An LLM-based “exemplary actor”

Roman Leventov · 29 May 2023 11:12 UTC
16 points
0 comments · 12 min read · LW link

[Linkpost] Large language models converge toward human-like concept organization

Bogdan Ionut Cirstea · 2 Sep 2023 6:00 UTC
22 points
1 comment · 1 min read · LW link

Steven Harnad: Symbol grounding and the structure of dictionaries

Bill Benzon · 2 Sep 2023 12:28 UTC
5 points
2 comments · 2 min read · LW link

The probability that Artificial General Intelligence will be developed by 2043 is extremely low.

cveres · 6 Oct 2022 18:05 UTC
−13 points
8 comments · 1 min read · LW link

Causal representation learning as a technique to prevent goal misgeneralization

PabloAMC · 4 Jan 2023 0:07 UTC
19 points
0 comments · 8 min read · LW link

Conceptual coherence for concrete categories in humans and LLMs

Bill Benzon · 9 Dec 2023 23:49 UTC
13 points
1 comment · 2 min read · LW link