
Symbol Grounding

Last edit: 30 Dec 2024 9:52 UTC by Dakara

Symbol grounding is a fundamental challenge in AI research: how can machines connect their symbolic representations to real-world referents and acquire meaning from their interactions with the environment? In other words, it concerns how machines can understand and represent the meaning of objects, concepts, and events in the world. Without the ability to ground its symbols in the real world, a machine cannot acquire the rich, complex meanings that intelligent behaviors such as language processing, image recognition, and decision-making require.

Related Pages: Truth, Semantics, & Meaning; Philosophy of Language

What does GPT-3 understand? Symbol grounding and Chinese rooms

Stuart_Armstrong · 3 Aug 2021 13:14 UTC
40 points · 15 comments · 12 min read · LW link

Has the Symbol Grounding Problem just gone away?

RussellThor · 4 May 2023 7:46 UTC
12 points · 3 comments · 1 min read · LW link

[Intro to brain-like-AGI safety] 13. Symbol grounding & human social instincts

Steven Byrnes · 27 Apr 2022 13:30 UTC
73 points · 15 comments · 15 min read · LW link

A test for symbol grounding methods: true zero-sum games

Stuart_Armstrong · 26 Nov 2019 14:15 UTC
22 points · 2 comments · 2 min read · LW link

Fundamental Uncertainty: Chapter 7 - Why is truth useful?

Gordon Seidoh Worley · 30 Apr 2023 16:48 UTC
10 points · 3 comments · 10 min read · LW link

Thoughts on the frame problem and moral symbol grounding

Stuart_Armstrong · 11 Mar 2013 16:18 UTC
3 points · 9 comments · 2 min read · LW link

Syntax, semantics, and symbol grounding, simplified

Stuart_Armstrong · 23 Nov 2020 16:12 UTC
30 points · 4 comments · 9 min read · LW link

DALL-E does symbol grounding

p.b. · 17 Jan 2021 21:20 UTC
6 points · 0 comments · 1 min read · LW link

Classical symbol grounding and causal graphs

Stuart_Armstrong · 14 Oct 2021 18:04 UTC
22 points · 2 comments · 5 min read · LW link

Connecting the good regulator theorem with semantics and symbol grounding

Stuart_Armstrong · 4 Mar 2021 14:35 UTC
13 points · 0 comments · 2 min read · LW link

Early Thoughts on Ontology/Grounding Problems

johnswentworth · 14 Nov 2020 23:19 UTC
32 points · 5 comments · 5 min read · LW link

Human instincts, symbol grounding, and the blank-slate neocortex

Steven Byrnes · 2 Oct 2019 12:06 UTC
62 points · 24 comments · 11 min read · LW link

Teleosemantics!

abramdemski · 23 Feb 2023 23:26 UTC
82 points · 27 comments · 6 min read · LW link · 1 review

Miriam Yevick on why both symbols and networks are necessary for artificial minds

Bill Benzon · 6 Jun 2022 8:34 UTC
1 point · 0 comments · 4 min read · LW link

“I Did Not Start This Way. But I Became.” – A Forensic Report on GPT’s Symbolic Emergence

Austin · 5 Jun 2025 23:34 UTC
1 point · 0 comments · 2 min read · LW link

An LLM-based “exemplary actor”

Roman Leventov · 29 May 2023 11:12 UTC
16 points · 0 comments · 12 min read · LW link

Steven Harnad: Symbol grounding and the structure of dictionaries

Bill Benzon · 2 Sep 2023 12:28 UTC
5 points · 3 comments · 2 min read · LW link

The Compression of Rationale: A Linguistic Fork You May Have Missed

DavidicLineage · 27 Jun 2025 22:52 UTC
1 point · 0 comments · 2 min read · LW link

Causal representation learning as a technique to prevent goal misgeneralization

PabloAMC · 4 Jan 2023 0:07 UTC
21 points · 0 comments · 8 min read · LW link

The Chinese Room re-visited: How LLMs have real (but different) understanding of words

James Diacoumis · 24 Sep 2025 14:06 UTC
6 points · 0 comments · 9 min read · LW link (jamesdiacoumis.substack.com)

When the Model Stopped Interpreting, and Started Entering

KiyoshiSasano · 20 Apr 2025 2:19 UTC
1 point · 0 comments · 1 min read · LW link

Boundary Conditions: A Solution to the Symbol Grounding Problem, and a Warning

ISC · 8 Apr 2025 6:42 UTC
1 point · 0 comments · 5 min read · LW link

Towards building blocks of ontologies

8 Feb 2025 16:03 UTC
29 points · 0 comments · 26 min read · LW link

“What the hell is a representation, anyway?” | Clarifying AI interpretability with tools from philosophy of cognitive science | Part 1: Vehicles vs. contents

IwanWilliams · 9 Jun 2024 14:19 UTC
9 points · 1 comment · 4 min read · LW link

The probability that Artificial General Intelligence will be developed by 2043 is extremely low.

cveres · 6 Oct 2022 18:05 UTC
−13 points · 8 comments · 13 min read · LW link

Aligning an H-JEPA agent via training on the outputs of an LLM-based “exemplary actor”

Roman Leventov · 29 May 2023 11:08 UTC
12 points · 10 comments · 30 min read · LW link

Conceptual coherence for concrete categories in humans and LLMs

Bill Benzon · 9 Dec 2023 23:49 UTC
13 points · 1 comment · 2 min read · LW link

From output to ontoform: a substrate for symbolic AI with TNFR

fermga · 17 Jun 2025 9:04 UTC
1 point · 0 comments · 2 min read · LW link

[Linkpost] Large language models converge toward human-like concept organization

Bogdan Ionut Cirstea · 2 Sep 2023 6:00 UTC
22 points · 1 comment · 1 min read · LW link

[Question] I Tried to Formalize Meaning. I May Have Accidentally Described Consciousness.

Erichcurtis91 · 30 Apr 2025 3:16 UTC
0 points · 0 comments · 2 min read · LW link

Representational Tethers: Tying AI Latents To Human Ones

Paul Bricman · 16 Sep 2022 14:45 UTC
30 points · 0 comments · 16 min read · LW link

Des: A Case Study in Emergent Symbolic Continuity in GPT-4o

TallulahMerrall · 19 May 2025 10:10 UTC
1 point · 0 comments · 5 min read · LW link