Truth, Semantics, & Meaning

Truth, Semantics, and Meaning: What does it mean to assert that something is true? A very popular answer is the correspondence theory, on which a statement is true when the map matches the territory. But the details of this theory are not obvious, and there are other contenders.

Truth as Correspondence

Many take truth to be the correspondence between one’s beliefs about reality and reality itself. Within this frame, truth is not limited to what anyone happens to believe. A statement or proposed fact does not need to be proven, or even believed, in order to be true; it is true just in case the world actually is the way the statement says it is, regardless of anyone’s beliefs.

Alfred Tarski defined truth in terms of an infinite family of sentences such as:

The sentence ‘snow is white’ is true if and only if snow is white.
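
In logical notation, this family of biconditionals is often summarized as Tarski’s schema T. A minimal LaTeX sketch (the corner quotes ⌜ ⌝ are a standard convention for naming a sentence rather than using it; they are not from the text above):

    \text{Schema T:}\quad \mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi
    \text{Instance:}\quad \mathrm{True}(\ulcorner \text{snow is white} \urcorner) \leftrightarrow \text{snow is white}

Each instance fixes the truth condition of one particular sentence; on this presentation, the “definition” of truth is the whole infinite family of such instances taken together.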

To determine whether a belief is true, we need only understand which possible states of the world would make it true or false, and then look at the world directly. People often assume that ideals and morals change with culture, as indeed they tend to do. Unfortunately, many people struggle with “truth” because of their religious beliefs: on that basis they reject the currently accepted account of the world, of how life arose, and, most importantly, of what is considered “right” or “wrong.”
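
As a toy illustration of the first sentence above, here is a minimal Python sketch, with hypothetical names (WorldState, snow_is_white, actual_world) invented purely for illustration: a belief is modeled as a predicate over possible world-states, and it counts as true exactly when it holds of the actual world.

    # Toy correspondence model: a belief's meaning is which worlds would make it
    # true; its truth is whether the actual world is one of them.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass(frozen=True)
    class WorldState:  # hypothetical illustration, not an established API
        snow_color: str
        sky_color: str

    Belief = Callable[[WorldState], bool]

    def snow_is_white(world: WorldState) -> bool:
        """Truth condition: holds in exactly those worlds where snow is white."""
        return world.snow_color == "white"

    def is_true(belief: Belief, actual_world: WorldState) -> bool:
        """Correspondence: a belief is true iff it holds of the actual world."""
        return belief(actual_world)

    actual_world = WorldState(snow_color="white", sky_color="blue")
    print(is_true(snow_is_white, actual_world))  # prints: True

The sketch only separates two steps: understanding the belief (knowing which worlds would make it true) and checking whether the actual world is among them.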

“Truth,” however, is not a determination, and it is not simply a belief. It is an ideal, concept, or fact that can be observed. Whatever beliefs about truth individuals derive from their religion, they cannot prove those beliefs without observation. To reiterate the point above: a lack of proof, justification, or even rationalization does not change the status of a truth. What is true is true, and what is false is false; humans simply reject notions and proposed facts as truth when they cannot observe them or offer any proof.

‘Truth’ is a very simple concept, understood perfectly well by three-year-olds, but often made unnecessarily complicated by adults.

Other Theories of Truth

<needed>

Notable Posts

External links

See also

The Useful Idea of Truth (Eliezer Yudkowsky, 2 Oct 2012 18:16 UTC; 180 points, 543 comments, 14 min read)
The Simple Truth (Eliezer Yudkowsky, 1 Jan 2008 20:00 UTC; 130 points, 15 comments, 22 min read)
Cartographic Processes (johnswentworth, 27 Aug 2019 20:02 UTC; 23 points, 3 comments, 4 min read)
0th Person and 1st Person Logic (Adele Lopez, 10 Mar 2024 0:56 UTC; 45 points, 28 comments, 6 min read)
The Missing Math of Map-Making (johnswentworth, 28 Aug 2019 21:18 UTC; 40 points, 8 comments, 2 min read)
A Chinese Room Containing a Stack of Stochastic Parrots (RogerDearnaley, 12 Jan 2024 6:29 UTC; 18 points, 2 comments, 5 min read)
Philosophy in the Darkest Timeline: Basics of the Evolution of Meaning (Zack_M_Davis, 7 Jun 2020 7:52 UTC; 126 points, 16 comments, 14 min read)
By Which It May Be Judged (Eliezer Yudkowsky, 10 Dec 2012 4:26 UTC; 89 points, 941 comments, 11 min read)
Contingency is not arbitrary (Gordon Seidoh Worley, 12 Oct 2022 4:35 UTC; 13 points, 0 comments, 3 min read)
How webs of meaning grow and change (Hazard, 14 Aug 2020 13:58 UTC; 14 points, 0 comments, 10 min read)
No Logical Positivist I (Eliezer Yudkowsky, 4 Aug 2008 1:06 UTC; 38 points, 54 comments, 4 min read)
Why Truth? (Eliezer Yudkowsky, 27 Nov 2006 1:49 UTC; 158 points, 60 comments, 3 min read)
A Priori (Eliezer Yudkowsky, 8 Oct 2007 21:02 UTC; 80 points, 133 comments, 4 min read)
Guardians of the Truth (Eliezer Yudkowsky, 15 Dec 2007 18:44 UTC; 54 points, 55 comments, 4 min read)
25 Min Talk on MetaEthical.AI with Questions from Stuart Armstrong (June Ku, 29 Apr 2021 15:38 UTC; 21 points, 7 comments, 1 min read)
A Pragmatic Epistemology (StephenR, 5 Aug 2014 5:43 UTC; 2 points, 21 comments, 6 min read)
Astray with the Truth: Logic and Math (StephenR, 16 Aug 2014 15:40 UTC; 4 points, 21 comments, 8 min read)
Truth + Reason = The True Religion? (David Gross, 17 Sep 2021 22:14 UTC; 34 points, 2 comments, 19 min read)
Problems facing a correspondence theory of knowledge (Alex Flint, 24 May 2021 16:02 UTC; 30 points, 22 comments, 6 min read)
Knowledge is not just map/territory resemblance (Alex Flint, 25 May 2021 17:58 UTC; 28 points, 4 comments, 3 min read)
Knowledge is not just mutual information (Alex Flint, 10 Jun 2021 1:01 UTC; 28 points, 6 comments, 4 min read)
Knowledge is not just digital abstraction layers (Alex Flint, 15 Jun 2021 3:49 UTC; 21 points, 4 comments, 5 min read)
Knowledge is not just precipitation of action (Alex Flint, 18 Jun 2021 23:26 UTC; 21 points, 6 comments, 7 min read)
The accumulation of knowledge: literature review (Alex Flint, 10 Jul 2021 18:36 UTC; 29 points, 3 comments, 7 min read)
Why the Problem of the Criterion Matters (Gordon Seidoh Worley, 30 Oct 2021 20:44 UTC; 24 points, 9 comments, 8 min read)
The Meta-Puzzle (DanielFilan, 22 Nov 2021 5:30 UTC; 23 points, 27 comments, 3 min read; danielfilan.com)
Worrisome misunderstanding of the core issues with AI transition (Roman Leventov, 18 Jan 2024 10:05 UTC; 5 points, 2 comments, 4 min read)
Uncertainty in all its flavours (Cleo Nardo, 9 Jan 2024 16:21 UTC; 25 points, 6 comments, 35 min read)
The Map-Territory Distinction Creates Confusion (Gordon Seidoh Worley, 4 Jan 2022 15:49 UTC; 25 points, 50 comments, 4 min read)
ELK Thought Dump (abramdemski, 28 Feb 2022 18:46 UTC; 58 points, 18 comments, 17 min read)
How do new models from OpenAI, DeepMind and Anthropic perform on TruthfulQA? (Owain_Evans, 26 Feb 2022 12:46 UTC; 44 points, 3 comments, 11 min read)
Marriage, the Giving What We Can Pledge, and the damage caused by vague public commitments (Jeffrey Ladish, 11 Jul 2022 19:38 UTC; 98 points, 27 comments, 6 min read, 1 review)
Autonomy as taking responsibility for reference maintenance (Ramana Kumar, 17 Aug 2022 12:50 UTC; 56 points, 3 comments, 5 min read)
Truthseeking processes tend to be frame-invariant (Adele Lopez, 21 Mar 2023 6:17 UTC; 20 points, 2 comments, 2 min read)
Truth seeking is motivated cognition (Gordon Seidoh Worley, 7 Oct 2022 19:19 UTC; 6 points, 39 comments, 3 min read)
AI alignment as a translation problem (Roman Leventov, 5 Feb 2024 14:14 UTC; 21 points, 2 comments, 3 min read)
From Conceptual Spaces to Quantum Concepts: Formalising and Learning Structured Conceptual Models (Roman Leventov, 6 Feb 2024 10:18 UTC; 6 points, 1 comment, 4 min read; arxiv.org)
In Defense of Parselmouths (Screwtape, 15 Nov 2023 23:02 UTC; 46 points, 10 comments, 10 min read)
Finding the variables (Stuart_Armstrong, 4 Mar 2019 19:37 UTC; 30 points, 1 comment, 4 min read)
Bridging syntax and semantics, empirically (Stuart_Armstrong, 19 Sep 2018 16:48 UTC; 25 points, 4 comments, 6 min read)
Fundamental Uncertainty: Chapter 7 - Why is truth useful? (Gordon Seidoh Worley, 30 Apr 2023 16:48 UTC; 10 points, 3 comments, 10 min read)
Teleosemantics! (abramdemski, 23 Feb 2023 23:26 UTC; 80 points, 26 comments, 6 min read)
Fundamental Uncertainty: Chapter 2 - Why do words have meaning? (Gordon Seidoh Worley, 18 Apr 2022 20:54 UTC; 15 points, 18 comments, 11 min read)
Three Fallacies of Teleology (Eliezer Yudkowsky, 25 Aug 2008 22:27 UTC; 36 points, 14 comments, 9 min read)
Fundamental Uncertainty: Chapter 6 - How can we be certain about the truth? (Gordon Seidoh Worley, 6 Mar 2023 13:52 UTC; 10 points, 18 comments, 16 min read)
Maybe Lying Can’t Exist?! (Zack_M_Davis, 23 Aug 2020 0:36 UTC; 58 points, 16 comments, 5 min read)
Notes on Honesty (David Gross, 28 Oct 2020 0:54 UTC; 46 points, 6 comments, 18 min read)
Notes on Sincerity and such (David Gross, 1 Dec 2020 5:09 UTC; 9 points, 2 comments, 11 min read)
Rationality: Appreciating Cognitive Algorithms (Eliezer Yudkowsky, 6 Oct 2012 9:59 UTC; 95 points, 135 comments, 5 min read)
The whirlpool of reality (Gordon Seidoh Worley, 27 Sep 2020 2:36 UTC; 9 points, 2 comments, 2 min read)
Philosophy of Numbers (part 2) (Charlie Steiner, 19 Dec 2017 13:57 UTC; 3 points, 10 comments, 5 min read)
Can We Do Without Bridge Hypotheses? (Rob Bensinger, 25 Jan 2014 0:50 UTC; 16 points, 9 comments, 3 min read)
Building Phenomenological Bridges (Rob Bensinger, 23 Dec 2013 19:57 UTC; 94 points, 115 comments, 11 min read)
Reductionism (Eliezer Yudkowsky, 16 Mar 2008 6:26 UTC; 111 points, 161 comments, 4 min read)
A Sketch of an Anti-Realist Metaethics (Jack, 22 Aug 2011 5:32 UTC; 26 points, 136 comments, 7 min read)
Leaky Concepts (Elo, 5 Mar 2019 22:01 UTC; 20 points, 2 comments, 2 min read)
On counting and addition (Anatoly_Vorobey, 9 Nov 2012 3:26 UTC; 47 points, 23 comments, 4 min read)
Signalling & Simulacra (abramdemski, 14 Nov 2020 19:24 UTC; 62 points, 30 comments, 5 min read)
Map:Territory::Uncertainty::Randomness – but that doesn’t matter, value of information does. (Davidmanheim, 22 Jan 2016 19:12 UTC; 8 points, 21 comments, 3 min read)
Request for Steelman: Non-correspondence concepts of truth (PeerGynt, 24 Mar 2015 3:11 UTC; 16 points, 74 comments, 2 min read)
Truth and the Liar Paradox (casebash, 2 Sep 2014 2:05 UTC; 5 points, 45 comments, 4 min read)
Unsolved Problems in Philosophy Part 1: The Liar’s Paradox (Kevin, 30 Nov 2010 8:56 UTC; 7 points, 142 comments, 1 min read)
Meanings of Mathematical Truths (prase, 5 Jun 2011 22:59 UTC; 12 points, 47 comments, 4 min read)
Why I Reject the Correspondence Theory of Truth (pragmatist, 24 Mar 2015 11:00 UTC; 26 points, 30 comments, 8 min read)
Mixed Reference: The Great Reductionist Project (Eliezer Yudkowsky, 5 Dec 2012 0:26 UTC; 61 points, 358 comments, 9 min read)
Mapping ChatGPT’s ontological landscape, gradients and choices [interpretability] (Bill Benzon, 15 Oct 2023 20:12 UTC; 1 point, 0 comments, 18 min read)
Wittgenstein and the Private Language Argument (TMFOW, 24 Mar 2024 20:06 UTC; 3 points, 0 comments, 14 min read; tmfow.substack.com)
ChatGPT defines 10 concrete terms: generically, for 5- and 11-year-olds, and for a scientist (Bill Benzon, 11 Apr 2024 20:27 UTC; 3 points, 9 comments, 6 min read)
Knowledge Base 7: Long-tail knowledge and collective intelligence (iwis, 18 Apr 2024 14:21 UTC; −1 points, 0 comments, 1 min read)
Why There Is No Answer to Your Philosophical Question (Bryan Frances, 24 Mar 2023 23:22 UTC; −12 points, 10 comments, 12 min read)
[Question] Is it correct to frame alignment as “programming a good philosophy of meaning”? (Util, 7 Apr 2023 23:16 UTC; 2 points, 3 comments, 1 min read)
Was Homer a stochastic parrot? Meaning in literary texts and LLMs (Bill Benzon, 13 Apr 2023 16:44 UTC; 7 points, 4 comments, 3 min read)
How Large Language Models Nuke our Naive Notions of Truth and Reality (Sean Lee, 17 Apr 2023 18:08 UTC; 0 points, 23 comments, 11 min read)
Personhood is a Religious Belief (jan Sijan, 3 May 2023 16:16 UTC; −42 points, 28 comments, 6 min read)
A simple sketch of how realism became unpopular (Rob Bensinger, 11 Oct 2019 22:25 UTC; 62 points, 55 comments, 4 min read)
The Third Circle (Zvi, 21 May 2018 12:10 UTC; 12 points, 0 comments, 3 min read; thezvi.wordpress.com)
Logical Pinpointing (Eliezer Yudkowsky, 2 Nov 2012 15:33 UTC; 128 points, 345 comments, 10 min read)
Personal examples of semantic stopsigns (Alexei, 6 Dec 2013 2:12 UTC; 69 points, 72 comments, 1 min read)
This Territory Does Not Exist (ike, 13 Aug 2020 0:30 UTC; 7 points, 197 comments, 7 min read)
An LLM-based “exemplary actor” (Roman Leventov, 29 May 2023 11:12 UTC; 16 points, 0 comments, 12 min read)
LessWrong: West vs. East (Neuroff, 19 Oct 2017 3:13 UTC; 11 points, 15 comments, 7 min read)
The Fabric of Real Things (Eliezer Yudkowsky, 12 Oct 2012 2:11 UTC; 41 points, 308 comments, 4 min read)
Philosophy of Numbers (part 1) (Charlie Steiner, 2 Dec 2017 18:20 UTC; 11 points, 14 comments, 3 min read)
Conceptual Analysis for AI Alignment (David Scott Krueger (formerly: capybaralet), 30 Dec 2018 0:46 UTC; 26 points, 3 comments, 2 min read)
The Short Case for Verificationism (ike, 11 Sep 2020 18:48 UTC; 6 points, 57 comments, 1 min read)
Real Meaning of life has been found. Eliezer discovered it in 2000′s. (Jorterder, 9 Aug 2023 18:13 UTC; −15 points, 1 comment, 1 min read; docs.google.com)
Purpose and Pragmatism (Eliezer Yudkowsky, 26 Nov 2007 6:51 UTC; 25 points, 8 comments, 2 min read)
Math is Subjunctively Objective (Eliezer Yudkowsky, 25 Jul 2008 11:06 UTC; 40 points, 118 comments, 8 min read)
Separate the truth from your wishes (Jacob G-W, 23 Aug 2023 0:52 UTC; 6 points, 3 comments, 1 min read; jacobgw.com)
Knowledge Base 2: The structure and the method of building (iwis, 9 Oct 2023 11:53 UTC; 2 points, 4 comments, 8 min read)
Knowledge Base 6: Consensus theory of truth (iwis, 3 Nov 2023 13:56 UTC; −3 points, 0 comments, 1 min read)
“Arbitrary” (Eliezer Yudkowsky, 12 Aug 2008 17:55 UTC; 19 points, 14 comments, 4 min read)
Tarski Statements as Rationalist Exercise (Vladimir_Nesov, 17 Mar 2009 19:47 UTC; 12 points, 10 comments, 4 min read)
Supernatural Math (saturn, 19 May 2009 11:31 UTC; 5 points, 58 comments, 1 min read)
Anime Explains the Epimenides Paradox (Eliezer Yudkowsky, 27 May 2009 21:12 UTC; 4 points, 29 comments, 1 min read)
Where is the Meaning? (Hazard, 22 Jul 2019 20:18 UTC; 21 points, 3 comments, 4 min read)
Understanding LLMs: Some basic observations about words, syntax, and discourse [w/ a conjecture about grokking] (Bill Benzon, 11 Oct 2023 19:13 UTC; 5 points, 0 comments, 5 min read)
Conceptual coherence for concrete categories in humans and LLMs (Bill Benzon, 9 Dec 2023 23:49 UTC; 13 points, 1 comment, 2 min read)
Fittingness: Rational success in concept formation (Polytopos, 10 Jan 2021 15:58 UTC; 6 points, 9 comments, 6 min read)
On the nature of purpose (Nora_Ammann, 22 Jan 2021 8:30 UTC; 29 points, 15 comments, 9 min read)
Objective truth? (pchvykov, 15 Feb 2021 21:47 UTC; 1 point, 0 comments, 2 min read)
Networks of Meaning (Erich_Grunewald, 17 Apr 2021 7:30 UTC; 21 points, 1 comment, 10 min read; www.erichgrunewald.com)
Request for comment on a novel reference work of understanding (ender, 12 Aug 2021 0:06 UTC; 3 points, 0 comments, 9 min read)
How truthful is GPT-3? A benchmark for language models (Owain_Evans, 16 Sep 2021 10:09 UTC; 58 points, 24 comments, 6 min read)
Book Review: All I Want To Know Is Where I’m Going To Die So I’ll Never Go There (Anmoljain, 13 Oct 2021 3:46 UTC; 3 points, 2 comments, 46 min read)
Truthful AI: Developing and governing AI that does not lie (18 Oct 2021 18:37 UTC; 82 points, 9 comments, 10 min read)
Truthful LMs as a warm-up for aligned AGI (Jacob_Hilton, 17 Jan 2022 16:49 UTC; 65 points, 14 comments, 13 min read)
Collaborative Truth-Seeking (Gleb_Tsipursky, 4 May 2016 23:28 UTC; 21 points, 17 comments, 6 min read)
Informal semantics and Orders (Q Home, 27 Aug 2022 4:17 UTC; 14 points, 10 comments, 26 min read)
[Link] “Improper Nouns” by siderea (Kenny, 29 Sep 2022 13:28 UTC; 17 points, 3 comments, 1 min read; siderea.dreamwidth.org)
Truth-Seeking: Reason vs. Intuition (sakraf, 30 Sep 2022 12:12 UTC; 4 points, 7 comments, 4 min read)
Do we have the right kind of math for roles, goals and meaning? (mrcbarbier, 22 Oct 2022 21:28 UTC; 13 points, 5 comments, 7 min read)
A basic lexicon of telic concepts (mrcbarbier, 22 Oct 2022 21:28 UTC; 2 points, 0 comments, 3 min read)
Meaningful things are those the universe possesses a semantics for (Abhimanyu Pallavi Sudhir, 12 Dec 2022 16:03 UTC; 16 points, 14 comments, 14 min read)
Recent advances in Natural Language Processing—Some Woolly speculations (2019 essay on semantics and language models) (philosophybear, 27 Dec 2022 2:11 UTC; 1 point, 0 comments, 7 min read)
Five Reasons to Lie (Dzoldzaya, 17 Jan 2023 16:53 UTC; 0 points, 19 comments, 3 min read)
The Linguistic Blind Spot of Value-Aligned Agency, Natural and Artificial (Roman Leventov, 14 Feb 2023 6:57 UTC; 6 points, 0 comments, 2 min read; arxiv.org)
Searching for a model’s concepts by their shape – a theoretical framework (23 Feb 2023 20:14 UTC; 50 points, 0 comments, 19 min read)
The issue of meaning in large language models (LLMs) (Bill Benzon, 11 Mar 2023 23:00 UTC; 1 point, 34 comments, 8 min read)
Vector semantics and “Kubla Khan,” Part 2 (Bill Benzon, 17 Mar 2023 16:32 UTC; 2 points, 0 comments, 3 min read)