Definitions (tag)
Last edit: 31 Jul 2020 6:48 UTC by Yoav Ravid
Posts that attempt to define or clarify the meaning of a concept, word, phrase, or something else.
Broadly human level, cognitively complete AGI (p.b., 6 Aug 2024 9:26 UTC) · 7 points · 0 comments · 1 min read
Superposition is not “just” neuron polysemanticity (LawrenceC, 26 Apr 2024 23:22 UTC) · 64 points · 4 comments · 13 min read
What Do We Mean By “Rationality”? (Eliezer Yudkowsky, 16 Mar 2009 22:33 UTC) · 360 points · 18 comments · 6 min read
Compact vs. Wide Models (Vaniver, 16 Jul 2018 4:09 UTC) · 31 points · 5 comments · 3 min read
What is autonomy, and how does it lead to greater risk from AI? (Davidmanheim, 1 Aug 2023 7:58 UTC) · 30 points · 0 comments · 6 min read
What’s So Bad About Ad-Hoc Mathematical Definitions? (johnswentworth, 15 Mar 2021 21:51 UTC) · 200 points · 58 comments · 7 min read
“AI Alignment” is a Dangerously Overloaded Term (Roko, 15 Dec 2023 14:34 UTC) · 108 points · 100 comments · 3 min read
What Does “Signalling” Mean? (abramdemski, 16 Sep 2020 21:19 UTC) · 38 points · 18 comments · 3 min read
When to use “meta” vs “self-reference”, “recursive”, etc. (Alex_Altair, 6 Apr 2022 4:57 UTC) · 20 points · 5 comments · 5 min read
The Problem With The Current State of AGI Definitions (Yitz, 29 May 2022 13:58 UTC) · 40 points · 22 comments · 8 min read
Logical Foundations of Government Policy (FCCC, 10 Oct 2020 17:05 UTC) · 2 points · 0 comments · 17 min read
Behavioral and mechanistic definitions (often confuse AI alignment discussions) (LawrenceC, 20 Feb 2023 21:33 UTC) · 33 points · 5 comments · 6 min read
Essential Behaviorism Terms (Rivka, 17 Mar 2023 17:41 UTC) · 15 points · 1 comment · 10 min read
What Boston Can Teach Us About What a Woman Is (ymeskhout, 1 May 2023 15:34 UTC) · 18 points · 45 comments · 12 min read
A point of clarification on infohazard terminology (eukaryote, 2 Feb 2020 17:43 UTC) · 50 points · 21 comments · 2 min read · (eukaryotewritesblog.com)
Personhood is a Religious Belief (jan Sijan, 3 May 2023 16:16 UTC) · −42 points · 28 comments · 6 min read
Disambiguating “alignment” and related notions (David Scott Krueger (formerly: capybaralet), 5 Jun 2018 15:35 UTC) · 22 points · 21 comments · 2 min read
Four kinds of problems (jacobjacob, 21 Aug 2018 23:01 UTC) · 35 points · 11 comments · 3 min read
Note on Terminology: “Rationality”, not “Rationalism” (Vladimir_Nesov, 14 Jan 2011 21:21 UTC) · 40 points · 51 comments · 1 min read
An Agent is a Worldline in Tegmark V (komponisto, 12 Jul 2018 5:12 UTC) · 24 points · 12 comments · 2 min read
Flowsheet Logic and Notecard Logic (moridinamael, 9 Sep 2015 16:42 UTC) · 46 points · 28 comments · 3 min read
In favor of tabooing the word “values” and using only “priorities” instead (chaosmage, 25 Oct 2018 22:28 UTC) · 21 points · 11 comments · 2 min read
Definitions are about efficiency and consistency with common language. (Nacruno96, 10 Jul 2023 23:46 UTC) · 1 point · 0 comments · 4 min read
Spaghetti Towers (eukaryote, 22 Dec 2018 5:29 UTC) · 202 points · 36 comments · 3 min read · 1 review · (eukaryotewritesblog.com)
Technical model refinement formalism (Stuart_Armstrong, 27 Aug 2020 11:54 UTC) · 19 points · 0 comments · 6 min read
Glossary of Futurology (mind_bomber, 21 Aug 2015 5:51 UTC) · 3 points · 7 comments · 13 min read
Define Rationality (Marshall, 5 Mar 2009 18:25 UTC) · 1 point · 14 comments · 1 min read
Seeking better name for “Effective Egoism” (DataPacRat, 25 Nov 2016 22:31 UTC) · 14 points · 31 comments · 1 min read
[Question] What Belongs in my Glossary? (Zvi, 2 Nov 2020 19:52 UTC) · 14 points · 8 comments · 1 min read
Beginning at the Beginning (Annoyance, 11 Mar 2009 19:23 UTC) · 5 points · 60 comments · 4 min read
What’s in a name? That which we call a rationalist… (badger, 24 Apr 2009 23:53 UTC) · 8 points · 92 comments · 1 min read
Rationality is winning—or is it? (taw, 7 May 2009 14:51 UTC) · −8 points · 10 comments · 1 min read
My concerns about the term ‘rationalist’ (JamesCole, 4 Jun 2009 15:31 UTC) · 12 points · 34 comments · 2 min read
The two meanings of mathematical terms (JamesCole, 15 Jun 2009 14:30 UTC) · 0 points · 80 comments · 3 min read
ESR’s comments on some EY:OB/LW posts (Eliezer Yudkowsky, 20 Jun 2009 0:16 UTC) · 7 points · 16 comments · 1 min read
Rationalists, Post-Rationalists, And Rationalist-Adjacents (orthonormal, 13 Mar 2020 20:25 UTC) · 79 points · 43 comments · 3 min read
Genocide isn’t Decolonization (robotelvis, 20 Oct 2023 4:14 UTC) · 33 points · 19 comments · 5 min read · (messyprogress.substack.com)
Sentience, Sapience, Consciousness & Self-Awareness: Defining Complex Terms (LukeOnline, 20 Oct 2021 13:48 UTC) · 10 points · 8 comments · 4 min read
Disentangling inner alignment failures (Erik Jenner, 10 Oct 2022 18:50 UTC) · 23 points · 5 comments · 4 min read
Rationality Is Expertise With Reality (Horatio Von Becker, 2 Sep 2021 20:45 UTC) · −14 points · 15 comments · 1 min read
[Question] Formal “left” and “right” definitions (Roman Nastenko, 30 Mar 2024 23:42 UTC) · 1 point · 1 comment · 1 min read
[Question] What are some posthumanist/more-than-human approaches to definitions of intelligence and agency? Particularly in application to AI research. (Eli Hiton, 9 Apr 2024 21:52 UTC) · 1 point · 0 comments · 1 min read
ChatGPT defines 10 concrete terms: generically, for 5- and 11-year-olds, and for a scientist (Bill Benzon, 11 Apr 2024 20:27 UTC) · 3 points · 9 comments · 6 min read
“Open Source AI” is a lie, but it doesn’t have to be (jacobhaimes, 30 Apr 2024 23:10 UTC) · 18 points · 5 comments · 6 min read · (jacob-haimes.github.io)
Let’s Talk About Emergence (jacobhaimes, 7 Jun 2024 19:18 UTC) · 4 points · 0 comments · 7 min read · (www.odysseaninstitute.org)
Relationships among words, metalingual definition, and interpretability (Bill Benzon, 7 Jun 2024 19:18 UTC) · 2 points · 0 comments · 5 min read
What is space? What is time? (Tahp, 7 Jun 2024 22:15 UTC) · 8 points · 3 comments · 7 min read
Common Uses of “Acceptance” (Yi-Yang, 26 Jul 2024 11:18 UTC) · 9 points · 5 comments · 24 min read
Clarifying Alignment Fundamentals Through the Lens of Ontology (eternal/ephemera, 7 Oct 2024 20:57 UTC) · 1 point · 0 comments · 24 min read
Why There Is No Answer to Your Philosophical Question (Bryan Frances, 24 Mar 2023 23:22 UTC) · −12 points · 10 comments · 12 min read
The Useful Idea of Truth (Eliezer Yudkowsky, 2 Oct 2012 18:16 UTC) · 183 points · 544 comments · 14 min read