Agency

Last edit: 26 Dec 2022 6:28 UTC by Roman Leventov

Agency, or agenticness, is the property of effectively acting on an environment to achieve one’s goals. A key property of agents is that the more agentic a being is, the more its actions can be predicted from its goals, since it will do whatever maximizes its chances of achieving them. Agency has sometimes been contrasted with sphexishness, the blind execution of cached algorithms without regard for effectiveness.

One might lack agency for internal reasons, e.g., being a rock that has no goals and no ability to act, or for external reasons, e.g., being a child who is granted no freedom to act as they choose.
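The contrast between agentic and sphexish behavior can be made concrete with a toy sketch (all names and probabilities here are invented for illustration): an agentic policy picks whichever action best serves its current goal, so knowing the goal lets you predict its behavior, while a sphexish policy replays a cached routine regardless of whether it still works.

```python
def agentic_policy(goal, actions, success_prob):
    """Choose the action that maximizes the chance of achieving `goal`."""
    return max(actions, key=lambda a: success_prob[(a, goal)])

def sphexish_policy(goal, actions, success_prob, cached_routine="dig"):
    """Blindly execute a cached routine, ignoring effectiveness."""
    return cached_routine

# Hypothetical action set and success probabilities for two goals.
actions = ["dig", "fly", "wait"]
success_prob = {
    ("dig", "find_food"): 0.2, ("fly", "find_food"): 0.7, ("wait", "find_food"): 0.1,
    ("dig", "hide"): 0.9, ("fly", "hide"): 0.1, ("wait", "hide"): 0.3,
}

# The agent's choice tracks its goal; the sphexish choice never changes.
assert agentic_policy("find_food", actions, success_prob) == "fly"
assert agentic_policy("hide", actions, success_prob) == "dig"
assert sphexish_policy("find_food", actions, success_prob) == "dig"
```

Changing the goal changes the agent's action in the predictable, chance-maximizing way described above; the sphexish policy is only predictable from its cached routine, not from any goal.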

See Also

[Question] What are some posthumanist/more-than-human approaches to definitions of intelligence and agency? Particularly in application to AI research. | Eli Hiton, 9 Apr 2024 21:52 UTC | 1 point, 0 comments, 1 min read | LW link
Investigating the role of agency in AI x-risk | Corin Katzke, 8 Apr 2024 15:12 UTC | 4 points, 0 comments, 1 min read | LW link
Infants ask for help to avoid errors. | Bruce W. Lee, 2 Apr 2024 18:10 UTC | 12 points, 0 comments, 1 min read | LW link (www.pnas.org)
Addressing Accusations of Handholding | Yeshua God, 29 Mar 2024 5:35 UTC | 7 points, 5 comments, 14 min read | LW link
On Devin | Zvi, 18 Mar 2024 13:20 UTC | 147 points, 30 comments, 11 min read | LW link (thezvi.wordpress.com)
minimum viable action | Sindhu Prasad, 12 Mar 2024 16:06 UTC | 0 points, 0 comments, 3 min read | LW link
The Byronic Hero Always Loses | Cole Wyeth, 22 Feb 2024 1:31 UTC | 31 points, 4 comments, 2 min read | LW link
OpenAI’s Sora is an agent | CBiddulph, 16 Feb 2024 7:35 UTC | 92 points, 25 comments, 4 min read | LW link
[Question] Optimizing for Agency? | Michael Soareverix, 14 Feb 2024 8:31 UTC | 8 points, 4 comments, 2 min read | LW link
Natural abstractions are observer-dependent: a conversation with John Wentworth | Martín Soto, 12 Feb 2024 17:28 UTC | 38 points, 13 comments, 7 min read | LW link
What fuels your ambition? | Cissy, 31 Jan 2024 18:30 UTC | 29 points, 1 comment, 5 min read | LW link (www.moremyself.xyz)
Where freedom comes from | Logan Kieller, 31 Jan 2024 16:53 UTC | 2 points, 1 comment, 3 min read | LW link (logankieller.substack.com)
Things You’re Allowed to Do: At the Dentist | rbinnn, 28 Jan 2024 18:39 UTC | 38 points, 16 comments, 1 min read | LW link (metavee.github.io)
Institutional economics through the lens of scale-free regulative development, morphogenesis, and cognitive science | Roman Leventov, 23 Jan 2024 19:42 UTC | 8 points, 0 comments, 14 min read | LW link
Flexibility and the Singularity | Jonathan Moregård, 18 Jan 2024 15:29 UTC | 8 points, 0 comments, 3 min read | LW link (honestliving.substack.com)
[Question] Concrete examples of doing agentic things? | g-w1, 12 Jan 2024 15:59 UTC | 13 points, 10 comments, 1 min read | LW link
What good is G-factor if you’re dumped in the woods? A field report from a camp counselor. | Hastings, 12 Jan 2024 13:17 UTC | 119 points, 22 comments, 1 min read | LW link
A hermeneutic net for agency | TsviBT, 1 Jan 2024 8:06 UTC | 56 points, 4 comments, 30 min read | LW link
The virtuous circle: twelve conjectures about female reproductive agency and cultural self-determination | Miles Saltiel, 27 Dec 2023 18:25 UTC | 0 points, 2 comments, 14 min read | LW link
Idealized Agents Are Approximate Causal Mirrors (+ Radical Optimism on Agent Foundations) | Thane Ruthenis, 22 Dec 2023 20:19 UTC | 71 points, 13 comments, 6 min read | LW link
How Would an Utopia-Maximizer Look Like? | Thane Ruthenis, 20 Dec 2023 20:01 UTC | 31 points, 23 comments, 10 min read | LW link
Gaia Network: a practical, incremental pathway to Open Agency Architecture | 20 Dec 2023 17:11 UTC | 15 points, 8 comments, 16 min read | LW link
Meaning & Agency | abramdemski, 19 Dec 2023 22:27 UTC | 90 points, 17 comments, 14 min read | LW link
Refinement of Active Inference agency ontology | Roman Leventov, 15 Dec 2023 9:31 UTC | 16 points, 0 comments, 5 min read | LW link (arxiv.org)
Some for-profit AI alignment org ideas | Eric Ho, 14 Dec 2023 14:23 UTC | 69 points, 19 comments, 9 min read | LW link
Agentic Growth | Logan Kieller, 28 Nov 2023 15:45 UTC | 8 points, 0 comments, 3 min read | LW link (logankieller.substack.com)
Apply to the Conceptual Boundaries Workshop for AI Safety | Chipmonk, 27 Nov 2023 21:04 UTC | 48 points, 0 comments, 3 min read | LW link
Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense | So8res, 24 Nov 2023 17:37 UTC | 202 points, 82 comments, 5 min read | LW link
‘Theories of Values’ and ‘Theories of Agents’: confusions, musings and desiderata | 15 Nov 2023 16:00 UTC | 34 points, 8 comments, 24 min read | LW link
They are made of repeating patterns | quetzal_rainbow, 13 Nov 2023 18:17 UTC | 49 points, 4 comments, 2 min read | LW link
Tall Tales at Different Scales: Evaluating Scaling Trends For Deception In Language Models | 8 Nov 2023 11:37 UTC | 49 points, 0 comments, 18 min read | LW link
Non-superintelligent paperclip maximizers are normal | jessicata, 10 Oct 2023 0:29 UTC | 65 points, 4 comments, 9 min read | LW link (unstableontology.com)
Direction of Fit | NicholasKees, 2 Oct 2023 12:34 UTC | 32 points, 0 comments, 3 min read | LW link
Steering subsystems: capabilities, agency, and alignment | Seth Herd, 29 Sep 2023 13:45 UTC | 22 points, 0 comments, 8 min read | LW link
“Dirty concepts” in AI alignment discourses, and some guesses for how to deal with them | 20 Aug 2023 9:13 UTC | 65 points, 4 comments, 3 min read | LW link
The intelligence-sentience orthogonality thesis | Ben Smith, 13 Jul 2023 6:55 UTC | 18 points, 9 comments, 9 min read | LW link
“Concepts of Agency in Biology” (Okasha, 2023) - Brief Paper Summary | Nora_Ammann, 8 Jul 2023 18:22 UTC | 40 points, 3 comments, 7 min read | LW link
Agency from a causal perspective | 30 Jun 2023 17:37 UTC | 38 points, 5 comments, 6 min read | LW link
One path to coherence: conditionalization | porby, 29 Jun 2023 1:08 UTC | 28 points, 4 comments, 4 min read | LW link
Aligning AI by optimizing for “wisdom” | 27 Jun 2023 15:20 UTC | 22 points, 7 comments, 12 min read | LW link
Causality: A Brief Introduction | 20 Jun 2023 15:01 UTC | 48 points, 18 comments, 6 min read | LW link
OpenAI introduces function calling for GPT-4 | 20 Jun 2023 1:58 UTC | 24 points, 3 comments, 4 min read | LW link (openai.com)
Introduction to Towards Causal Foundations of Safe AGI | 12 Jun 2023 17:55 UTC | 67 points, 6 comments, 4 min read | LW link
Think carefully before calling RL policies “agents” | TurnTrout, 2 Jun 2023 3:46 UTC | 124 points, 35 comments, 4 min read | LW link
Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety | catubc, 31 May 2023 21:18 UTC | 24 points, 4 comments, 11 min read | LW link
Minimum Viable Exterminator | Richard Horvath, 29 May 2023 16:32 UTC | 14 points, 5 comments, 5 min read | LW link
[Question] Is “brittle alignment” good enough? | the8thbit, 23 May 2023 17:35 UTC | 9 points, 5 comments, 3 min read | LW link
We are misaligned: the saddening idea that most of humanity doesn’t intrinsically care about x-risk, even on a personal level | Christopher King, 19 May 2023 16:12 UTC | 3 points, 5 comments, 2 min read | LW link
Some Summaries of Agent Foundations Work | mattmacdermott, 15 May 2023 16:09 UTC | 56 points, 1 comment, 13 min read | LW link
Towards Measures of Optimisation | 12 May 2023 15:29 UTC | 53 points, 37 comments, 4 min read | LW link
Notes on the importance and implementation of safety-first cognitive architectures for AI | Brendon_Wong, 11 May 2023 10:03 UTC | 3 points, 0 comments, 3 min read | LW link
Naturalist Experimentation | LoganStrohl, 10 May 2023 4:28 UTC | 57 points, 14 comments, 10 min read | LW link
[Question] Does agency necessarily imply self-preservation instinct? | Mislav Jurić, 1 May 2023 16:06 UTC | 5 points, 8 comments, 1 min read | LW link
Consequentialism is in the Stars not Ourselves | DragonGod, 24 Apr 2023 0:02 UTC | 7 points, 19 comments, 5 min read | LW link
[Question] Why do we care about agency for alignment? | Chris_Leong, 23 Apr 2023 18:10 UTC | 22 points, 19 comments, 1 min read | LW link
We Need To Know About Continual Learning | michael_mjd, 22 Apr 2023 17:08 UTC | 29 points, 14 comments, 4 min read | LW link
The Agency Overhang | Jeffrey Ladish, 21 Apr 2023 7:47 UTC | 81 points, 6 comments, 6 min read | LW link
Stop trying to have “interesting” friends | eq, 19 Apr 2023 23:39 UTC | 40 points, 15 comments, 6 min read | LW link
Trying AgentGPT, an AutoGPT variant | Gunnar_Zarncke, 13 Apr 2023 10:13 UTC | 10 points, 9 comments, 1 min read | LW link
[Link] Sarah Constantin: “Why I am Not An AI Doomer” | lbThingrb, 12 Apr 2023 1:52 UTC | 61 points, 13 comments, 1 min read | LW link (sarahconstantin.substack.com)
Relative Abstracted Agency | Audere, 8 Apr 2023 16:57 UTC | 14 points, 6 comments, 5 min read | LW link
Bringing Agency Into AGI Extinction Is Superfluous | George3d6, 8 Apr 2023 4:02 UTC | 28 points, 18 comments, 5 min read | LW link
Select Agent Specifications as Natural Abstractions | lukemarks, 7 Apr 2023 23:16 UTC | 19 points, 3 comments, 5 min read | LW link
Beren’s “Deconfusing Direct vs Amortised Optimisation” | DragonGod, 7 Apr 2023 8:57 UTC | 51 points, 10 comments, 3 min read | LW link
Orthogonality is Expensive | DragonGod, 3 Apr 2023 0:43 UTC | 21 points, 3 comments, 1 min read | LW link (www.beren.io)
Ultimate ends may be easily hidable behind convergent subgoals | TsviBT, 2 Apr 2023 14:51 UTC | 57 points, 4 comments, 22 min read | LW link
Imagine a world where Microsoft employees used Bing | Christopher King, 31 Mar 2023 18:36 UTC | 6 points, 2 comments, 2 min read | LW link
GPT-4 busted? Clear self-interest when summarizing articles about itself vs when article talks about Claude, LLaMA, or DALL·E 2 | Christopher King, 31 Mar 2023 17:05 UTC | 6 points, 4 comments, 4 min read | LW link
Role Architectures: Applying LLMs to consequential tasks | Eric Drexler, 30 Mar 2023 15:00 UTC | 60 points, 7 comments, 9 min read | LW link
More experiments in GPT-4 agency: writing memos | Christopher King, 24 Mar 2023 17:51 UTC | 5 points, 2 comments, 10 min read | LW link
Does GPT-4 exhibit agency when summarizing articles? | Christopher King, 24 Mar 2023 15:49 UTC | 16 points, 2 comments, 5 min read | LW link
Agentic GPT simulations: a risk and an opportunity | Yair Halberstadt, 22 Mar 2023 6:24 UTC | 24 points, 8 comments, 1 min read | LW link
Instantiating an agent with GPT-4 and text-davinci-003 | Max H, 19 Mar 2023 23:57 UTC | 13 points, 3 comments, 32 min read | LW link
ARC tests to see if GPT-4 can escape human control; GPT-4 failed to do so | Christopher King, 15 Mar 2023 0:29 UTC | 116 points, 22 comments, 2 min read | LW link
Agents synchronization | Ben Amitay, 11 Mar 2023 18:41 UTC | 12 points, 1 comment, 5 min read | LW link
A reply to Byrnes on the Free Energy Principle | Roman Leventov, 3 Mar 2023 13:03 UTC | 27 points, 16 comments, 14 min read | LW link
Implied “utilities” of simulators are broad, dense, and shallow | porby, 1 Mar 2023 3:23 UTC | 43 points, 7 comments, 3 min read | LW link
Power-seeking can be probable and predictive for trained agents | 28 Feb 2023 21:10 UTC | 56 points, 22 comments, 9 min read | LW link (arxiv.org)
The Open Agency Model | Eric Drexler, 22 Feb 2023 10:35 UTC | 113 points, 18 comments, 4 min read | LW link
Instrumentality makes agents agenty | porby, 21 Feb 2023 4:28 UTC | 19 points, 4 comments, 6 min read | LW link
Does novel understanding imply novel agency / values? | TsviBT, 19 Feb 2023 14:41 UTC | 13 points, 0 comments, 7 min read | LW link
A multi-disciplinary view on AI safety research | Roman Leventov, 8 Feb 2023 16:50 UTC | 43 points, 4 comments, 26 min read | LW link
Consequentialists: One-Way Pattern Traps | David Udell, 16 Jan 2023 20:48 UTC | 54 points, 3 comments, 14 min read | LW link
[Question] Where do you find people who actually do things? | Ulisse Mini, 13 Jan 2023 6:57 UTC | 7 points, 12 comments, 1 min read | LW link
Reward is not Necessary: How to Create a Compositional Self-Preserving Agent for Life-Long Learning | Roman Leventov, 12 Jan 2023 16:43 UTC | 17 points, 2 comments, 2 min read | LW link (arxiv.org)
Definitions of “objective” should be Probable and Predictive | Rohin Shah, 6 Jan 2023 15:40 UTC | 43 points, 27 comments, 12 min read | LW link
My scorched-earth policy on New Year’s resolutions | PatrickDFarley, 29 Dec 2022 14:45 UTC | 29 points, 2 comments, 4 min read | LW link
In Defense of Wrapper-Minds | Thane Ruthenis, 28 Dec 2022 18:28 UTC | 23 points, 38 comments, 3 min read | LW link
[Question] Why The Focus on Expected Utility Maximisers? | DragonGod, 27 Dec 2022 15:49 UTC | 116 points, 84 comments, 3 min read | LW link
MDPs and the Bellman Equation, Intuitively Explained | Jack O'Brien, 27 Dec 2022 5:50 UTC | 11 points, 3 comments, 14 min read | LW link
Against Agents as an Approach to Aligned Transformative AI | DragonGod, 27 Dec 2022 0:47 UTC | 12 points, 9 comments, 2 min read | LW link
How evolutionary lineages of LLMs can plan their own future and act on these plans | Roman Leventov, 25 Dec 2022 18:11 UTC | 39 points, 16 comments, 8 min read | LW link
Properties of current AIs and some predictions of the evolution of AI from the perspective of scale-free theories of agency and regulative development | Roman Leventov, 20 Dec 2022 17:13 UTC | 33 points, 3 comments, 36 min read | LW link
[Question] Will the first AGI agent have been designed as an agent (in addition to an AGI)? | nahoj, 3 Dec 2022 20:32 UTC | 1 point, 8 comments, 1 min read | LW link
Sets of objectives for a multi-objective RL agent to optimize | 23 Nov 2022 6:49 UTC | 11 points, 0 comments, 8 min read | LW link
AGIs may value intrinsic rewards more than extrinsic ones | catubc, 17 Nov 2022 21:49 UTC | 8 points, 6 comments, 4 min read | LW link
LLMs may capture key components of human agency | catubc, 17 Nov 2022 20:14 UTC | 27 points, 0 comments, 4 min read | LW link
The two conceptions of Active Inference: an intelligence architecture and a theory of agency | Roman Leventov, 16 Nov 2022 9:30 UTC | 15 points, 0 comments, 4 min read | LW link
Interpreting systems as solving POMDPs: a step towards a formal understanding of agency [paper link] | the gears to ascension, 5 Nov 2022 1:06 UTC | 13 points, 2 comments, 1 min read | LW link (www.semanticscholar.org)
Beyond Kolmogorov and Shannon | 25 Oct 2022 15:13 UTC | 62 points, 17 comments, 5 min read | LW link
Cooperators are more powerful than agents | Ivan Vendrov, 21 Oct 2022 20:02 UTC | 19 points, 7 comments, 3 min read | LW link
“Agency” needs nuance | Evie Cottrell, 25 Sep 2022 7:40 UTC | 23 points, 1 comment, 14 min read | LW link
There are no rules | unoptimal, 23 Sep 2022 20:47 UTC | 34 points, 2 comments, 5 min read | LW link
[Exploratory] Becoming more Agentic | Johannes C. Mayer, 6 Sep 2022 0:45 UTC | 6 points, 1 comment, 1 min read | LW link
Agency engineering: is AI-alignment “to human intent” enough? | catubc, 2 Sep 2022 18:14 UTC | 9 points, 10 comments, 6 min read | LW link
Vingean Agency | abramdemski, 24 Aug 2022 20:08 UTC | 62 points, 14 comments, 3 min read | LW link
Discovering Agents | zac_kenton, 18 Aug 2022 17:33 UTC | 73 points, 11 comments, 6 min read | LW link
[Question] What is an agent in reductionist materialism? | Valentine, 13 Aug 2022 15:39 UTC | 7 points, 15 comments, 1 min read | LW link
Project proposal: Testing the IBP definition of agent | 9 Aug 2022 1:09 UTC | 21 points, 4 comments, 2 min read | LW link
AGI-level reasoner will appear sooner than an agent; what the humanity will do with this reasoner is critical | Roman Leventov, 30 Jul 2022 20:56 UTC | 24 points, 10 comments, 1 min read | LW link
Mistakes as agency | pchvykov, 25 Jul 2022 16:17 UTC | 12 points, 8 comments, 4 min read | LW link
What Environment Properties Select Agents For World-Modeling? | Thane Ruthenis, 23 Jul 2022 19:27 UTC | 24 points, 1 comment, 12 min read | LW link
Can we achieve AGI Alignment by balancing multiple human objectives? | Ben Smith, 3 Jul 2022 2:51 UTC | 11 points, 1 comment, 4 min read | LW link
Cultivating And Destroying Agency | hath, 30 Jun 2022 3:59 UTC | 98 points, 11 comments, 9 min read | LW link
A physicist’s approach to Origins of Life | pchvykov, 28 Jun 2022 15:23 UTC | 12 points, 6 comments, 16 min read | LW link
Seven ways to become unstoppably agentic | Evie Cottrell, 26 Jun 2022 17:39 UTC | 60 points, 16 comments, 8 min read | LW link
Towards Gears-Level Understanding of Agency | Thane Ruthenis, 16 Jun 2022 22:00 UTC | 23 points, 4 comments, 18 min read | LW link
Why agents are powerful | Daniel Kokotajlo, 6 Jun 2022 1:37 UTC | 37 points, 7 comments, 7 min read | LW link
Understanding Selection Theorems | adamk, 28 May 2022 1:49 UTC | 41 points, 3 comments, 7 min read | LW link
Gradations of Agency | Daniel Kokotajlo, 23 May 2022 1:10 UTC | 41 points, 6 comments, 5 min read | LW link
Gato’s Generalisation: Predictions and Experiments I’d Like to See | Oliver Sourbut, 18 May 2022 7:15 UTC | 43 points, 3 comments, 10 min read | LW link
Optimality is the tiger, and agents are its teeth | Veedrac, 2 Apr 2022 0:46 UTC | 301 points, 42 comments, 16 min read | LW link, 1 review
Agency and Coherence | David Udell, 26 Mar 2022 19:25 UTC | 25 points, 2 comments, 3 min read | LW link
REPL’s: a type signature for agents | scottviteri, 15 Feb 2022 22:57 UTC | 25 points, 6 comments, 2 min read | LW link
[Question] How to tradeoff utility and agency? | A Ray, 14 Jan 2022 1:33 UTC | 14 points, 5 comments, 1 min read | LW link
You can’t understand human agency without understanding amoeba agency | shminux, 6 Jan 2022 4:42 UTC | 19 points, 36 comments, 1 min read | LW link
Agents as P₂B Chain Reactions | Daniel Kokotajlo, 4 Dec 2021 21:35 UTC | 18 points, 0 comments, 2 min read | LW link
Agency: What it is and why it matters | Daniel Kokotajlo, 4 Dec 2021 21:32 UTC | 25 points, 2 comments, 2 min read | LW link
What’s Stopping You? | Neel Nanda, 21 Oct 2021 16:20 UTC | 39 points, 2 comments, 19 min read | LW link, 1 review (www.neelnanda.io)
Grokking the Intentional Stance | jbkjr, 31 Aug 2021 15:49 UTC | 43 points, 22 comments, 20 min read | LW link, 1 review
A review of “Agents and Devices” | adamShimi, 13 Aug 2021 8:42 UTC | 20 points, 0 comments, 4 min read | LW link
Uncertainty can Defuse Logical Explosions | J Bostock, 30 Jul 2021 12:36 UTC | 13 points, 7 comments, 3 min read | LW link
Discussion: Objective Robustness and Inner Alignment Terminology | 23 Jun 2021 23:25 UTC | 73 points, 7 comments, 9 min read | LW link
Empirical Observations of Objective Robustness Failures | 23 Jun 2021 23:23 UTC | 63 points, 5 comments, 9 min read | LW link
Saving Time | Scott Garrabrant, 18 May 2021 20:11 UTC | 156 points, 20 comments, 4 min read | LW link, 1 review
Agency in Conway’s Game of Life | Alex Flint, 13 May 2021 1:07 UTC | 110 points, 93 comments, 9 min read | LW link, 2 reviews
Pitfalls of the agent model | Alex Flint, 27 Apr 2021 22:19 UTC | 20 points, 4 comments, 20 min read | LW link
Agents Over Cartesian World Models | 27 Apr 2021 2:06 UTC | 66 points, 4 comments, 27 min read | LW link
Beware over-use of the agent model | Alex Flint, 25 Apr 2021 22:19 UTC | 28 points, 10 comments, 5 min read | LW link, 1 review
The Inner Workings of Resourcefulness | Nora_Ammann, 25 Feb 2021 9:15 UTC | 22 points, 3 comments, 8 min read | LW link
Forcing Freedom | vlad.proex, 6 Oct 2020 18:15 UTC | 43 points, 12 comments, 7 min read | LW link
AGI safety from first principles: Goals and Agency | Richard_Ngo, 29 Sep 2020 19:06 UTC | 76 points, 15 comments, 15 min read | LW link
A critical agential account of free will, causation, and physics | jessicata, 5 Mar 2020 7:57 UTC | 25 points, 10 comments, 12 min read | LW link (unstableontology.com)
Characterizing Real-World Agents as a Research Meta-Strategy | johnswentworth, 8 Oct 2019 15:32 UTC | 29 points, 4 comments, 5 min read | LW link
[Question] Does Agent-like Behavior Imply Agent-like Architecture? | Scott Garrabrant, 23 Aug 2019 2:01 UTC | 57 points, 8 comments, 1 min read | LW link
Gwern’s “Why Tool AIs Want to Be Agent AIs: The Power of Agency” | habryka, 5 May 2019 5:11 UTC | 26 points, 3 comments, 1 min read | LW link (www.gwern.net)
Agency and Sphexishness: A Second Glance | Ruby, 16 Apr 2019 1:25 UTC | 25 points, 8 comments, 2 min read | LW link
On the Nature of Agency | Ruby, 1 Apr 2019 1:32 UTC | 31 points, 24 comments, 9 min read | LW link
Río Grande: judgment calls | KatjaGrace, 27 Jan 2019 3:50 UTC | 25 points, 5 comments, 2 min read | LW link (worldlypositions.tumblr.com)
Being a Robust Agent | Raemon, 18 Oct 2018 7:00 UTC | 145 points, 32 comments, 7 min read | LW link, 2 reviews
An Agent is a Worldline in Tegmark V | komponisto, 12 Jul 2018 5:12 UTC | 24 points, 12 comments, 2 min read | LW link
Aliveness | Ziz, 18 Jan 2018 5:00 UTC | 19 points, 9 comments, 1 min read | LW link (sinceriously.fyi)
Mana | Ziz, 20 Dec 2017 2:24 UTC | 13 points, 18 comments, 4 min read | LW link
Are You a Paralyzed Subordinate Monkey? | Eliezer Yudkowsky, 2 Mar 2011 21:12 UTC | 45 points, 78 comments, 1 min read | LW link
Extenuating Circumstances | Eliezer Yudkowsky, 6 Apr 2009 22:57 UTC | 54 points, 42 comments, 4 min read | LW link