
Thane Ruthenis

Karma: 9,819

Agent-foundations researcher. Working on Synthesizing Standalone World-Models, aiming at a timely technical solution to AGI risk, fit for worlds where alignment is punishingly hard and we only get one try.

Currently looking for additional funders ($1k+, details). Consider reaching out if you’re interested, or donating directly.

Or get me to pay you money ($5-$100) by spotting holes in my agenda or providing other useful information.

Synthesizing Standalone World-Models, Part 4: Metaphysical Justifications

Thane Ruthenis · 26 Sep 2025 18:00 UTC
23 points
9 comments · 4 min read · LW link

Synthesizing Standalone World-Models, Part 3: Dataset-Assembly

Thane Ruthenis · 25 Sep 2025 19:21 UTC
13 points
2 comments · 2 min read · LW link

Synthesizing Standalone World-Models, Part 2: Shifting Structures

Thane Ruthenis · 24 Sep 2025 19:02 UTC
16 points
5 comments · 10 min read · LW link

Synthesizing Standalone World-Models, Part 1: Abstraction Hierarchies

Thane Ruthenis · 23 Sep 2025 17:01 UTC
25 points
10 comments · 23 min read · LW link

Research Agenda: Synthesizing Standalone World-Models

Thane Ruthenis · 22 Sep 2025 19:06 UTC
79 points
33 comments · 11 min read · LW link

The System You Deploy Is Not the System You Design

Thane Ruthenis · 5 Sep 2025 20:16 UTC
53 points
0 comments · 5 min read · LW link

Is Building Good Note-Taking Software an AGI-Complete Problem?

Thane Ruthenis · 26 May 2025 18:26 UTC
27 points
13 comments · 7 min read · LW link

A Bear Case: My Predictions Regarding AI Progress

Thane Ruthenis · 5 Mar 2025 16:41 UTC
377 points
163 comments · 9 min read · LW link

[Question] How Much Are LLMs Actually Boosting Real-World Programmer Productivity?

Thane Ruthenis · 4 Mar 2025 16:23 UTC
141 points
52 comments · 3 min read · LW link

The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better

Thane Ruthenis · 21 Feb 2025 20:15 UTC
157 points
53 comments · 6 min read · LW link

Abstract Mathematical Concepts vs. Abstractions Over Real-World Systems

Thane Ruthenis · 18 Feb 2025 18:04 UTC
35 points
10 comments · 4 min read · LW link

[Question] Are You More Real If You’re Really Forgetful?

Thane Ruthenis · 24 Nov 2024 19:30 UTC
40 points
30 comments · 5 min read · LW link

Towards the Operationalization of Philosophy & Wisdom

Thane Ruthenis · 28 Oct 2024 19:45 UTC
20 points
2 comments · 33 min read · LW link
(aiimpacts.org)

Thane Ruthenis’s Shortform

Thane Ruthenis · 13 Sep 2024 20:52 UTC
8 points
179 comments · 1 min read · LW link

A Crisper Explanation of Simulacrum Levels

Thane Ruthenis · 23 Dec 2023 22:13 UTC
92 points
13 comments · 13 min read · LW link

Idealized Agents Are Approximate Causal Mirrors (+ Radical Optimism on Agent Foundations)

Thane Ruthenis · 22 Dec 2023 20:19 UTC
77 points
14 comments · 6 min read · LW link

Most People Don’t Realize We Have No Idea How Our AIs Work

Thane Ruthenis · 21 Dec 2023 20:02 UTC
159 points
42 comments · 1 min read · LW link

How Would an Utopia-Maximizer Look Like?

Thane Ruthenis · 20 Dec 2023 20:01 UTC
32 points
23 comments · 10 min read · LW link

Don’t Share Information Exfohazardous on Others’ AI-Risk Models

Thane Ruthenis · 19 Dec 2023 20:09 UTC
69 points
11 comments · 1 min read · LW link

The Shortest Path Between Scylla and Charybdis

Thane Ruthenis · 18 Dec 2023 20:08 UTC
50 points
8 comments · 5 min read · LW link