Logan Zoellner

Karma: 1,126

[Question] Many Turing Machines

Logan Zoellner · 10 Dec 2019 17:36 UTC
−5 points
9 comments · 2 min read · LW link

[Question] Why is the mail so much better than the DMV?

Logan Zoellner · 29 Dec 2019 18:55 UTC
30 points
13 comments · 2 min read · LW link

[Question] AI Boxing for Hardware-bound agents (aka the China alignment problem)

Logan Zoellner · 8 May 2020 15:50 UTC
11 points
27 comments · 10 min read · LW link

[Question] Libertarianism, Neoliberalism and Medicare for All?

Logan Zoellner · 14 Oct 2020 21:06 UTC
5 points
13 comments · 2 min read · LW link

[Question] (Pseudo) Mathematical Realism Bad?

Logan Zoellner · 22 Nov 2020 18:21 UTC
0 points
6 comments · 2 min read · LW link

[Question] TAI?

Logan Zoellner · 30 Mar 2021 12:41 UTC
10 points
8 comments · 1 min read · LW link

Against Against Boredom

Logan Zoellner · 16 May 2021 18:19 UTC
5 points
8 comments · 2 min read · LW link

The Walking Dead

Logan Zoellner · 22 Jul 2021 16:19 UTC
22 points
2 comments · 1 min read · LW link

[Question] How much should you be willing to pay for an AGI?

Logan Zoellner · 20 Sep 2021 11:51 UTC
11 points
5 comments · 1 min read · LW link

AGI is at least as far away as Nuclear Fusion.

Logan Zoellner · 11 Nov 2021 21:33 UTC
0 points
8 comments · 1 min read · LW link

[Question] Does the Structure of an algorithm matter for AI Risk and/or consciousness?

Logan Zoellner · 3 Dec 2021 18:31 UTC
7 points
4 comments · 1 min read · LW link

[Question] How confident are we that there are no Extremely Obvious Aliens?

Logan Zoellner · 1 May 2022 10:59 UTC
60 points
25 comments · 1 min read · LW link

Various Alignment Strategies (and how likely they are to work)

Logan Zoellner · 3 May 2022 16:54 UTC
83 points
34 comments · 11 min read · LW link

The Last Paperclip

Logan Zoellner · 12 May 2022 19:25 UTC
61 points
15 comments · 17 min read · LW link

An Agent Based Consciousness Model (unfortunately it’s not computable)

Logan Zoellner · 21 May 2022 23:00 UTC
6 points
2 comments · 8 min read · LW link

Bureaucracy of AIs

Logan Zoellner · 9 Jun 2022 23:03 UTC
17 points
6 comments · 14 min read · LW link

A Deceptively Simple Argument in favor of Problem Factorization

Logan Zoellner · 6 Aug 2022 17:32 UTC
3 points
4 comments · 1 min read · LW link

[Question] What is the “Less Wrong” approved acronym for 1984-risk?

Logan Zoellner · 10 Sep 2022 14:38 UTC
5 points
8 comments · 1 min read · LW link

Natural Categories Update

Logan Zoellner · 10 Oct 2022 15:19 UTC
33 points
6 comments · 2 min read · LW link

2022 was the year AGI arrived (Just don’t call it that)

Logan Zoellner · 4 Jan 2023 15:19 UTC
101 points
59 comments · 3 min read · LW link