Karma: 1,918

I operate by Crocker’s rules.

I try not to make people regret telling me things. So in particular:
- I expect it to be safe to ask me whether your post would give AI labs dangerous ideas.
- If you worry that I'll produce such posts, I'll try to keep your worry from making them more likely, even if I disagree. Not thinking about them will be easier if you don't spell them out in the initial contact.

A Brief Theology of D&D

Gurkenglas · 1 Apr 2022 12:47 UTC
22 points
2 comments · 2 min read · LW link

Would you like me to debug your math?

Gurkenglas · 11 Jun 2021 10:54 UTC
58 points
16 comments · 1 min read · LW link

Domain Theory and the Prisoner's Dilemma: FairBot

Gurkenglas · 7 May 2021 7:33 UTC
14 points
5 comments · 2 min read · LW link

Changing the AI race payoff matrix

Gurkenglas · 22 Nov 2020 22:25 UTC
7 points
2 comments · 1 min read · LW link

Using GPT-N to Solve Interpretability of Neural Networks: A Research Agenda

3 Sep 2020 18:27 UTC
67 points
11 comments · 2 min read · LW link

Mapping Out Alignment

15 Aug 2020 1:02 UTC
43 points
0 comments · 5 min read · LW link

[Question] What are some good public contribution opportunities? (100$ bounty)

Gurkenglas · 18 Jun 2020 14:47 UTC
18 points
1 comment · 1 min read · LW link

Gurkenglas’s Shortform

Gurkenglas · 4 Aug 2019 18:46 UTC
5 points
27 comments · 1 min read · LW link

Implications of GPT-2

Gurkenglas · 18 Feb 2019 10:57 UTC
35 points
28 comments · 1 min read · LW link

[Question] What shape has mindspace?

Gurkenglas · 11 Jan 2019 16:28 UTC
16 points
6 comments · 1 min read · LW link

A simple approach to 5-and-10

Gurkenglas · 17 Dec 2018 18:33 UTC
5 points
10 comments · 1 min read · LW link

Quantum AI Goal

Gurkenglas · 8 Jun 2018 16:55 UTC
−1 points
5 comments · 1 min read · LW link

Quantum AI Box

Gurkenglas · 8 Jun 2018 16:20 UTC
4 points
15 comments · 1 min read · LW link

A line of defense against unfriendly outcomes: Grover’s Algorithm

Gurkenglas · 5 Jun 2018 0:59 UTC
2 points
0 comments · 3 min read · LW link