Wei_Dai

Karma: 28,124 (LW), 351 (AF)

Please use real names, especially for Alignment Forum?

Wei_Dai
29 Mar 2019 2:54 UTC
28 points
11 comments · 1 min read · LW link

The Main Sources of AI Risk?

Wei_Dai
21 Mar 2019 18:28 UTC
60 points
15 comments · 2 min read · LW link

[Question] What’s wrong with these analogies for understanding Informed Oversight and IDA?

Wei_Dai
20 Mar 2019 9:11 UTC
37 points
3 comments · 1 min read · LW link

Three ways that “Sufficiently optimized agents appear coherent” can be false

Wei_Dai
5 Mar 2019 21:52 UTC
68 points
2 comments · 3 min read · LW link

[Question] Why didn’t Agoric Computing become popular?

Wei_Dai
16 Feb 2019 6:19 UTC
52 points
21 comments · 2 min read · LW link

Some disjunctive reasons for urgency on AI risk

Wei_Dai
15 Feb 2019 20:43 UTC
36 points
24 comments · LW link

Some Thoughts on Metaphilosophy

Wei_Dai
10 Feb 2019 0:28 UTC
54 points
25 comments · 4 min read · LW link

The Argument from Philosophical Difficulty

Wei_Dai
10 Feb 2019 0:28 UTC
47 points
31 comments · LW link

[Question] Why is so much discussion happening in private Google Docs?

Wei_Dai
12 Jan 2019 2:19 UTC
82 points
21 comments · LW link

Two More Decision Theory Problems for Humans

Wei_Dai
4 Jan 2019 9:00 UTC
58 points
12 comments · 2 min read · LW link

Two Neglected Problems in Human-AI Safety

Wei_Dai
16 Dec 2018 22:13 UTC
75 points
23 comments · LW link

Three AI Safety Related Ideas

Wei_Dai
13 Dec 2018 21:32 UTC
73 points
38 comments · LW link

Counterintuitive Comparative Advantage

Wei_Dai
28 Nov 2018 20:33 UTC
70 points
6 comments · LW link

A general model of safety-oriented AI development

Wei_Dai
11 Jun 2018 21:00 UTC
70 points
8 comments · LW link

Beyond Astronomical Waste

Wei_Dai
7 Jun 2018 21:04 UTC
92 points
39 comments · LW link

Can corrigibility be learned safely?

Wei_Dai
1 Apr 2018 23:07 UTC
73 points
110 comments · LW link

Multiplicity of “enlightenment” states and contemplative practices

Wei_Dai
12 Mar 2018 8:15 UTC
93 points
4 comments · LW link

Online discussion is better than pre-publication peer review

Wei_Dai
5 Sep 2017 13:25 UTC
12 points
26 comments · LW link

Examples of Superintelligence Risk (by Jeff Kaufman)

Wei_Dai
15 Jul 2017 16:03 UTC
5 points
1 comment · LW link
(www.jefftk.com)

Combining Prediction Technologies to Help Moderate Discussions

Wei_Dai
8 Dec 2016 0:19 UTC
13 points
15 comments · LW link