ChrisHallquist

Karma: 5,365

Tell LessWrong about your charitable donations

ChrisHallquist · 23 Jan 2012 21:35 UTC
19 points
50 comments · 2 min read · LW link

Two Kinds of Irrationality and How to Avoid One of Them

ChrisHallquist · 2 Feb 2012 18:13 UTC
30 points
9 comments · 4 min read · LW link

Partial Transcript of the Hanson-Yudkowsky June 2011 Debate

ChrisHallquist · 19 Apr 2012 3:43 UTC
17 points
17 comments · 9 min read · LW link

Why a Human (Or Group of Humans) Might Create UnFriendly AI Halfway On Purpose

ChrisHallquist · 30 Apr 2012 15:35 UTC
13 points
21 comments · 5 min read · LW link

Work harder on tabooing “Friendly AI”

ChrisHallquist · 20 May 2012 8:51 UTC
27 points
52 comments · 2 min read · LW link

How likely the AI that knows it’s evil? Or: is a human-level understanding of human wants enough?

ChrisHallquist · 21 May 2012 5:19 UTC
3 points
30 comments · 3 min read · LW link

I think I’ve found the source of what’s been bugging me about “Friendly AI”

ChrisHallquist · 10 Jun 2012 14:06 UTC
15 points
33 comments · 2 min read · LW link

Seeking information relevant to deciding whether to try to become an AI researcher and, if so, how.

ChrisHallquist · 11 Jun 2012 12:23 UTC
17 points
45 comments · 3 min read · LW link

Scholarship: how to tell good advice from bad advice?

ChrisHallquist · 29 Jun 2012 2:13 UTC
18 points
34 comments · 1 min read · LW link

Nick Bostrom’s TED talk and setting priorities

ChrisHallquist · 9 Jul 2012 5:01 UTC
4 points
11 comments · 1 min read · LW link

Logging progress improving conscientiousness and overcoming procrastination at LessWrong

ChrisHallquist · 19 Jul 2012 4:35 UTC
7 points
17 comments · 1 min read · LW link

Neuroscience basics for LessWrongians

ChrisHallquist · 26 Jul 2012 5:10 UTC
129 points
102 comments · 13 min read · LW link

Rigorous academic arguments on whether AIs can replace all human workers?

ChrisHallquist · 29 Aug 2012 7:30 UTC
0 points
13 comments · 1 min read · LW link

The basic argument for the feasibility of transhumanism

ChrisHallquist · 14 Oct 2012 8:04 UTC
9 points
36 comments · 2 min read · LW link

Quote on Nate Silver, and how to think about probabilities

ChrisHallquist · 2 Nov 2012 4:29 UTC
12 points
24 comments · 1 min read · LW link

What’s your #1 reason to care about AI risk?

ChrisHallquist · 20 Jan 2013 21:52 UTC
2 points
16 comments · 1 min read · LW link

Willing gamblers, spherical cows, and AIs

ChrisHallquist · 8 Apr 2013 21:30 UTC
25 points
40 comments · 5 min read · LW link

Can somebody explain this to me?: The computability of the laws of physics and hypercomputation

ChrisHallquist · 21 Apr 2013 21:22 UTC
23 points
52 comments · 1 min read · LW link

Could Robots Take All Our Jobs?: A Philosophical Perspective

ChrisHallquist · 24 May 2013 22:06 UTC
3 points
14 comments · 18 min read · LW link

Who thinks quantum computing will be necessary for AI?

ChrisHallquist · 28 May 2013 22:59 UTC
9 points
101 comments · 1 min read · LW link