
Dmytry (Karma: 816)

Rationality of sometimes missing the point of the stated question, and of certain type of defensive reasoning

Dmytry · 29 Dec 2011 13:09 UTC · 22 points · 18 comments · 3 min read · LW link

Newcomb’s problem—one boxer’s introspection.

Dmytry · 1 Jan 2012 15:16 UTC · 1 point · 19 comments · 1 min read · LW link

On accepting an argument if you have limited computational power.

Dmytry · 11 Jan 2012 17:07 UTC · 32 points · 85 comments · 1 min read · LW link

Neurological reality of human thought and decision making; implications for rationalism.

Dmytry · 22 Jan 2012 14:39 UTC · 3 points · 43 comments · 2 min read · LW link

Raising awareness of existential risks—perhaps explaining at “personally stocking canned food” level?

Dmytry · 24 Jan 2012 16:17 UTC · 19 points · 3 comments · 2 min read · LW link

Describe the ways you can hear/see/feel yourself think.

Dmytry · 27 Jan 2012 14:32 UTC · 15 points · 17 comments · 4 min read · LW link

Deciding what to think about; is it worthwhile to have universal utility function?

Dmytry · 1 Feb 2012 9:44 UTC · 4 points · 2 comments · 2 min read · LW link

3^^^3 holes and <10^(3*10^31) pigeons (or vice versa)

Dmytry · 10 Feb 2012 1:25 UTC · 16 points · 20 comments · 3 min read · LW link

[LINK] Computer program that aces ‘guess next’ in IQ test

Dmytry · 16 Feb 2012 9:01 UTC · 2 points · 14 comments · 1 min read · LW link

Brain shrinkage in humans over past ~20 000 years—what did we lose?

Dmytry · 18 Feb 2012 22:17 UTC · 20 points · 109 comments · 2 min read · LW link

Self awareness—why is it discussed as so profound?

Dmytry · 22 Feb 2012 13:58 UTC · 9 points · 21 comments · 2 min read · LW link

Superintelligent AGI in a box—a question.

Dmytry · 23 Feb 2012 18:48 UTC · 16 points · 77 comments · 2 min read · LW link

Avoid making implicit assumptions about AI—on example of our universe. [formerly “intuitions about AIs”]

Dmytry · 27 Feb 2012 10:42 UTC · −6 points · 2 comments · 2 min read · LW link

[draft] Generalizing from average: a common fallacy?

Dmytry · 5 Mar 2012 11:22 UTC · 6 points · 16 comments · 2 min read · LW link

Which drives can survive intelligence’s self modification?

Dmytry · 6 Mar 2012 17:33 UTC · 1 point · 55 comments · 2 min read · LW link

Conjunction fallacy and probabilistic risk assessment.

Dmytry · 8 Mar 2012 15:07 UTC · 26 points · 10 comments · 2 min read · LW link

Evolutionary psychology: evolving three eyed monsters

Dmytry · 16 Mar 2012 21:28 UTC · 20 points · 65 comments · 5 min read · LW link

The AI design space near the FAI [draft]

Dmytry · 18 Mar 2012 10:29 UTC · 6 points · 49 comments · 6 min read · LW link

Saturating utilities as a model

Dmytry · 19 Mar 2012 21:17 UTC · 0 points · 2 comments · 1 min read · LW link

Better to be testably wrong than to generate nontestable wrongness

Dmytry · 20 Mar 2012 19:04 UTC · −7 points · 23 comments · 2 min read · LW link