Here’s the first track from the new release Psychic by Darkside: http://www.youtube.com/watch?v=d8NaWT0WvEE
The entire album feels like lost memories, highly recommended.
Found a proof of this article at: http://sapir.psych.wisc.edu/papers/lupyan_brainsAlgorithms_proof.pdf
Are there any updates on when this will be released?
I donated some money on Dec. 13, and I’m not sure if the matching was active at that time. Anyone know?
Does anyone have any recommended “didactic fiction”? Here are a couple of examples:
1) Lauren Ipsum (http://www.amazon.com/Lauren-Ipsum-Carlos-Bueno/dp/1461178185) 2) HPMoR
I’ve been doing the “7 min scientific workout” every morning for the past month and I’ve seen great results. http://well.blogs.nytimes.com/2013/05/09/the-scientific-7-minute-workout/
Alum here… glad to hear! You should do that :)
The squats and lunges will exercise the back and core. I also add supermans for the mid back.
I highly recommend the book Concepts, Techniques, and Models of Computer Programming (http://www.amazon.com/Concepts-Techniques-Models-Computer-Programming/dp/0262220695) which is the closest I’ve seen to distilling programming to its essence. It’s language agnostic in the sense that you start with a small “kernel language” and build it up incorporating different concepts as needed.
I’m not sure that he doesn’t have “natural” skill or talent. I can’t find the link now, but I remember reading that he has an extremely high IQ (or something something eidetic memory something something?).
Motifs in his standup comedy routines are about how much smarter he is than everyone else, etc. (anecdata)
Everyone’s posting evidence for this, which is great and LW is awesome, but I’m also interested in any rebuttals of the sort like “I expected it to hugely change my social life but it didn’t really”
In particular, for me:
I found out about CFAR from LW and attended a CFAR workshop
I’ve attended a couple of meetups in the bay area
I found out about 80000 hours, GiveWell, MIRI, and effective altruism in general, which has been a large force in my life
I’ve met many interesting people working on many interesting things in spheres that I care about
Declaring pseudo-Crocker’s rules...
Not long after I found out about LW, I expected to e.g. move into a rationalist community, immerse myself in the memespace, etc. But there’s a distinct qualitative difference between how I feel when I’m hanging out with friends I’ve met through other, more prosaic circles (house parties, friends of friends, college, etc.) and how I feel when I’m hanging out with people at the meetups I’ve been to, and even at the CFAR workshop. I find it hard to really connect with most people I’ve met through LW in a way that gives me the fuzzywuzzies, even though many of us share similar values and are working towards similar goals.
Yes, my friends are stoners, entrepreneurs, weirdos, normals, hot people, people-probably-more-concerned-social-status-than-LWers, whatever. Some of them know about LW and are familiar with rationality concepts. But I just have a really fun time with them, and I haven’t had that in my experiences so far with LW people. I suspect (at the risk of sounding insulting) that there’s a difference in social acumen and sense of humor or something. I honestly found some of my social experiences with LWers kind of alienating.
Please note I’m not drawing a hard-and-fast line here (and obviously there’s a selection effect), but I’m just curious whether anyone else has had the same experience.
I would love to see these as posts. (I really enjoyed your posts on the CFAR list about human ethics).
What does “The instrumental lens” hint at?
Do you have anything quick to add about what you mean by “Eliezer-level philosophical ability”?
Before I embark on this seemingly Sisyphean endeavor: has anyone attempted to measure “philosophical progress”? No philosophical problem I know of is fully solved, and no general methods are known that reliably give true answers to philosophical problems. Despite this, we have definitely made progress: e.g. we can chart human progress on the problem of induction, of which an extremely rough sketch looks like Epicurus --> Occam --> Hume --> Bayes --> Solomonoff, or something. I don’t really know, but there seem to be issues with Solomonoff’s formalization of induction.
I’m thinking of “philosophy” as something like “pre-mathematics: making progress on confusing questions for which no reliable methods yet exist to give truthy answers; forming a concept of something and formalizing it”. Also, it’s not clear to me that “philosophy” exists independently of the techniques it has spawned historically, but there are some problems for which the label “philosophical problem” seems appropriate, e.g. “how do uncertainties work in a universe where infinite copies of you exist?” and, like, all of moral philosophy.
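For anyone who wants the formal object being referenced: Solomonoff’s prior (my gloss, as it’s usually stated) assigns to a finite string x the total weight of programs that produce it on a universal prefix machine U:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

where the sum ranges over programs p whose output begins with x and |p| is the length of p in bits. One of the issues alluded to above is that M is only lower semicomputable: it can be approximated from below but never computed exactly.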
Is there a way to tag a user in a comment such that the user will receive a notification that s/he’s been tagged?
Related—here are some attempts to formalize and understand analogy from a category theoretic perspective:
http://link.springer.com/article/10.1023/A:1018963029743
http://pages.bangor.ac.uk/~mas010/pdffiles/Analogy-and-Comparison.pdf
I have a question about the nature of generalization and abstraction. Human reasoning is commonly split into two categories: deductive and inductive reasoning. Are all instances of generalization examples of inductive reasoning? If so, does this mean that a deep enough understanding of inductive reasoning would let you broadly create “better” abstractions?
For example, generalizing the integers to the rationals satisfies a couple of needs: the theoretical need to remove previous restrictions on the operations of subtraction and division, and, AFAIK, the practical need of representing measurable quantities. This generalization doesn’t seem to fit the examples given at http://en.wikipedia.org/wiki/Inductive_reasoning at first glance, and I was hoping someone could give me some nuggets of insight about this. Or can someone point out what the evidence is that leads to this inductive conclusion/generalization?
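To make the integers-to-rationals example concrete, here’s a minimal sketch (names and the whole class are mine, purely illustrative) of the standard construction: a rational is an equivalence class of integer pairs (a, b) with b ≠ 0, where (a, b) ~ (c, d) iff a·d = c·b. The point is that division, which is only partially defined on the integers, becomes total on the new structure (for nonzero divisors):

```python
class Rat:
    """A rational number as an equivalence class of integer pairs (num, den).

    Two pairs (a, b) and (c, d) represent the same rational iff a*d == c*b.
    """

    def __init__(self, num, den):
        if den == 0:
            raise ValueError("denominator must be nonzero")
        self.num, self.den = num, den

    def __eq__(self, other):
        # Equality of equivalence classes: cross-multiply, no reduction needed.
        return self.num * other.den == other.num * self.den

    def __truediv__(self, other):
        # Division, previously partial on the integers, is total here
        # (for any nonzero divisor).
        return Rat(self.num * other.den, self.den * other.num)


# (1/3) / (2/5) = 5/6 -- no longer restricted to exact integer division.
print(Rat(1, 3) / Rat(2, 5) == Rat(5, 6))  # True
```

The deductive part is the construction itself; the open question in the comment above is whether the *choice* to build this structure (to lift the restriction on division) counts as an inductive step.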
Narratives and goals: Narrative structure increases goal priming. Laham, Simon M.; Kashima, Yoshihisa http://psycnet.apa.org/journals/zsp/44/5/303/