Do you have any examples in mind? It seems to me that only a misunderstanding of natural selection could explain fake animal behavior.
nhamann
The point of asking you to read the articles is that the inferential distance between you and the rest of the members of this community is so large that communication becomes unwieldy. Like it or not, the members of Less Wrong (like the members of most communities which engage in specialized discourse) chunk specific, technical concepts into single words. When you do not understand the precise meaning of the words as they are being used, there is a disconnect between you and the members of the community.
The specific problem here is in the use of the word “evidence.” By evidence, we mean (roughly) “any observation which updates the probability of a hypothesis being true.” By probability, we mean Bayesian probability. I’m not going to go through the probability calculation, but other commenters are correct: given the evidence that you are not really, really old, you should revise the probability assigned to your hypothesis of immortality down significantly.
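To make the updating concrete, here is a minimal sketch of Bayes’ rule, with entirely made-up numbers (the point is the direction of the update, not the specific probabilities): if immortals would almost surely be very old by now, then observing a normal age should push the probability of immortality well below its prior.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E),
    where P(E) = P(E|H)P(H) + P(E|~H)P(~H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical numbers: H = "I am immortal", E = "I am observed
# to be a normal age". If immortals rarely look this young, E is
# strong evidence against H.
p = posterior(prior=0.01, p_e_given_h=0.001, p_e_given_not_h=0.99)
# The posterior lands far below the 1% prior.
```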
If you are not going to do the requisite reading that would enable you to participate in this discussion community, it would probably be best for both you and everyone here if you just left now. If you do feel like participating, I highly recommend going through the sequences.
I’m currently trying to teach myself mathematics from the ground up, so I’m in a similar situation to you. The biggest issue, as I see it, is attempting to forget everything I already “know” about math. The math curriculum at both the public high school and the state university I attended was generally bad; the focus was more on memorizing formulas and methods for solving prototypical problems than on honing one’s deductive reasoning skills, which, if I’m not mistaken, are the core of math as a field of inquiry.
So obviously textbooks are a good place to start, but which ones don’t suck? Well, I can’t help you much there, as I’m still trying to figure this out myself, but I use a combination of recommendations from this page and ratings on Amazon.
Here are the books I am currently reading, have read portions of, or have on my immediate to-read list. Take this with a huge grain of salt, as I’m not a mathematician, only an aspiring student:
How to Prove It: A Structured Approach by Velleman—Covers elementary proof strategies; a good reference if you find yourself routinely unable to follow proofs
How to Solve It by Polya—Haven’t read it yet but it’s supposedly quite good.
Mathematics and Plausible Reasoning, Vol. I & II by Polya—Ditto.
Topics in Algebra by Herstein—I’m not very far into this, but it’s fairly cogent so far
Linear Algebra Done Right by Axler—Intuitive, determinant-free approach to linear algebra
Linear Algebra by Shilov—Rigorous, determinant-based approach to linear algebra. Virtually the opposite of Axler’s book, so I figure between these two books I’ll have a fairly good understanding once I finish.
Calculus by Spivak—Widely lauded. I’m only 6 chapters in, but I immensely enjoy this book so far. I took three semesters of calculus in college, but I didn’t intuitively understand the definition of a limit until I read this book.
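For anyone curious, the definition Spivak builds up to is the standard epsilon-delta one (quoted from memory, so check the book for his exact phrasing):

```latex
\lim_{x \to a} f(x) = L
\iff
\forall \varepsilon > 0 \; \exists \delta > 0 \; \forall x :
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

Intuitively: no matter how small a tolerance ε you demand around L, there is a window of width δ around a (excluding a itself) within which f stays inside that tolerance.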
Most uses of the word “insight” mean something similar to “seeing into the nature of things,” but it’s not clear that the particular use you have here meshes well with at least one other common use of the word. Eliezer captured it well:
an “insight” is a chunk of knowledge which, if you possess it, decreases the cost of solving a whole range of governed problems.
As a simple example, let’s say you were trying to prove the statement “there are infinitely many primes.” To progress on this problem at all, you’ll probably need to realize:
Insight 1 - The statement “there are infinitely many primes” can be re-expressed as “it is not the case that there are finitely many primes.”
Insight 2 - A statement of the form “not P” can sometimes be proven by assuming “P” and showing that this assumption leads to contradiction.
After assuming there are finitely many primes (i.e. there exists an n such that P = {p1, p2, …, pn} is the set of all primes), insight again comes into play when one realizes:
Insight 3 - Every integer > 1 can be expressed as a product of primes, so we can find a prime not in P (i.e. a contradiction) by finding an integer that is not divisible by any prime in P.
In this latter case, the insight consisted in using the fundamental theorem of arithmetic to transform the previous goal of “deriving a contradiction” to a more specific goal of “finding an integer that is not divisible by any prime in P.”
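To make Insight 3 concrete, here is a quick sketch (in Python, purely for illustration) of Euclid’s construction: multiply together the supposedly complete list of primes and add one; the result is not divisible by any prime in the list, so its prime factors lie outside P.

```python
from math import prod

def euclid_witness(primes):
    """Given a supposedly complete finite list of primes, return an
    integer > 1 that is not divisible by any of them."""
    return prod(primes) + 1

# If P = {2, 3, 5, 7} were all the primes, this number would have a
# prime factor outside P -- the contradiction Insight 3 calls for.
n = euclid_witness([2, 3, 5, 7])  # 2*3*5*7 + 1 = 211
assert all(n % p != 0 for p in [2, 3, 5, 7])
```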
I realize that the context of problem solving is somewhat removed from the context of assessing the probability of hypotheses, but perhaps we should clarify what particular usage of the word “insight” is meant if we’re going to be analyzing it in detail.
This is sort of off-topic for LW, but I recently came across a paper that discusses Reconfigurable Asynchronous Logic Automata, which appears to be a new model of computation inspired by physics. The paper claims that this model yields linear-time algorithms for both sorting and matrix multiplication, which seems fairly significant to me.
Unfortunately the paper is rather short, and I haven’t been able to find much more information about it, but I did find this Google Tech Talks video in which Neil Gershenfeld discusses some motivations behind RALA.
As mentioned in another comment, the best introduction to programming is probably SICP. I recommend going with this route, as trying to learn programming from language-specific tutorials will almost certainly not give you an adequate understanding of fundamental programming concepts.
After that, you will probably want to start dabbling in a variety of programming styles. You could perhaps learn some C for imperative programming, Java for object-oriented, Python for a high-level hybrid approach, and Haskell for functional programming as starters. If you desire more programming knowledge you can branch out from there, but this seems to be a good start.
Just keep in mind that when you’re starting out, it’s probably more important to dabble in as many different languages as you can than to specialize in one. Doing this successfully will enable you to quickly pick up any language you may need later. I admit I may be biased in this assessment, though, as I tend to get bored focusing on any one topic for long periods of time.
For what it’s worth, this is exactly the Buddhist principle of “no-self”, which, by my understanding, is a specific case of sunyata, the Mahayana Buddhist doctrine of “emptiness.” Interestingly, sunyata seems to be equivalent or very similar to the notion of the Mind Projection Fallacy:
“According to the Madhyamaka, or Middle Way philosophy which is central to Mahayana Buddhism, ordinary beings misperceive all objects of perception in a fundamental way. The misperception is caused by the psychological tendency to grasp at all objects of perception as if they really existed as independent entities. This is to say that ordinary beings believe that such objects exist ‘out there’ as they appear to perception...
Sunyata—translated as Emptiness—is the concept that all objects are Empty of svabhava, they are Empty of ‘inherent existence’.”
I suppose you’re right in saying that LW isn’t supposed to be a forum, but the fact remains that there is a growing trend towards more casual/off-topic/non-rationality discussion, which seems perfectly fine to me given that we are a community of generally like-minded people. I suspect that it would be preferable to many if LW had better accommodations for this sort of interaction, perhaps something separate from the main site so we could cleanly distinguish serious rationality discussion from off-topic discussion.
Most of the time I’ve run into the word “obviously” is in the middle of a proof in some textbook, and my understanding of the word in that context is that it means “the justification of this claim is trivial to see, and spelling it out would be too tedious/would disrupt the flow of the proof.”
For what it’s worth, you can get absurdly cheap hosting through Amazon’s Simple Storage Service. We’re talking pennies per month.
Is this implying a “yes” to the Tree Falls in the Forest question? Has that question been decided, and if not, does this theory not fall apart without it?
First off, welcome to Less Wrong! You should take a look at some of the sequences, as this exact question has been addressed (see: Disputing Definitions). To be brief, the consensus around here is that the “Tree Falls in the Forest” question is a wrong question, and should be dissolved.
Nitpicking, but this “doing” question can’t possibly be of equal importance to the fundamental question of rationality, because answering the “Why are you doing it?” part obviously depends on your having come to terms with what you believe, and why you believe it.
That said, I think this “doing” question is fundamental as well, second in importance only to the Fundamental Question. Good post.
The “Why” in “why are you doing it” could be interpreted as “for what purpose,” or “as a result of what causal chain.”
Certainly. What’s important, however, is that the process of repeating the “why?” question forces you to 1) think about what it is that you’re doing, in detail, 2) understand what ends these actions serve, and 3) confront the beliefs that make these ends seem desirable in the first place. In effect, asking “what are you doing, and why are you doing it?” forces you to look at not only what you believe, but whether or not your actions are in alignment with your beliefs. For example:
What are you doing?
I am studying information theory, Bayesian statistics and neuroscience.
and why are you doing that?
I am trying to understand how the brain works, and it appears that the former two areas of mathematics are useful tools in formulating theories about the brain. (Notice I’ve already had to confront a belief here. A “why do you believe this?” question should go here.)
and why are you doing that?
1) It is a very interesting problem. 2) Ultimately, having a good theory of the brain will likely contribute to both AI and WBE technologies (belief!), both of which I view as necessary to confront issues that will likely arise in the world as the world population increases and as more and more dangerous technologies get developed (synthetic biology, nanotechnology, etc.) (a tangled network of beliefs, here, all of which need explaining).
If I were to unravel this further, I would have to confront the fact that AI is itself a dangerous technology, so I should address whether my current course of action results in a net positive or net negative impact on the chances of a beneficial Singularity (there’s a belief implicit here: that the Singularity is plausible enough to warrant thinking about. This too requires explanation).
Of course, this process quickly gets messy, but in my view “what are you doing and why are you doing it” is of fundamental importance to any rationalist.
Does anyone know the relative merits of folding@home and rosetta@home, which I currently run? I don’t understand enough of the science involved to compare them, yet I would like to contribute to the project which is likely to be more important. I found this page, which explains the differences between the projects (and has some information about other distributed computing projects), but I’m still not sure what to think about which project I should prefer to run.
Whenever I’m reading things that I want to actually learn and retain, I read with pencil and notebook and write down all the important points in my own words. I’ve found this to be helpful because it forces me to slow down and think about what I’m reading and how each new piece of information relates to everything that came before it. I’ve also found that having pencil and paper close at hand encourages picture drawing, which is often helpful when learning something (though it depends on what you’re reading).
But that is an absurd task, because if you don’t understand algebra, you certainly won’t be discovering differentiation. Attempting to “discover differential equations before anyone else has discovered algebra” doesn’t mean you can skip over discovering algebra, it just means you also have to discover it in addition to discovering DE’s.
It seems that a more reasonable approach would be a) work towards algebra while simultaneously b) researching and publicizing the potential dangers of unrestrained algebra use (Oops, the metaphor broke.)
Okay, but what exactly is the suggestion here? That the OP should not publicize his work on AI? That the OP shouldn’t even work on AI at all, and should dedicate his efforts to advocating friendly AI discussion and research instead? If a major current barrier to FAI is understanding how intelligence even works to begin with, then this preliminary work (if it is useful) is going to be a necessary component of both regular AGI and FAI. Is the only problem you see, then, that it’s going to be made publicly available? Perhaps we should establish a private section of LW for Top Secret AI discussion?
I apologize for being snarky, but I can’t help but find it absurd that we should be worrying about the effects of LW articles on an unfriendly singularity, especially given that the hard takeoff model is, to my knowledge, still rather fuzzy. (Last I checked, Robin Hanson put the probability of hard takeoff at less than 1%. An unfriendly singularity is so bad an outcome that research and discussion about hard takeoff are warranted, of course, but is it not a bit of an overreaction to suggest that this series of articles might be too dangerous to make available to the public?)
To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood.
It’s been brought up in multiple comments already, but I also wanted to register my disapproval of this statement. The first four minutes of the first SICP video lecture contain the best description of computer science that I’ve ever heard, so I quote:
“The reason that we think computer science is about computers is pretty much the same reason that the Egyptians thought geometry was about surveying instruments, and that is when some field is just getting started and you don’t really understand it very well, it’s very easy to confuse the essence of what you’re doing with the tools that you use...I think in the future, people will look back and say, “well yes, those primitives in the 20th century were fiddling around with these gadgets called ‘computers,’ but really what they were doing was starting to learn how to formalize intuitions about process: how to do things; starting to develop a way to talk precisely about ‘how-to’ knowledge, as opposed to geometry that talks about ‘what is true.’”—Hal Abelson
That said, I’m looking forward to your upcoming posts.
I have a (probably stupid) question. I have been following Less Wrong for a little over a month, and I’ve learned a great deal about rationality in the meantime. My main interest, however, is not rationality; it is creating FAI. I see that the SIAI has an outline of a research program, described here: http://www.singinst.org/research/researchareas.
Is there an online community that is dedicated solely to discussing friendly AI research topics? If not, is the creation of one being planned? If not, why not? I realize that the purpose of these SIAI fellowships is to foster such research, but I’d imagine that a discussion community focused on relevant topics in evolutionary psych, cogsci, math, CS, etc. would provide a great deal more stimulation for FAI research than would the likely limited number of fellowships available.
A second benefit would be that it would provide a support group to people (like me) who want to do FAI research but who do not know enough about cogsci, math, CS, etc. to be of much use to SIAI at the moment. I have started combing through SIAI’s reading list, which has been invaluable in narrowing down what I need to be reading, but at the end of the day, it’s only a reading list. What would be ideal is an active community full of bright and similarly-motivated people who could help to clarify misconceptions and point out novel connections in the material.
I apologize if this comment is off-topic.