kpreid
Excellent diagrams-next-to-equations explanation of Bayes’ Theorem (except that I think diagrams with rectangles would be more visually accurate).
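For reference, the identity the diagrams illustrate (standard notation); a rectangle diagram is visually accurate because it can give each term an area exactly proportional to its probability:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}, \qquad P(B) = P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A)$$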
I took the survey. I will now take this opportunity to criticize it.
Severely missing items:
“Moral Views” should have had links to definitions, especially of “Eliezer’s interpretation”.
“Charity” should have had a “No opinion/I have no information on the matter” choice.
“Cryonics” should have had an “I have not yet considered the matter / No opinion” choice.
Minor missing items:
“Political Views” should have been split into questions on the particular attributes listed, for those of us who don’t bother to figure out which label applies to us.
“Time in Community” should have listed how many months ago LW and OB were created, for convenience.
Clarify your taxonomy:
If our universe is a simulation, does the containing universe have any significance to the Supernatural and God questions?
If our universe is a simulation, not of some other universe, but constructed from scratch, is the designer of it God in the sense of “Probability: God”? Does it matter whether this entity has edited the state of the simulation since the beginning of its time?
If our universe is a simulation which was designed to include ontologically basic mental entities, even though the universe in which it is simulated does not, does that count towards “Probability: Supernatural”?
I understood ciphergoth’s description as meaning “what we have been discussing is useful for nothing more than these tasks”, not as describing a world where those tasks are all you need to deal with.
I was going to say that any stateful controller has a model — the state constitutes the model — but reading this comment made me realize that that would be arguing about whether the tree makes a sound.
The substitution is not equivalent; people are more likely to agree whether something contains “random access memory” than whether it contains “a model”.
I’d just like to say as an occasional Lojbanist that the LW user “Lojban” does not speak for us.
0% confidence should mean zero weight when computing the results, no?
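A minimal sketch in Python of the weighting I have in mind (the data and the aggregation rule are my assumptions, not how the survey was actually scored): an answer stated with 0% confidence carries zero weight and drops out of the result entirely.

```python
def weighted_mean(answers, confidences):
    """Average `answers`, each weighted by its stated confidence in [0, 1]."""
    total = sum(confidences)
    if total == 0:
        raise ValueError("every answer was given with 0% confidence")
    return sum(a * c for a, c in zip(answers, confidences)) / total

# Hypothetical responses: the 0.0-confidence answer (0.99) has no effect.
print(weighted_mean([0.2, 0.4, 0.99], [0.5, 1.0, 0.0]))  # -> 0.333...
```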
Meta: Given the rate at which new comments appear, I wish the comments feed http://lesswrong.com/comments/.rss contained more than 20 entries; say closer to 200. Also, all of the feeds I’ve looked at (front page, all new, comments) have the identical title “lesswrong: What’s new”, which is useless for distinguishing them.
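For anyone who wants to check the title problem themselves, a quick sketch using the third-party feedparser library (only the comments-feed URL is the one I cited above; the front-page URL is my guess):

```python
import feedparser  # third-party: pip install feedparser

# Each feed currently reports the same <title>, "lesswrong: What's new",
# so a feed reader cannot tell them apart.
for url in ("http://lesswrong.com/.rss",           # front page (guessed URL)
            "http://lesswrong.com/comments/.rss"):  # comments feed
    d = feedparser.parse(url)
    print(url, "->", d.feed.title, f"({len(d.entries)} entries)")
```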
and I induce that the probability that “rationality” is a meaningful (self-consistent, complete) theory is tiny.
A theory of what?
You seem to be expecting “rationality” to replace “God” in some slot; perhaps “theory of everything”. But this seems to me a category error, rationality being an activity, not an explanation.
Perhaps “God” is not the missing element required to make “rationality” consistent and complete—however, anything that I can think of adding that might fix the theory could be eliminated by exactly the same arguments that you use to eliminate belief in God. (For example: Truth. Love. Quality. etc.)
Truth, love, and quality are directly observable. Though I don’t see what you hope to do with them. I suspect that the missing element you see is actually an unnecessary element.
Could you explain what this missing element is missing from, and what it should supply?
What evidence I know of indicates that brains do have functions somewhat distributed among physical parts: a fixed set of modules operating in parallel. This doesn’t mean the modules are neatly divided, with well-defined interfaces, as we would consider good practice; but it does mean the brain operates more like concurrent/distributed object-oriented programming than like function calls and large imperative procedures (see the sketch after this comment). So while I would reject “functions”, “layers of modules” is probably more useful than “spaghetti code” for thinking about the overall behavior of our evolved minds.
(Evolution has produced more obvious modules, too; we call them organs. I also suspect that in general “modularity” is very useful for evolvability: if an organism is built of modules, then a random change, which affects only some modules rather than the whole organism, is less likely to produce something completely broken.)
(Disclaimer: I am not a biologist, and I am a programmer working with distributed object-oriented systems, so I may just be doing that thing of applying the metaphors I happen to think in.)
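To make that analogy concrete, here is a toy sketch in Python (entirely my own illustration, not a claim about real neural architecture): a fixed set of modules running concurrently and communicating only by messages, with no module calling another as a subroutine.

```python
import queue
import threading

class Module(threading.Thread):
    """A module: runs concurrently, reacts to messages, posts results onward."""
    def __init__(self, name, transform, outbox):
        super().__init__(daemon=True)
        self.name = name
        self.transform = transform
        self.inbox = queue.Queue()
        self.outbox = outbox

    def run(self):
        while True:
            msg = self.inbox.get()
            self.outbox.put((self.name, self.transform(msg)))

# Two modules operate in parallel on the same stimulus; neither calls
# the other as a function -- they only exchange messages.
results = queue.Queue()
vision = Module("vision", lambda s: "edges in " + repr(s), results)
hearing = Module("hearing", lambda s: "pitches in " + repr(s), results)
for m in (vision, hearing):
    m.start()
    m.inbox.put("the stimulus")
for _ in range(2):
    print(*results.get())
```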
Do you intend to claim the US is representative of the rest of the world?
I found it difficult to determine whether you were being sarcastic. The part that most reads as sarcastic is the structure of “[In the future,] I’ll [subordinate myself to you]; clearly [I am incompetent].” The overall tone is also gushingly positive about criticism, which is a common mode of sarcasm, i.e. “Oh, now that I’ve been told I’m wrong I will, of course, immediately switch over to your view of things.”
I found more value in “maybe I need to set up a blog of things I have read that I think are true” than in the extremely broad topic of “harness your biases”. If I were editing the article I would throw out that topic and keep the particular notion of improving your knowledge by preparing it for publication.
Clarification: My quotation from your article was not intended to be a suggestion of a title.
Perhaps your parent(s) “don’t understand you” but still internally expect to, and so do worse than someone who doesn’t know you, or who knows you only from recent experience.
FYI, Less Wrong accepts Markdown syntax in comments.
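For instance, standard Markdown rules apply: *single asterisks* produce italics, and [link text](http://example.com) produces a hyperlink (example.com is just a placeholder).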
What are the useful consequences of this theory?
It does not seem to me to “solve” the given “hard problems”; rather, it declares them unsolvable, along with almost everything else we think we’ve solved.
I’m just rereading it due to your mention, and I found this passage at the point where Leo Graf is beginning to realize What Needs To Be Done:
[...] “How the hell should I know? At that point, it becomes Orient IV’s problem. There’s only so much one human being can do, Leo.”
Leo smiled slowly, in grim numbness. “I’m not sure . . . what one human being can do. I’ve never pushed myself to the limit. I thought I had, but I realize now I hadn’t. My self-tests were always carefully non-destructive.”
This test was a higher order of magnitude altogether. This Tester, perhaps, scorned the merely humanly possible. Leo tried to remember how long it had been since he’d prayed, or even believed. Never, he decided, like this. He’d never needed like this before. . . .
Ignoring the religious content, for me-now this seems to be another occurrence of the idea that the universe is not adjusted to your skill level, and Graf is realizing he needs (to satisfy his morality) to do the impossible.
“Apply deodorant before going to bed” lacks information. If I hadn’t seen the previous discussion, I would assume the point was “Do apply deodorant”, not “...rather than in the morning”.
‘akrasia’ is a behavior and ‘lack of motivation’ is a hypothesized cause.