I disagree with Eliezer on the possibility of Oracular AI (he thinks it’s impossible).
Other moderately iconoclastic statements:
The computer is a terrible metaphor for the brain.
In the ultimate theory of AI, logic and deduction will be almost irrelevant. AI will use large-scale induction, statistics, and memorization.
In order to achieve AI, it is just as important to study the real world as it is to study algorithms. To succeed, AI must become an empirical science.
AI is a pre-paradigm discipline.
Rodney Brooks is a great philosopher of AI (I have no comment regarding his technical contributions).
Large scale brain simulation will not succeed.
Evolutionary psychology, while interesting from the perspective of explaining human behavior, is irrelevant for AI.
Computer science, with its emphasis on logic, deduction, formal proof, and technical issues, is nearly the worst possible type of background from which to approach AI.
I think it’s more that he doesn’t think it’s a good solution to Friendliness.
I think it would be a good idea to create a sister website on the same codebase as LW specifically for discussing this topic.
Strikes me as an idea worth considering. If we had a sister website where AGI/singularity could be talked about, we could keep a separate rationalist community even after May. The AGI/singularity-allowed sister site could take OB and LW discussion as prerequisite material that commenters could be expected to have read, but not vice versa.
I endorse this proposal.
But then, on the still-censored site, we still wouldn’t be able to mention AGI/singularity in a response, even if it would be highly relevant.
A possible solution could be to have click-settable topic flags on posts and comments when bringing up topics that...
Are worth discussing
Are likely to be discussed fairly frequently
Lots of people would really rather they weren’t
...and readers can switch topics off in Options, boosting the signal-to-noise ratio for the uninterested while allowing the interested to discuss freely. Comments would inherit their parent’s flags by default.
Possible flaggable topics:
Friendly AI/Singularitarianism
Libertarian politics
Simulism
Meta-discussion about possible LW changes
Another idea, more generally applicable: the ability to reroot comment threads under a different post, leaving a link to the new location.
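The flag mechanism described above can be sketched in a few lines. This is purely illustrative, assuming nothing about the actual LW codebase; the `Comment` class and `visible_to` function are hypothetical names, but they show the two rules proposed: comments inherit their parent’s flags by default, and a reader’s muted topics hide any comment carrying one of them.

```python
class Comment:
    """Hypothetical comment with topic flags (not the real LW codebase API)."""

    def __init__(self, text, parent=None, flags=None):
        self.text = text
        self.parent = parent
        # Inherit the parent's flags unless the author sets flags explicitly.
        if flags is not None:
            self.flags = set(flags)
        elif parent is not None:
            self.flags = set(parent.flags)
        else:
            self.flags = set()


def visible_to(comment, muted_topics):
    """Show a comment only if none of its flags are muted by the reader."""
    return not (comment.flags & muted_topics)


# Usage: a root comment flagged "FAI" and an unflagged reply that inherits it.
root = Comment("On Friendly AI...", flags={"FAI"})
reply = Comment("I agree, because...", parent=root)

muted = {"FAI"}
print(visible_to(root, muted))   # False: the reader has muted FAI
print(visible_to(reply, muted))  # False: the reply inherited the flag
```

A reader who has muted nothing, or muted only an unrelated topic, would see both comments; the filtering happens per reader, so the interested can still discuss freely.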
My conception of the proposal was that the LW ban could be relaxed enough to allow use of relevant examples for rationality discussions, but not non-rationality posts about AI and the like.
I was responding to AnnaSalamon:
I thought the same.
I thought that was what was planned already (after May). I was responding to AnnaSalamon:
I took that to mean keeping LW separate from AGI/singularity discussion; otherwise, why say ‘even after May’? Someone please explain if I misunderstood, as I’m now most confused!
I think Anna wants to use the LW codebase to create a group blog to examine AGI/Singularity/FAI issues of concern to SIAI, even if they are not directly rationality-related. I think that’s a good plan for SIAI.
Does the ban apply to Newcomb-like problems with simplifying Omegas?
Daniel, why do you consider these things crazy enough to qualify for the poll? I think many of them are quite reasonable and defensible.
Thank you for stating your disagreement, but topics like these aren’t supposed to be discussed until May. This thread should go no further, because people could list AI “disagreements” all day and really not come any closer to the spirit of the original post.
There’s a “LessWrong” schedule?!
I think that in this case, Eliezer specifically requested that everyone refrain from posting on AI after his AI-related Overcoming Bias posting spree.
I reread the “About page” and it currently contains:
“To prevent topic drift while this community blog is being established, please avoid mention of the following topics on Less Wrong until the end of April 2009:
The Singularity
Artificial General Intelligence”
Forbidden topics!