My impression:
(1) Yes, Eliezer’s posts make all kinds of good points that I was never taught in my science coursework or internships, and that others are not taught either. Eliezer’s last few posts caused me to raise my (low) estimate of the probability that Eliezer and others can pull off the breakthroughs needed for FAI.
(2) No, the “science” that I and many others were taught in research apprenticeships is not exhausted by the “hypothesis, experiment, conclusion” scientific method that Eliezer has been discussing. It also includes plenty of detail about what does and does not constitute a legitimate research question or a fruitful conjecture within a particular subfield. These details are supplied mainly by example and, unlike the general techniques Eliezer is after, do not transfer well between scientific subfields.
That is: “legitimate science”, as it is often taught, involves sticking to a narrow set of known mechanisms and to hypotheses that sound like previously successful hypotheses. Legitimate science means “stuff similar to these established examples, and nothing else”. It also recommends that an individual propose hypotheses only in subfields where he has been thoroughly steeped in both the formal results and the culture/traditions. This is a good enough notion of science to:
(i) rule out many hypotheses along the lines of Eliezer_18’s,
(ii) label Penrose’s theories of consciousness unscientific, and
(iii) label detailed predictions about 2050 unscientific. (Which, indeed, is how many scientists I know regard both Penrose and futurists.)
Unfortunately, this sort of scientific education does not show people how to do revolutionary science, nor does it let scientists distinguish between detailed stories about 2050 and simpler statements like “AI stands a good chance of eventually destroying the world by one means or another”. (The latter is branded “unscientific” in the same way the detailed sci-fi stories are: neither is built from the toolkit of known examples and mechanisms.)
(3) Like Z. M. Davis and others, I fear rhetorical disaster. Z. M. points out that railing against Mainstream Science is a frequent indicator of crackpottery. I’d generalize the principle: people get offended, and impute lousy motives, when someone talks overmuch about how he alone possesses unique knowledge and powers. Talking about how Bayescraft is completely different from everything anyone else has ever thought or taught, or even sounding like you’re doing so, suggests ego and risks causing offense, especially if your competitors are at all caricatured or misrepresented.