I was born in 1962 (so I’m in my 60s). I was raised rationalist, more or less, before we had a name for it. I went to MIT, and have a bachelor’s degree in philosophy and linguistics, and a master’s degree in electrical engineering and computer science. I got married in 1991, and have two kids. I live in the Boston area. I’ve worked as various kinds of engineer: electronics, computer architecture, optics, robotics, software.
Around 1992, I was delighted to discover the Extropians. I’ve enjoyed being in those kinds of circles ever since. My experience with the Less Wrong community has been “I was just standing here, and a bunch of people gathered, and now I’m in the middle of a crowd.” A very delightful and wonderful crowd, just to be clear.
I’m signed up for cryonics. I think it has a 5% chance of working, which is either very small or very large, depending on how you think about it.
I may or may not have qualia, depending on your definition. I think that philosophical zombies are possible, and I am one. This is a very unimportant fact about me, but seems to incite a lot of conversation with people who care.
I am reflectively consistent, in the sense that I can examine my behavior and desires, and understand what gives rise to them, and there are no contradictions I’m aware of. I’ve been that way since about 2015. It took decades of work and I’m not sure if that work was worth it.
Epistemic status: I didn’t read the paper but I read the blog post.
In 1976, Drew McDermott’s essay “Artificial Intelligence Meets Natural Stupidity” pointed out a failure mode into which AI researchers can fall. I fear this is another example, 50 years later. It goes as follows:
1. I invent a new thing built out of abstractions (mathematics, software).
2. I call it “X”, where “X” names an already existing phenomenon in human minds; it’s a common word understood by anybody.
3. I do many experiments on “X” in my system and learn about it.
4. I publish a paper asserting important new facts about X in general. Honors, accolades, etc.
Of course there is no necessary connection between the new phenomenon “X” and the existing X in ordinary language. For this to be good research, you need to show that the two Xes are similar in all important respects.
In this case, X is “incoherence”. The authors define incoherence as the fraction of error explained by variance. This has little or no connection to the property of being an actually incoherent reasoner, or to the effectiveness of superhuman AI.
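To make that concrete, here is a minimal sketch of one plausible reading of “fraction of error explained by variance”, assuming it rests on the standard bias–variance decomposition of mean squared error. The paper’s actual definition may differ, and the function and variable names below are mine, not theirs.

```python
import numpy as np

# Hypothetical sketch only: one reading of "fraction of error explained by
# variance", using the standard decomposition MSE = bias^2 + variance over
# repeated samples of a model's answer to the same question.
# The paper may define "incoherence" differently.

def incoherence(samples: np.ndarray, target: float) -> float:
    mean = samples.mean()
    bias_sq = (mean - target) ** 2   # systematic error of the average answer
    variance = samples.var()         # scatter of answers around their own mean
    mse = bias_sq + variance         # total mean squared error
    return variance / mse            # share of error due to scatter, in [0, 1]

# A model whose answers scatter widely around the correct value scores as
# highly "incoherent" even though it is unbiased on average.
print(incoherence(np.array([1.0, 7.0, 4.0]), target=4.0))  # -> 1.0
```

Whatever its merits as a statistic, a quantity like this is a property of error decompositions, not obviously of “incoherent reasoning” as the word is ordinarily used.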
I hope this doesn’t result in redefining the meaning of “incoherence” in the wider field.