It’s something people say, but don’t necessarily fully believe
I appreciated this post for explaining Berkeley’s beliefs really clearly to me. I never knew what he was going on about before.
Would be happy to try this
POSTED ON WRONG ACCOUNT
In a booming market, buying can be valuable as a hedge against rising house prices
Yeah, I meant part 7. What did he say about feminism and neoreaction?
I’d like to know more about the “dark sides” part of the book
I’d still like the ability to have the explicit abstract just read off the text after a certain point, but I suppose supporting that functionality would require a lot of work.
I agree fairly strongly, but this seems far from the final word on the subject, to me.
Hmm, actually I think you’re right and that it may be more complex than this.
Ah. I take you to be saying that the quality of the clever arguer’s argument can be high-variance, since there’s a good deal of chance in what evidence cherry-picking is able to turn up. A good point.
Exactly. There may be only a weak correlation between evidence and truth. Maybe you can do something with it, or maybe it’s better to focus on stronger signals instead.
I view the issue of intellectual modesty much like the issue of anthropics: the only people who matter are those whose decisions are subjunctively linked to yours. (It only starts getting complicated when you ask whether you should be intellectually modest about your reasoning about intellectual modesty.)
One issue with the clever arguer is that the persuasiveness of their arguments might have very little to do with how persuasive they should be, so attempting to work off expectations might fail.
Where would you start with his work?
I’ve heard of it, but I haven’t looked into it, so I avoided using the term
Maybe there’s a project in this direction. I’ll assume this is general advice you’d give to many people who want to work in this space. If it’s important for people to build a model of what’s required for AI to go well, then they may as well work on it together. Sure, there are websites like Less Wrong, but people can exchange information much faster by chatting in person or over Skype. (Of course, there’s the worry that this might lead to overly correlated answers.)
On a related but somewhat different issue: I feel that there has been something of an under-investment in rationality community building overall. EA has CEA, but rationality doesn’t have an equivalent (CFAR doesn’t play the same community-building role). There isn’t any organisation responsible for growing the community, organising conferences, and addressing challenges as they arise.
That said, I’m not sure there’s agreement that there is a single mission. Some people are in rationality for AI, some for insight porn, some for personal development, and some simply for social reasons. Even though EA has a massively broad goal, “doing the most good” seems to suffice to spur action in a way that rationality hasn’t managed.
This doesn’t seem to accurately describe contemporary politics, at least in the Western world. The left wing isn’t made up of just non-central groups; it includes the cultural and intellectual elites.
Probably wrong editor
There are other considerations that slow down development on large code bases:

- The more features you have, the more potential interactions between them (the sketch after this list illustrates the combinatorial growth)
- The bigger a code base is, the harder it is to understand
- Having more features means more work is involved in testing
- Customer bases shift over time from early adopters to users who want more stability and reliability
- When a code base is more mature, there’s more chance that a change could make the product worse, so you have to spend more time on evaluation
- A larger customer base forces you to care more about rare issues
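To make the first point concrete, here’s a minimal sketch (my own illustration, not something from the original discussion) of how potential pairwise interactions between features grow roughly quadratically with feature count. Real interaction counts depend on the architecture, so treat the numbers as purely illustrative:

```python
from math import comb

# Illustrative only: potential pairwise interactions between n features
# grow as n choose 2, i.e. n * (n - 1) / 2, so each new feature adds
# more to the reasoning and testing burden than the last one did.
for n_features in (5, 10, 20, 40, 80):
    pairs = comb(n_features, 2)
    print(f"{n_features:>3} features -> {pairs:>5} potential pairwise interactions")
```

And this is a lower bound, since it ignores three-way and higher-order interactions entirely.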