What did you do re: Captain Awkward advice?
casebash
Red Teaming Climate Change Research—Should someone be red-teaming Rationality/EA too?
Yeah, I have a lot of difficulty understanding Lou’s essays as well. Nonetheless, there appear to be enough interesting ideas there that I will probably reread them at some point. I suspect that writing a summary of the points he is making as I go might help clarify things.
“Rationality gives us a better understanding of the world, except when it does not”
I provided this as an exaggerated example of how aiming for absolute truth can mean that you produce an ideology that is hard to explain. More realistically, someone would write something along the lines of “rationality gives us a better understanding of the world, except in cases a), b), c)…”, but if there are enough of these cases and they are complex enough, then in practice people round it off to “X is true, except when it is not”, i.e. they don’t really understand what is going on, as you’ve pointed out.
The point was that there are advantages to creating a self-conscious ideology that isn’t literally true but has known flaws, such as it becoming much easier to actually explain, so that people don’t end up confused as above.
In other words, as far as I can tell, your comment isn’t really responding to what I wrote.
Can you add any more detail on what precisely Continental Rationalism is? Or, even better, if you have time, it’s probably worth writing up a post on this.
Self-conscious ideology
Additionally, how come you posted here instead of on the Effective Altruism forum: http://effective-altruism.com/?
If you want casual feedback, probably the best location currently is: https://www.facebook.com/groups/eahangout/.
I definitely think it would be useful; the problem is that building such a platform would probably take significant effort.
There are a huge number of “ideas” startups out there. I would suggest taking a look at them for inspiration.
I think the reason why cousin_it’s comment is upvoted so much is that a lot of people (including me) weren’t really aware of S-risks or how bad they could be. It’s one thing to just make a throwaway line that S-risks could be worse, but it’s another thing entirely to put together a convincing argument.
Similar ideas have appeared in other articles, but they were framed in terms of energy efficiency while relying on unfamiliar terms such as computronium or the two-envelopes problem, which makes them much less clear. I don’t think I saw the links to either of those articles before, but if I had, I probably wouldn’t have read them.
I also think that the title helps. S-risks is a catchy name, especially if you already know about x-risks. I know that this term has been used before, but it wasn’t used in the title. Further, while the earlier article is quite good, you can read its summary, introduction and conclusion without encountering the idea that the author believes s-risks are much greater than x-risks, as opposed to being just yet another risk to worry about.
I think there’s definitely an important lesson to be drawn here. I wonder how many other articles have gotten close to an important truth, but just failed to hit it out of the park for some reason or another.
Thanks for writing this post. Actually, one thing that I really liked about CFAR is that they gave a general introduction at the start of the workshop about how to approach personal development. This meant that everyone could approach the following lectures with an appropriate mindset of how they were supposed to be understood. I like how this post uses the same strategy.
Part of the problem at the moment is that the community doesn’t have a clear direction like it did when Eliezer was in charge. There was talk about starting an organisation in charge of spreading rationality before, but this never actually seems to have happened. I am optimistic about the new site that is being worked on, though. Even though content is king and I don’t know how much any of the new features will help us increase the amount of content, I think that the psychological effect of having a new site will be massive.
I probably don’t have time to be involved in this, but I’m just commenting to note my approval for this project and appreciation for anyone who chooses to contribute. One major advantage of this project is that any amount of effort here will provide value: it isn’t like a spaceship, which isn’t useful when only half built.
The fact that an agent has chosen to offer the bet, as opposed to the universe, is important in this scenario. If they are trying to make money off you, then the way to do that is to offer an unbalanced bet in the expectation that you will take the wrong side. So, for example, you might think you have inside information, while they know that it is actually unreliable.
The problem is that you always have to play when they want to, whilst the other person only has to play when it suits them.
So I’m not sure if this works.
Partial analysis:
Suppose David is willing to stake 100:1 odds against Trump winning the presidency (before the election). Assume that David is a perfectly rational agent who can utilise his available information to calculate odds optimally, or at least as well as Cameron, so offering these odds suggests David has some quite significant information.
Now, Cameron might have his own information that he suspects David does not have, and Cameron knows that David has no way of knowing that he has it. Taking this information into account, along with the fact that David offered to stake 100:1 odds, Cameron might calculate the odds at 80:1 once his private information is incorporated. This would suggest that Cameron should take the bet, since the offered odds are better than his own estimate. Except, perhaps David suspected that Cameron had some inside info and actually thinks the true odds are 200:1; he only offered 100:1 to fool Cameron into thinking the bet was better than it was, meaning that the bet is actually bad for Cameron despite his inside info.
Hmm… I still can’t get my head around this problem.
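To at least pin down the numbers, here is a rough expected-value check in Python (a quick sketch of my own; the probabilities are just the ones implied by the odds quoted above, and the unit stakes are assumed purely for illustration):

```python
# Rough sketch with assumed stakes: David stakes 100:1 against Trump winning,
# i.e. David puts up 100 units against Cameron's 1 unit, and Cameron collects
# the 100 units if Trump wins.

def cameron_ev(p_trump_wins, stake_david=100, stake_cameron=1):
    """Cameron's expected profit from taking the bet, given his probability."""
    return p_trump_wins * stake_david - (1 - p_trump_wins) * stake_cameron

# If Cameron's inside information puts the odds at 80:1 (p = 1/81),
# the offered 100:1 looks profitable to him:
print(cameron_ev(1 / 81))   # ~ +0.25 per unit staked

# But if David's true estimate is 200:1 (p = 1/201) and he only quoted 100:1
# as bait, the bet is actually bad for Cameron:
print(cameron_ev(1 / 201))  # ~ -0.50 per unit staked
```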
Thanks for posting this. I’ve always been skeptical of the idea that you should offer two-sided bets, but I never broke it down in detail. Honestly, that is such an obvious counter-example in retrospect.
That said, “must either accept the bet or update their beliefs so the bet becomes unprofitable” does not work. The offering agent has an incentive to only ever offer bets that benefit them, since only one side of the bet is on offer; the toy simulation below illustrates this.
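Here is that toy simulation (my own sketch, not anything from the post; the noise model and bet structure are assumptions chosen purely for illustration). The offerer quotes bets that are exactly fair under the taker’s beliefs, but only offers the ones that are favourable under its own better information, and so profits on average:

```python
import random

random.seed(0)

def simulate(n_events=100_000, noise=0.1):
    """Average profit per offered bet for an offerer who only offers
    bets that are favourable under its own (better) information."""
    offerer_profit = 0.0
    offers_made = 0
    for _ in range(n_events):
        p = random.random()  # true probability, known to the offerer
        # The taker only has a noisy estimate of p:
        q = min(max(p + random.uniform(-noise, noise), 0.01), 0.99)

        # Bet terms: the taker pays 1 unit if the event doesn't happen and
        # receives (1 - q) / q units if it does -- zero expected value under q.
        payout = (1 - q) / q
        taker_ev_true = p * payout - (1 - p)  # taker's EV under the true p

        if taker_ev_true < 0:  # offerer only offers bets it expects to win
            offers_made += 1
            happened = random.random() < p
            offerer_profit += -payout if happened else 1.0

    return offerer_profit / offers_made

print(simulate())  # positive: each bet looks fair to the taker, but the
                   # selection of which bets get offered favours the offerer
```

The bets are individually fair under the taker’s beliefs, so the taker can’t object to any single one, yet they lose on average because the offerer chooses which bets to put on the table.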
I’m not certain (without much more consideration), but Oscar_Cunningham’s solution of always taking one half of a two-sided bet sounds more plausible.
What is Esalen?
What’s Goodhart’s Demon?
The biggest challenge with getting projects done within the Less Wrong community will always be that people have incredibly different ideas of what should be done. Everyone has their own ideas, and few people want to join in on other people’s. I will definitely be interested to see how things turn out after 3 months.
I like the idea of spreading popularity around when justified, i.e. high-status people pointing out when someone has a particular set of knowledge that others may not realise they could benefit from, or giving them credit for interesting ideas. These things seem important for a strong community and additionally benefit the rest of the community by allowing people to take advantage of each other’s skills.
Link doesn’t seem to be working: http://reason.com/blog/2017/07/06/red-teaming-climate-chang1