Done. I hope this data helps LW/CFAR.
Bruno_Coelho
Students are often quite capable of applying economic analysis to emotionally neutral products such as apples or video games, but then fail to apply the same reasoning to emotionally charged goods to which similar analyses would seem to apply. I make a special effort to introduce concepts with the neutral examples, but then to challenge students to wonder why emotionally charged goods should be treated differently.
-- R. Hanson
Is SIAI planning to publish more in academic journals?
The most astonishingly incredible coincidence imaginable would be the complete absence of all coincidences.
-- John Allen Paulos (from Beyond Numeracy)
How to signal intelligence: hang around with professors and do fancy experiments.
A good network makes you more likely to succeed if you are already smart. The academic platform is inefficient; don't expect to learn the best insights from there.
'Conference' sounds like a bunch of specialists in one place discussing some topic.
'Course' indicates training, which, I suppose, is the goal.
What is the main difference from traditional NGOs?
I've seen this movement and assume these people have some different method for tackling problems like poverty, disease, altruism-whatever.
Is it all about donating? Or do some plans involve direct action and activism?
The big problems facing science are management problems. We don't know how to identify important areas of study, or people who can do good science, or good and important results.
Bostrom's "Predictions from Philosophy" gives similar advice, though not specifically to scientists. In both cases the solution is to focus cognitive resources on strategic analysis, I suppose. However, it is really difficult to implement this on a large scale without hurting egos.
Wasn't CT supposed to be connected with actual cognitive science? Objection #3 seems conclusive. Still, I'm curious to know what the initial motivation was to create something original.
I see some skeptics of the singularity and analyze their arguments, but there is something I cannot deny: lukeprog (and others) are really trying to solve FAI. Even if in the near future we begin to encounter evidence in favor of another risk, the comprehension of fragility leads us to modify our priorities.
This anti-academic feeling is something I associate with LessWrong, mostly because people can find programming jobs without necessarily having a degree.
Both positive and negative black swans. Additionally: randomness and regression to the mean.
This is an academic habit, but one vulnerable to group bias. Normally, you don't send drafts to experts who strongly disagree with your claims, but to close friends who want to read what you write.
Drugs in general are used mostly in social contexts, with internal deliberation about IQ loss versus near-term rewards. If you are aware, the goal change normally occurs when the incentives prompt the question: how many IQ points would you sacrifice to win this friend/girl/promotion?
In some cases, being nice to someone lets them maintain epistemic states their future self probably does not want to be in. In such cases, every time you keep quiet about an error or a wrong belief, you lose.
Being nice and pursuing truth are not mutually exclusive. But if a portion of LWers downvoted Todd's post, it was for some reason, even if the altruism movement is based on good data.
To be clear, I support 80k, but as with any other movement that tries to save the world, it takes time to see the benefit.
With close friends this works: saying "I believe X" signals uncertainty where someone could help with available information. But in public debates, if you say "I believe X" instead of "X", people will find you more confident and secure.
You merged two words into one, and I'm still confused.
Prize winners gain not only the money but also the status, and this future gain matters all the way down. In some fields, like mathematics, a prize is synonymous with real advancement, which is less clear in others, like philosophy.
Somehow, LW/MIRI can't disentangle research from weirdness. Vassar is one of the people who, in public interviews, ends up giving this impression.
Hi everybody,
I'm male, 24, a philosophy student, and I live in the Amazon, Brazil. I came across LessWrong through the zombies sequence, because in the beginning one of my intellectual interests was analytic philosophy. I saw that reductionism and rationality have the power to answer various questions, reframing them as something factually tractable. My goals here are to contribute to the community in a useful way, learn as much as possible, become stronger, and save the world by reducing the risk of human extinction. I'm looking for advice on these topics: Bayesian epistemology, moral uncertainty, and the complexity of desires. If some of the participants in the forum can help me, I will be very grateful.