I’d say Nick Bostrom (a respected professor at Oxford) writing Superintelligence (and otherwise working on the project), this (https://twitter.com/elonmusk/status/495759307346952192), and some high-profile research associates and workshop attendees (Max Tegmark, John Baez, quite a number of Google engineers) give FAI much more legitimacy than connection theory.
I’m currently interning at MIRI. I’ve had a short technical conversation with Eliezer and a multi-hour conversation with Michael Vassar, and other people seem to be taking me as somewhat of an authority on AI topics.
I think we could rewrite Eliezer’s articles. I would disagree with the statement that they are “so good”. The material is great, of course, but the way he goes about conveying it is not for everyone. I can’t really see a cohesive overall structure as I go through, and frequently I am not sure what point he is making. His use of parable just obfuscates the point for me; his constant references to his story “The Simple Truth” in Map and Territory really bothered me, because that story was difficult for me to get through and I just wanted to see his point in plain text. I still have trouble organizing LW material into an easy-to-think-about structure. What I am looking for is something more resembling a textbook: very structured, somewhat dry writing (yes, I actually prefer that), maybe some diagrams. I’d do it, but I am not sure I have a strong enough understanding of the material to do so.
Is it just me or has no one in the story really considered that Quirrell = mort? Like, why does the hypothesis that Quirrell = Grindelwald briefly come up first? Why is everyone blindly trusting him even when they think he might be responsible for some of the bad stuff going on? It seems like everyone is doing some serious mental gymnastics to avoid considering that he is actually seriously evil (esp. Hogwarts faculty and Harry).
Does anyone know of a good textbook on public relations (PR), or a good resource/summary of the state of the field? I think it would be interesting to know about this, especially with regards to school clubs, meetups, and online rationality advocacy.
Bayes is epistemological background, not a toolbox of algorithms.
I disagree: I think you are lumping together two things that don’t necessarily belong together. There is Bayesian epistemology, which is philosophy describing in principle how we should reason, and there is Bayesian statistics, which is something that certain career statisticians use in their day-to-day work. I’d say that frequentism does fairly poorly as an epistemology, but it seems like it can be pretty useful in statistics if used “right”. It’s nice to have clean principles underlying your statistics, but sometimes ad hoc methods, experience, and intuition just work.
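To make the statistics half of that distinction concrete, here is a minimal sketch; the coin-flip data, the Beta(1, 1) prior, and the normal-approximation confidence interval are my own illustrative choices, not anything from the original comment.

```python
import numpy as np
from scipy import stats

# Toy data: 7 heads out of 10 flips (numbers chosen purely for illustration).
heads, flips = 7, 10
tails = flips - heads

# Bayesian route: Beta(1, 1) prior on the coin's bias -> Beta(1 + heads, 1 + tails) posterior.
posterior = stats.beta(1 + heads, 1 + tails)
print("posterior mean:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))

# Frequentist route: maximum-likelihood estimate plus a normal-approximation confidence interval.
p_hat = heads / flips
se = np.sqrt(p_hat * (1 - p_hat) / flips)
print("MLE:", p_hat)
print("approx. 95% confidence interval:", (p_hat - 1.96 * se, p_hat + 1.96 * se))
```

On an easy problem like this the two routes give similar numbers; the disagreement is mostly about what those numbers mean, which is the epistemology side.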
How do you upgrade people into rationalists? In particular, I want to upgrade some younger math-inclined people into rationalists (peers at university). My current strategy is:
incidentally name drop my local rationalist meetup group (e.g. “I am going to a rationalist’s meetup on Sunday”)
link to Lesswrong articles whenever relevant (rarely)
be awesome and claim that I am awesome because I am a rationalist (which neglects a bunch of other factors for why I am so awesome)
when asked, motivate rationality by pointing to a whole bunch of cognitive biases and to the fact that we don’t naturally have principles of correct reasoning; we just do what intuitively seems right
This is quite passive (other than name dropping and article linking) and mostly requires them to ask me about it first. I want something more proactive that is not straight up linking to Lesswrong, because the first thing they go to is The Simple Truth and immediately get turned off by it (The Simple Truth shouldn’t be the first post in the first sequence that you are recommended to read on Lesswrong). This has happened a number of times.
“I need access to the restricted section, I don’t want another one of my friends to die”
I would suspect that an argument along those lines would be much more likely to succeed if Quirrell hadn’t given his instructions.
I am a young person (20) who is good at math and hasn’t been entrenched in the system yet. I am also already on board with AI risk reduction. I would really like to work as a researcher.
However, I don’t have much to show for myself, and I don’t think I can substantiate my claims right now. I do not know enough about research to know if I am going to be good at it. At the moment, I have a pretty good topical view of math, but not a very good technical view—I am only into second year university math. Pure math and theoretical comp sci especially appeal to me.
How do I find out if I can be a researcher? How do I show you that I can be a good researcher if I find that I can in fact become a good researcher? What sort of math should I be studying—any textbooks to recommend?
This is more to address the common thought process “this person disagrees with me, therefore they are an idiot!”
Even if they aren’t very smart, it is better to frame them as someone who isn’t very smart rather than with a directly derogatory term like “idiot.”
My meals are not at all regular and very difficult to measure—not low hanging fruit for everyone.
Burning cats is another good example. Can you feel how much fun it is to burn cats? Some people used to have all sorts of fun by burning cats. And it is harder to excuse this one with the wrong sort of justification based on bad models than either burning witches or torturing heretics.
Edit: Well, just scrolled down to where you talk about torturing animals. Beat me to it I guess...
The most charitable take on it that I can form is similar to Scott’s take on MBTI (http://slatestarcodex.com/2014/05/27/on-types-of-typologies/). It might not be validated by science, but it provides a description language with a high degree of granularity for something that most people don’t have a good description language for. So with this interpretation, it is more of a theory in the social sciences sense, a lens through which to look at human motivation, behaviour, etc. This probably differs from, and is a much weaker claim than, what people at Leverage would make.
I don’t know how I feel about the allegations at the end. It seems that, other than connection theory, Leverage is doing good work, and having more money is generally better. I would neither endorse nor criticize their use of it, but since I don’t want those tactics used by arbitrary people, I’d fall on the side of criticizing. I would also recommend that the aforementioned creator not be so open about his ulterior motives and some other things he has mentioned in the past. All in all, Connection Theory is not what Leverage is selling it as.
Edit: I just commented on the theory side of it. As for the therapy side (or however they are framing the actual actions side): a therapy doesn’t need its underlying theory to be correct in order to be effective. I am rather confident that actually doing the connection theory exercises will be fairly beneficial, though actually doing a lot of things coming from psychology will probably be fairly beneficial too. And other than the hole in your wallet, talking to the aforementioned creator probably is as well.
In the transparent-box Newcomb’s problem, in order to get the $1M, do you have to (precommit to) one-box even if you see that there is nothing in box A?
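For what it’s worth, here is a minimal sketch of one common formalization (a perfect predictor fills box A with the $1M iff its prediction of your choice matches what you actually do in that world); the specific policies, payoffs, and fixed-point rule are my own assumptions for illustration, not part of the question.

```python
# Enumerate simple policies for transparent Newcomb and check which worlds
# are self-consistent: the predictor fills box A iff it predicts one-boxing,
# and a world counts only if the prediction matches the policy's actual choice.

POLICIES = {
    "always one-box":        lambda a_full: True,
    "always two-box":        lambda a_full: False,
    "one-box only if full":  lambda a_full: a_full,
    "one-box only if empty": lambda a_full: not a_full,
}

def payoff(a_full: bool, one_boxes: bool) -> int:
    """Box A holds $1M when filled; box B always holds $1K."""
    return (1_000_000 if a_full else 0) + (0 if one_boxes else 1_000)

for name, policy in POLICIES.items():
    # A world (box A full or empty) is consistent when the policy's choice in
    # that world matches the prediction that produced it: one-box <-> A full.
    consistent = [(a_full, payoff(a_full, policy(a_full)))
                  for a_full in (True, False)
                  if policy(a_full) == a_full]
    print(f"{name:24s} consistent outcomes: {consistent}")
```

Under this toy model, “one-box only if box A is full” has two self-consistent outcomes ($1M or $1K), which is exactly why the question of precommitting to one-box even on seeing an empty box comes up.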
I want to improve my exposition and writing skills, but whenever I think “what do I know that I can explain to people that isn’t explained well elsewhere?” not much comes to mind. I think that happens because it is hard to just do a search of everything that I know. The main topics that I know are math and rationality (mostly LW epistemic rationality, but also a little instrumental and LW moral philosophy). So I ask:
What is a topic in math or rationality that you wish were explained better or explained at a different level (casual, technical, etc.) than what already exists? Like, something that you know now but wish had been explained to you better, something that you don’t know but wish you did, or something that you wish you could explain to other people but don’t know of any sources to send them to.
Since reading Lesswrong, I try to argue more. But not to win; rather, I approach it as trying to understand how the other person thinks and to modify how they think. Lesswrong has allowed me to concede points that I don’t agree with, because I know that I can’t change their mind yet. It’s fun.
He doesn’t need to stall for time to transfigure. He could have already been doing it over the last two chapters.
If you want a more precise date for whatever reason, it was right at the end of the July 2013 workshop, which was July 19-23. There were a number of Leverage folk who had just started the experiment there.
In that link, is that the 3-dimensional analog of living on a 2D plane with a hole in it, where when you enter the hole you flip to the other side of the plane? (Or: take a torus, cut along the circle farthest from the center, and extend the new edges out to infinity?)
“How do you not have arguments with idiots? Don’t frame the people you argue with as idiots!”
-- Cat Lavigne at the July 2013 CFAR workshop