I have donated $1000, and I really do believe that our community can get her fully funded. I understand how CI has to be cautious about these sorts of things, but I’ve seen enough evidence to be more than convinced.
There are a lot of things I’d like to say, but you have put forth a prediction:
“It’s probably a scam.”
I would like to take up a bet with you on this ending up being a scam. This can be arbitrated by some prominent member of CI, Alcor, or Rudi Hoffman. I would win if an arbiter decides that the person who posted on Reddit was in fact diagnosed with cancer essentially as stated in her Reddit posts, and is in fact gathering money for her own cryonics arrangements. If none of the proposed arbiters can vouch for the above within one month (through September 18), then you will win the bet.
What odds would you like on this, and what’s the maximum amount of money you’d put on the line?
What to do after college?
Well-written post that will hopefully stir up some good discussion :)
My impression is that LW/EA people prefer to avoid conflict, and, when conflict is necessary, don’t want to use misleading arguments/tactics (with BS regulations seen as such).
Is the ordering intended to reflect your personal opinions, or the opinions of people around you/society as a whole, or some objective view? Because I’m having a hard time correlating the order with anything in my world model.
Sounds intriguing! You have a GitHub link? :)
You’d be more likely to get a meaningful response if you sold the article a little bit more. E.g. why would we want to read it? Does it seem particularly good to you? Does it draw a specific interesting conclusion that you particularly want to fact-check?
Presents for improving rationality or reducing superstition?
How is it that authors get reclassified as “harmful”, as happened to Wright and Stross? Do you mean that later works become less helpful? How would earlier works go bad?
Done. $100 from you vs $1000 from me. If you lose, you donate it to her fund. If I lose, I can send you the money or do with it what you wish.
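To spell out the implied odds at these stakes (my own back-of-the-envelope arithmetic, not part of the agreed terms): taking p as the probability that this is a scam, my side of the bet breaks even when

\[
p \cdot \$1000 \;=\; (1-p) \cdot \$100
\quad\Longrightarrow\quad
p \;=\; \frac{100}{1100} \;\approx\; 9.1\%
\]

So offering the $1000 side is positive expected value for me only if I put the probability of a scam below about one in eleven, and taking the $100 side pays off only if you put it above that.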
HP:MoR 82
The two of them did not speak for a time, looking at each other; as though all they had to speak could be said only by stares, and not said in any other way.
Wizard People, Dear Readers
He gives up on using his words and tries to communicate with only his eyes. Oh, how they bulge and struggle to convey unthinkable meaning!
Was there any inspiration?
Being at Harvey Mudd, I’ll definitely attend, though I doubt I can help anyone with transportation :)
From a Hacker News thread on the difficulty of finding or making food that’s fast, cheap, and healthy.
“Former poet laureate of the US, Charles Simic says, the secret to happiness begins with learning how to cook.”—pfarrell
Reply: “Well, I’m sure there’s some economics laureate out there who says that the secret to efficiency begins with comparative advantage.”—Eliezer Yudkowsky
No super detailed references that touch on exactly what you mention here, but https://transformer-circuits.pub/2021/framework/index.html does deal with some similar concepts with slightly different terminology. I’m sure you’ve seen it, though.
“Alas, querying counterfactual worlds is fundamentally not a thing one can do simply by prompting GPT.”
Citation needed? There’s plenty of fiction to train on, and those works are set in counterfactual worlds. Similarly, historical, mistaken, etc. texts will not be talking about the Current True World. Sure, right now the prompting required is a little janky, e.g.:
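Something in the spirit of this hypothetical framing (the prompt wording and the stand-in completion call below are my own illustrations, not a specific model’s API):

```python
# Hypothetical sketch: frame the prompt as fiction set in a counterfactual
# world, so the model's continuation describes that world rather than ours.

prompt = (
    "The following is an excerpt from a novel set in a world where the "
    "transistor was never invented.\n\n"
    "Chapter 1\n\n"
    "By 1995, the great vacuum-tube exchanges of London still"
)

# Stand-in for whatever text-completion interface is at hand;
# `model.complete` is not a real library call here.
# continuation = model.complete(prompt, max_tokens=200)
# print(prompt + continuation)
```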
But this should improve with model size, improved prompting approaches, or other techniques like creating optimized virtual prompt tokens.
Also, if you’re going to ask the model for something far outside its training distribution, like “a post from a researcher in 2050”, why not instead ask for “a post from a researcher who’s been working in a stable, research-friendly environment for 30 years”?
I agree; I’ve felt something similar from having kids. I’d also read the relevant Paul Graham bit, and it wasn’t really quite as sudden or dramatic for me. But it has had a noticeable long-term effect. I’d previously been okay with kids, though I didn’t especially seek out their company or anything. Now it’s more fun playing with them, even apart from my own children. No idea how it compares to others, including my parents.
Came here to post something along these lines. One very extensive commentary with reasons for this is https://twitter.com/kamilkazani/status/1497993363076915204 (warning: long thread). I’ll summarize when I can get to a laptop later tonight, or other people are welcome to do it.
https://youtu.be/QMqPAM_knrE is a video of one of the authors presenting this research.
The general plan for this month’s meetup is to try to get more people unfamiliar with LW and x-rationality (particularly other HMC students) to come. I’m not sure to what extent this will be successful, but if it is, it would be nice to have some introductory talks about how rationality can have good practical benefits and help you achieve your goals.
I’d encourage people who are planning on coming to have some examples from their own lives of how rationality has been particularly useful.
Sounds similar to what this book claimed about some mental illnesses being memetic in certain ways: https://astralcodexten.substack.com/p/book-review-crazy-like-us