Defeating Moloch, one optimisation at a time | MSc student in AI, currently working on generative modelling for bioinformatics | Machine learning, game theory, personalised medicine | Co-host of Bit of a Tangent podcast and Narrator of Replacing Guilt audiobook | gianlucatruda.com
gianlucatruda
The full audiobook is now available at https://anchor.fm/guilt/episodes/Replacing-Guilt-full-audiobook-e13ct4d/a-a5vrdtu
Thanks for compiling the series like this. I really appreciated being able to read it on my Kindle!
To help make Nate’s ideas even more accessible, I’m currently producing an audio version. It can be found at https://anchor.fm/guilt or by searching “Replacing Guilt Podcast” on all podcast platforms. I intend to make a single audiobook out of it at the end too*.
If you know of people who would benefit from Replacing Guilt, but primarily consume audio instead of reading, please do forward it their way.
*All with Nate’s permission, of course
This is a superb summary! I’ll definitely be returning to this as a cheatsheet for the core ideas from the book in future. I’ve also linked to it in my review on Goodreads.
it’s straightforwardly the #1 book you should use when you want to recruit new people to EA. [...] For rationalists, I think the best intro resource is still HPMoR or R:AZ, but I think Scout Mindset is a great supplement to those, and probably a better starting point for people who prefer Julia’s writing style over Eliezer’s.
Hmm… I’ve had great success with the HPMOR / R:AZ route for certain people. Perhaps Scout Mindset has been the missing tool for the others. It also struck me as a nice complement to Eliezer’s writing, in terms of both substance and style (see below). I’ll have to experiment with recommending it as a first intro to EA/rationality.
As for my own experience, I was delightfully surprised by Scout Mindset! Here’s an excerpt from my review:
I’m a big fan of Julia and her podcast, but I wasn’t expecting too much from Scout Mindset because it’s clearly written for a more general audience and was largely based on ideas that Julia had already discussed online. I updated from that prior pretty fast. Scout Mindset is a valuable addition to an aspiring rationalist’s bookshelf — both for its content and for Julia’s impeccable writing style, which I aspire to.
Those familiar with the OSI model of networking will know that there are different layers of protocols. The IP protocol, which dictates how packets are routed, sits at a much lower layer than the HTTP protocol, which dictates how applications interact. Similarly, Yudkowsky’s Sequences can be thought of as the lower layers of rationality, whilst Julia’s work in Scout Mindset provides the protocols for the higher layers. The Sequences are largely concerned with what rationality is, whilst Scout Mindset presents tools for practically approximating it in the real world. It builds on the “kernel” of cognitive biases and Bayesian updating by considering what mental “software” we can run on a daily basis.
The core thesis of the book is that humans default towards a “soldier mindset,” where reasoning is like defensive combat. We “attack” arguments or “concede” points. But there is another option: “scout mindset,” where reasoning is like mapmaking.
The Scout Mindset is “the motivation to see things as they are, not as you wish they were. [...] Scout mindset is what allows you to recognize when you are wrong, to seek out your blind spots, to test your assumptions and change course.”
I recommend listening to the audiobook version, which Julia narrates herself. The book is precisely as long as it needs to be, with no fluff. The anecdotes are entertaining and relevant and were entirely new to me. Overall, I think this book is a 4.5/5, especially if you actively try to implement Julia’s recommendations. Try out her calibration exercise, for instance.
Most rationalists are heavily invested in AGI in non-monetary ways — career paths, free time, hopes for longevity/coordination breakthroughs. As other commenters have pointed out, if humanity achieves aligned AGI in the future, financial returns will plausibly be far less important. Given that, maybe the best investments are bets against AGI, as a hedge against the possibility that humanity doesn’t achieve it.
There are 3 futures: If we achieve aligned AGI, we win the game and nothing else matters*. If we achieve misaligned AGI, we die and nothing else matters. If we fail to achieve AGI at all, then we’ve wasted a lot of our time, careers, and hopes. In that case, we want investments to fall back on.
In that 3rd future, what commodities and equities are most successful? Can we buy those now?
*subject to accepting the singularity-like premise.
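To make the hedging intuition above concrete, here’s a toy expected-value sketch. The probabilities, payoff multipliers, and the `effective_expected_value` helper are all made-up illustrations, not figures from the original discussion — the point is just that a portfolio’s payoff only matters in the worlds where money still matters:

```python
# Toy sketch of "bet against AGI" as a hedge. All numbers are placeholders.
scenarios = {
    # name: (probability, payoff multiplier of a "no-AGI bet" portfolio,
    #        how much financial returns matter in that world)
    "aligned_agi":    (0.3, 0.5, 0.0),  # we win the game; money barely matters
    "misaligned_agi": (0.3, 0.0, 0.0),  # we lose; money doesn't matter
    "no_agi":         (0.4, 2.0, 1.0),  # business as usual; money matters a lot
}

def effective_expected_value(scenarios):
    """Expected payoff, weighted by how much returns matter in each world."""
    return sum(p * payoff * relevance
               for p, payoff, relevance in scenarios.values())

print(effective_expected_value(scenarios))  # only the no-AGI world contributes
```

Under these (arbitrary) numbers, all of the expected value comes from the no-AGI future, which is the argument for optimising your portfolio for that world specifically.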
Okay, I absolutely love this post! In fact, if you were to break it down into three posts, I would probably have been a serious fan of all of them individually.
Firstly, the expected utility formulation of lateness is excellent and explains a lot of my personal behaviour. I’m aggressively early for important events like client meetings and interviews, but consistently tardy when meeting for coffee or arriving for a lecture. Whilst your methodology focussed on unobservable shifts to the time axis, I suspect there are also interesting gains to be made in reshaping the utility curve — for instance, by always carrying reading material, like korin43 mentions in another comment.
Secondly, your approach to self-blinding is fantastic. I do a lot of Quantified-Self research and self-blinding is one of the most challenging and essential components of interventional QS studies. I really like how your protocol builds from the theoretical formulation you created and acts as a convolution on the utility function. I had a little nerdgasm when reading that part!
Thirdly, the fact that you collected and visualised data to evaluate the methodology is outdone only by how pretty your plot is.
Finally, it would be remiss of me not to comment on your excellent use of humour. I chuckled multiple times whilst reading. Expertly balanced and timed to resonate with the tone of the technical content.
Was about to comment the same thing. Saving it to my Wisdom List.
UPDATE: I’ve published the list here: https://gianlucatruda.com/blog/2021/07/08/wisdom.html
Great summary! For those reading the comments, there is a growing Rationalist-oriented community on Clubhouse. Join here: https://www.joinclubhouse.com/club/rationality-live
This is a superb overview! I’ve used Vim for about 2 years now, but I still learned a bunch of things from this post that I didn’t pick up from other cheatsheets or articles.
My 2-cents: Vim itself is powerful as an editor, but I always missed some IDE features. What I’ve come to realise is that the real power of Vim is not the editor, but the keybindings. I installed the Vim extension in VSCode some time ago and have loved the hybrid workflow. Since then, I’ve been gradually incorporating Vim keybindings into all the tools I use for text — like Overleaf for writing papers in LaTeX and Zettlr for writing notes in Markdown. I still use Vim itself for small scripts and quickly editing files. It’s so powerful being able to go between applications and never have to think about what your fingers are doing to transform ideas into output.
One thing I still haven’t quite figured out is in-browser text entry. So far, I haven’t liked the solutions I’ve found, but it’s something I’m looking into for the future. Writing this comment without my usual keybindings is… slow.
I present to you VQGAN+CLIP’s take on a Bob Ross painting of Bob Ross painting Bob Ross paintings 😂 This surpassed my wildest expectations!
I don’t feel like it’s the kind of polished thing I’d put on LW. But here it is on my blog: gianlucatruda.com/blog/2021/07/08/wisdom.html
My Wisdom List: gianlucatruda.com/blog/2021/07/08/wisdom.html
Try joining communities/clubs on topics you’re interested in. Then any rooms started by their members should pop up in your lobby. Also, I’ve heard that following people you’re interested in helps improve the suggestions.
which might happen in 1-2 years and tank crypto-mining completely.
Good point. But that would be a much better time to buy in for long-term value.
I’d love to do that sometime (timezones permitting). I’m @gianlucatruda on Clubhouse.
Seems that there isn’t yet a robust way to share these new communities (that I’ve found). But I’m glad you’re finally in. Looking forward to some future conversations!
I don’t think it’s you. These in-app communities are a brand new feature, so I suspect it’s still a bit buggy. Thanks for letting me know.
Try visiting this event link from your phone and then tap on the club name. Does that work? I’ll also try to invite you directly from the app.
I just discovered this now, Zvi. It’s such a great heuristic!
I whipped up an interactive calculator version in Desmos for my own future reference, but others might find it useful too: https://www.desmos.com/calculator/pf74qjhzuk
I’ll DM you :)
Apologies for the late reply. Thanks for your kind words and support!
My Replacing Guilt output has been very low lately, but I’ll have some more time flexibility in the near future and will start making progress again.
This is a fascinating strategy and I’m surprised it worked so well. The linked NYT article with the list of questions is paywalled, though.
After a bit of digging, this seems to be the original study for which the questions were formulated: https://journals.sagepub.com/doi/pdf/10.1177/0146167297234003. The various question sets are listed in the Appendix, which starts on page 12.
And this site seems to be an open-access, interactive mirror of the 36 questions from the NYT article: http://36questionsinlove.com/