Seeking to exchange ideas and learn about what matters.
Alexander
Excessive Nuance and Derailing Conversations
Book Review: Being You by Anil Seth
On Tables and Happiness
I was looking for a lesson we could learn from this situation, and your comment captures one well.
This is reminiscent of the message of the Dune trilogy. Frank Herbert warns about society’s tendency to “give over every decision-making capacity” to a charismatic leader. Herbert said in 1979:
The bottom line of the Dune trilogy is: beware of heroes. Much better rely on your own judgment, and your own mistakes.
I think you are getting at something here, Duncan. I’ve become interested in the following question lately: “How should rationalists conduct themselves if their goal is to promote rationality?” Now, I understand that promoting rationality is not every rationalist’s top priority, hence I stated that condition explicitly.
I’ve been thoroughly impressed by how Toby Ord conducts himself in his writings and interviews. He is kind, respectful, and reassuring, and, most importantly, he doesn’t engage in fear-mongering despite working on x-risks. In an EA interview, he said, “Let us not get into criticising each other for working on the second most important thing.” I found this stunningly thoughtful and virtuous, and an excellent example of someone pursuing their goals effectively.
As much as I like Dawkins and love his books, I will admit that his attitude sometimes undermines his own goals. I recall hearing (I forget where) that before a debate on spirituality, Dawkins’ interlocutor asked him to read some documents ahead of the discussion. Dawkins showed up having not read them and said, “I did not read your documents because I know they are wrong.” [citation needed] This attitude might have amused some in the audience, but, on the whole, it is irrational given that the goal is to promote science.
Whenever I engage in motivated reasoning, motivated scepticism or subtle ad hominem in an argument, I can feel it. It feels like I am making a mistake: a vague sense of guilt and confusion in the back of my head and a lump in my throat. I like the idea of leaning into confusion, which I recall coming across in the Sequences and elsewhere on LessWrong. Still, I would like to become more proficient at avoiding these mistakes in the first place.
Since it was posted, I have been closely following the “My experience at and around MIRI and CFAR” post, but I didn’t know who or what to believe. Anecdotes were flying every which way. Given this confusion, I became more interested in learning a lesson from the situation than in picking sides, debunking claims or pointing fingers at culprits.
[Question] How to select a long-term goal and align my mind towards it?
Explanations as Hard to Vary Assertions
Source (emphasis added by me):
Large ground based telescopes can make images as sharp as or sharper than the Hubble Space Telescope, but only if atmospheric blurring is corrected. Previously, the deformable mirrors available to do this were small, flat, and relatively inflexible. They could be used only as part of complex instruments attached to conventional telescopes.
But in this new work, one of the two mirrors that make up the telescope optics is used to make the correction directly. The new secondary mirror makes the entire correction with no other optics required, making for a more efficient and cleaner system.
Like other secondary mirrors, this one is made of glass over 2 feet in diameter and is a steeply curved dome shape. But under the surface, it is like no other. The glass is less than 2 millimeters thick (less than eight-hundredths of an inch). It literally floats in a magnetic field and changes shape in milliseconds, virtually real-time. Electro-magnetically gripped by 336 computer-controlled “actuators” that tweak it into place, nanometer by nanometer, the adaptive secondary mirror focuses star light as steadily as if Earth had no atmosphere. Astronomers can study precisely sharpened objects rather than blurry blobs of twinkling light.
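The control idea here is a fast feedback loop: sense the residual wavefront error, nudge the actuators a fraction of the way toward cancelling it, and repeat every millisecond or so. Here is a minimal sketch of that loop in Python; the gain, the noise-free “perfect sensor”, and the static aberration are all my simplifying assumptions, not details from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTUATORS = 336   # actuator count quoted in the article
GAIN = 0.5          # hypothetical feedback gain (fraction of error corrected per cycle)
N_CYCLES = 15       # correction cycles; real systems run on millisecond timescales

# Hypothetical atmospheric aberration at each actuator, in nanometres.
aberration = rng.normal(0.0, 500.0, N_ACTUATORS)

# Commanded mirror surface, initially flat.
mirror = np.zeros(N_ACTUATORS)

for cycle in range(N_CYCLES):
    residual = aberration - mirror   # an idealised, noise-free wavefront sensor
    mirror += GAIN * residual        # nudge each actuator toward the target
    rms = np.sqrt(np.mean((aberration - mirror) ** 2))
    print(f"cycle {cycle:2d}: residual RMS = {rms:7.2f} nm")
```

With a gain below one, the residual shrinks geometrically each cycle. The catch in reality is that the aberration itself changes every few milliseconds, which is why the mirror has to respond in “virtually real-time”.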
Hello,
My name is Alexander, and I live and work as a software engineer in Australia. I studied the subtle art of computation at university and graduated some years ago. I don’t know the demographics of LessWrong, but I don’t imagine myself unique around here.
I am fascinated by the elegance of computation. It is stunning that we can create computers to instantiate abstract objects and their relations using physical objects and their motions and interactions.
I have been reading LessWrong for years but only recently decided to start posting and contributing to the communal effort. I am thoroughly impressed by the high standards maintained here, both in the civility and integrity of discussions and in the quality of the software. I’ve only posted twice and have learnt something valuable both times.
My gateway into Rationality has primarily been through reading books. I became somewhat active on Goodreads some years ago and started posting book reviews as a fun way to engage the community and practise critical thinking and idea generation. I quickly gravitated towards Rationality books and binge-read several of them. Rationality and Science books have been formative in shaping my worldview.
Learning the art of Rationality has had a positive impact on me. I cannot prove a causal link, but it probably exists. Several of my friends have commented that conversations with me have brought them clarity and optimism in recent years. A few of them were influenced enough to start frequenting LessWrong and reading the sequences.
I found Rationality: A-Z to be written in a profound and forceful yet balanced and humane way, and, most importantly, brilliantly witty. I found this quote from Church vs Taskforce awe-inspiring:
If you’re explicitly setting out to build community—then right after a move is when someone most lacks community, when they most need your help. It’s also an opportunity for the band to grow.
Based on my personal experience, LessWrong is doing a remarkable job building out a community around Rationality. LessWrong seems very aware of the pitfalls that can afflict this type of community.
Over on Goodreads, a common criticism I see of Rationality and Effective Altruism is a fear of cultishness (with the less legitimate critics claiming that Rationality is impossible because Hegel said the nature of reality is ‘contradiction’). These critics tend to be wary of such communities’ tendency to reinforce their own biases and apply motivated scepticism to outsider ideas. However, for what it’s worth, that is not what I see around here. As Eliezer elucidates in Cultish Countercultishness, it takes an unwavering effort to resist the temptation towards cultishness. I hope to see this resistance continue!
The theory of ‘morality as cooperation’ (MAC) argues that morality is best understood as a collection of biological and cultural solutions to the problems of cooperation recurrent in human social life. MAC draws on evolutionary game theory to argue that, because there are many types of cooperation, there will be many types of morality. These include: family values, group loyalty, reciprocity, heroism, deference, fairness and property rights. Previous research suggests that these seven types of morality are evolutionarily-ancient, psychologically-distinct, and cross-culturally universal. The goal of this project is to further develop and test MAC, and explore its implications for traditional moral philosophy. Current research is examining the genetic and psychological architecture of these seven types of morality, as well as using phylogenetic methods to investigate how morals are culturally transmitted. Future work will seek to extend MAC to incorporate sexual morality and environmental ethics. In this way, the project aims to place the study of morality on a firm scientific foundation.
Source: https://www.lse.ac.uk/cpnss/research/morality-as-cooperation.
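As a toy illustration of the evolutionary-game-theory angle (my own sketch, not anything from the MAC project itself): “reciprocity”, one of the seven types listed above, is classically modelled as tit-for-tat in an iterated prisoner’s dilemma, where conditional cooperation does well against cooperators while limiting exploitation by defectors:

```python
# Payoff for (my move, their move); C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(p1, p2, rounds=10):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)  # each strategy sees the opponent's history
        s1 += PAYOFFS[(m1, m2)]
        s2 += PAYOFFS[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
    return s1, s2

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploitation limited: (9, 14)
```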
Do you notice your beliefs changing over time to match whatever is most self-serving? I know that some of you enlightened LessWrong folks have already overcome your biases and biological propensities, but I notice that I haven’t.
Four years ago, I was a poor university student struggling to make ends meet. I didn’t have a high-paying job lined up at the time, and I was very uncertain about the future. My beliefs were somewhat anti-big-business and anti-economic-growth.
However, now that I have a decent job, which I’m performing well at, my views have shifted towards pro-economic-growth. I notice myself finding Tyler Cowen’s argument that economic growth is a moral imperative quite compelling, partly because it justifies my current circumstances.
Words cannot possibly express how thankful I am for you doing this!
“Mechanistic” and “reductionist” have somewhat poor branding; this assertion is based on personal experience rather than rigorous data. Many people I know associate “mechanistic” and “reductionist” with negative notions such as “life is inherently meaningless” or “living beings are just machines”.
Wording matters: I can explain the same idea using different wording and get drastically different responses from my interlocutor.
I agree that “gears-level” is confusing to someone unfamiliar with the concept. Naming is hard. A better name could be “precise causal model”.
[Minor spoiler alert] I’ve been obsessed with Dune lately. I watched the movie and read the book and loved both. Dune contains many subtle elements of rationality and x-risks despite the overall mythological/religious theme. Here are my interpretations: the goal of the Bene Gesserit is to selectively breed a perfect Bayesian who can help humanity find the Golden Path. The Golden Path is the narrow set of futures that don’t result in an extinction event. The Dune world is mysteriously and powerfully seductive.
If I recall correctly, I was first introduced to the map-territory meme via LessWrong, and I’ve found it a useful idea in that it has helped me conceptualise the world and my place in it more clearly (as far as I can tell). I hear with great interest that you, too, have found this perspective insightful!
[The following are speculative ramblings.]
I wonder what the limits of map-territory convergence are and what those limits tell us about the limits of intelligence. Is complete convergence possible? Or is the limit set by computational irreducibility (the idea that some systems admit no predictive shortcut; you simply have to watch them unfold, step by step, to find out what they do)? Is the universe a map that perfectly reflects the territory (itself)? Or is the universe yet another map of a yet deeper reality? I guess these questions belong to the realm of metaphysics.
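To make the irreducibility idea concrete, here is a toy example (my illustration, not a claim from any particular source): Wolfram’s Rule 30 cellular automaton, whose centre column looks random and, as far as I know, has no known closed-form shortcut. The only general way to learn row n is to compute the n rows before it:

```python
# Rule 30 elementary cellular automaton. The centre column looks random,
# and as far as anyone knows there is no closed-form shortcut for it:
# to learn what row n looks like, you compute rows 0..n-1 one by one.
WIDTH, STEPS = 63, 30
ON = {(1, 0, 0), (0, 1, 1), (0, 1, 0), (0, 0, 1)}  # neighbourhoods that map to 1

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start from a single live cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [int((row[i - 1], row[i], row[(i + 1) % WIDTH]) in ON)
           for i in range(WIDTH)]
```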
I just came across Lenia, a continuous generalisation of Conway’s Game of Life. There is a video by Neat AI explaining and showcasing Lenia. Pretty cool!
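For the curious, here is roughly what one Lenia update looks like. This is a minimal toy sketch with assumed parameter values (the grid size, kernel radius, and growth parameters are mine), not Bert Chan’s reference implementation. Each step convolves a real-valued grid with a ring-shaped kernel and nudges every cell by a smooth growth function of the result, which is how Lenia generalises the Game of Life’s discrete birth/survival rules:

```python
import numpy as np

SIZE, R, DT = 64, 13, 0.1
MU, SIGMA = 0.15, 0.015  # growth-function parameters (assumed values)

# Ring-shaped kernel: a smooth bump that peaks at half the radius R.
y, x = np.mgrid[-R:R + 1, -R:R + 1]
dist = np.sqrt(x**2 + y**2) / R
kernel = np.zeros_like(dist)
ring = (dist > 0) & (dist < 1)
kernel[ring] = np.exp(4 - 1 / (dist[ring] * (1 - dist[ring])))
kernel /= kernel.sum()

def growth(u):
    # Bell-shaped growth: positive near MU, negative elsewhere.
    return 2 * np.exp(-((u - MU) ** 2) / (2 * SIGMA**2)) - 1

def step(grid):
    # Toroidal convolution via FFT, with the kernel centred at the origin.
    k = np.zeros_like(grid)
    k[:2 * R + 1, :2 * R + 1] = kernel
    k = np.roll(k, (-R, -R), axis=(0, 1))
    u = np.real(np.fft.ifft2(np.fft.fft2(grid) * np.fft.fft2(k)))
    return np.clip(grid + DT * growth(u), 0.0, 1.0)

rng = np.random.default_rng(0)
grid = rng.random((SIZE, SIZE))
for _ in range(100):
    grid = step(grid)
# A uniform random soup typically decays with these parameters; Lenia's
# famous "creatures" come from carefully chosen seed patterns.
print(f"mean activity after 100 steps: {grid.mean():.3f}")
```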
I would love to watch a livestream of a top AI researcher doing their job. I wish someone from MIRI would do that. It would be awesome to get a feel for what AI alignment research is actually like in practice.
On the mating habits of the orb-weaving spider:
These spiders are a bit unusual: females have two receptacles for storing sperm, and males have two sperm-delivery devices, called palps. Ordinarily the female will only allow the male to insert one palp at a time, but sometimes a male manages to force a copulation with a juvenile female, during which he inserts both of his palps into the female’s separate sperm-storage organs. If the male succeeds, something strange happens to him: his heart spontaneously stops beating and he dies in flagrante. This may be the ultimate mate-guarding tactic: because the male’s copulatory organs are inflated, it is harder for the female (or any other male) to dislodge the dead male, meaning that his lifeless body acts as a very effective mating plug. In species where males aren’t prepared to go to such great lengths to ensure that they sire the offspring, then the uncertainty over whether the offspring are definitely his acts as a powerful evolutionary disincentive to provide costly parental care for them.
Thanks for all the excellent writing on economic progress you’ve put out. I finished reading “Creating a Learning Society” by Joseph Stiglitz a few days ago, and I am writing a review of the book to share here on LessWrong. Your essays are providing me with many insights that I hope to take into account in my review :D
Premise: people are fundamentally motivated by the “status” rewarded to them by those around them.
I have experienced the phenomenon of demandingness described in your post, and you’ve elucidated it brilliantly. I frequent in-person EA events, and I can see status being rewarded according to impact, which is very different from how it’s typically rewarded in broader society. (This is not necessarily a bad thing.) The status hierarchy in EA communities goes something like this:
People who’ve dedicated their careers to effective causes. Or philosophers at Oxford.
People who facilitate people who’ve dedicated their careers to effective causes, e.g. research analysts.
People who donate 99% of their income to effective causes.
People who donate 98% of their income to effective causes.
...
People who donate 1% of their income to effective causes.
People who donate their time and money to ineffective causes.
People who don’t donate.
People who think altruism is bad.
This hierarchy is very “visible” within the in-person circles I frequent, enforced by a few core members. I recently convinced a non-EA friend to tag along, and after the event, they said, “I felt incredibly unwelcomed.” Within five minutes, one of the organisers had asked my friend, “What charities do you donate to?” My friend said, “I volunteer at a local charity, and my SO works in sexual health awareness.” After a bit of back-and-forth debate, the EA organiser looked disappointed, said “I’m confused”, and turned his back on my friend. [This is my vague recollection of what happened, not an exact description, and my friend had pre-existing anti-EA biases.]
Upholding the core principles of EA is necessary: an organisation that doesn’t uphold its particular principles, even at the expense of others, ceases to be that organisation. However, the thing about optimisation and effectiveness is that if we’re naively and greedily maximising, we’re probably doing it wrong. If we push people away from the cause by rewarding them with low status as soon as we meet them, we will not win many allies.
If we reward people with low status for not donating as much as others, we might cause them to halt their donations, quit our game, and instead play a different game in which they are rewarded with relatively more status.
I don’t know how to solve this problem either, and I think it is hard. We can only do so much to “design” culture and influence how status is rewarded within communities. Culture is mostly a thing that just happens due to many agents interacting in a world.
I watched an interview with Toby Ord a while back, and his response to one of the Q&A questions was fantastic: “Let us not get into criticising each other for working on the second most important thing.”
Extending this logic, let’s not get into criticising people for doing good. We can argue and debate how to do good better, but let’s not attack people for doing whatever good they can and are willing to do.
I have seen snide comments about Planned Parenthood floating around rationalist and EA communities, and I find them distasteful. Yes, donating to anti-malaria charities saves more lives. But again, the thing about optimisation is that if we are pushing people away from our cause by being parochial, then we’re probably doing a lousy job at optimising.