Tsuyoku Naritai! (I Want To Become Stronger)
In Orthodox Judaism there is a saying: “The previous generation is to the next one as angels are to men; the next generation is to the previous one as donkeys are to men.” This follows from the Orthodox Jewish belief that all Judaic law was given to Moses by God at Mount Sinai. After all, it’s not as if you could do an experiment to gain new halachic knowledge; the only way you can know is if someone tells you (who heard it from someone else, who heard it from God). Since there is no new source of information, it can only be degraded in transmission from generation to generation.
Thus, modern rabbis are not allowed to overrule ancient rabbis. Crawly things are ordinarily unkosher, but it is permissible to eat a worm found in an apple—the ancient rabbis believed the worm was spontaneously generated inside the apple, and therefore was part of the apple. A modern rabbi cannot say, “Yeah, well, the ancient rabbis knew diddly-squat about biology. Overruled!” A modern rabbi cannot possibly know a halachic principle the ancient rabbis did not, because how could the ancient rabbis have passed down the answer from Mount Sinai to him? Knowledge derives from authority, and therefore is only ever lost, not gained, as time passes.
When I was first exposed to the angels-and-donkeys proverb in (religious) elementary school, I was not old enough to be a full-blown atheist, but I still thought to myself: “Torah loses knowledge in every generation. Science gains knowledge with every generation. No matter where they started out, sooner or later science must surpass Torah.”
The most important thing is that there should be progress. So long as you keep moving forward you will reach your destination; but if you stop moving you will never reach it.
Tsuyoku naritai is Japanese. Tsuyoku is “strong”; naru is “becoming,” and the form naritai is “want to become.” Together it means, “I want to become stronger,” and it expresses a sentiment embodied more intensely in Japanese works than in any Western literature I’ve read. You might say it when expressing your determination to become a professional Go player—or after you lose an important match, but you haven’t given up—or after you win an important match, but you’re not a ninth-dan player yet—or after you’ve become the greatest Go player of all time, but you still think you can do better. That is tsuyoku naritai, the will to transcendence.
Each year on Yom Kippur, an Orthodox Jew recites a litany which begins Ashamnu, bagadnu, gazalnu, dibarnu dofi, and goes on through the entire Hebrew alphabet: We have acted shamefully, we have betrayed, we have stolen, we have slandered . . .
As you pronounce each word, you strike yourself over the heart in penitence. There’s no exemption whereby, if you manage to go without stealing all year long, you can skip the word gazalnu and strike yourself one less time. That would violate the community spirit of Yom Kippur, which is about confessing sins—not avoiding sins so that you have less to confess.
By the same token, the Ashamnu does not end, “But that was this year, and next year I will do better.”
The Ashamnu bears a remarkable resemblance to the notion that the way of rationality is to beat your fist against your heart and say, “We are all biased, we are all irrational, we are not fully informed, we are overconfident, we are poorly calibrated . . .”
Fine. Now tell me how you plan to become less biased, less irrational, more informed, less overconfident, better calibrated.
There is an old Jewish joke: During Yom Kippur, the rabbi is seized by a sudden wave of guilt, and prostrates himself and cries, “God, I am nothing before you!” The cantor is likewise seized by guilt, and cries, “God, I am nothing before you!” Seeing this, the janitor at the back of the synagogue prostrates himself and cries, “God, I am nothing before you!” And the rabbi nudges the cantor and whispers, “Look who thinks he’s nothing.”
Take no pride in your confession that you too are biased; do not glory in your self-awareness of your flaws. This is akin to the principle of not taking pride in confessing your ignorance; for if your ignorance is a source of pride to you, you may become loath to relinquish your ignorance when evidence comes knocking. Likewise with our flaws—we should not gloat over how self-aware we are for confessing them; the occasion for rejoicing is when we have a little less to confess.
Otherwise, when the one comes to us with a plan for correcting the bias, we will snarl, “Do you think to set yourself above us?” We will shake our heads sadly and say, “You must not be very self-aware.”
Never confess to me that you are just as flawed as I am unless you can tell me what you plan to do about it. Afterward you will still have plenty of flaws left, but that’s not the point; the important thing is to do better, to keep moving ahead, to take one more step forward. Tsuyoku naritai!
“Look who thinks he’s nothing”—funny. :) Perhaps a more general version of your point is: beware of taking pride in subgoal measures of accomplishment, if subgoals without the goal are worth little.
Actually I think I tend to do the opposite. I undervalue subgoals and then become unmotivated when I can’t reach the ultimate goal directly.
E.g. I’m trying to get published. Book written, check. Query letters written, check. Queries sent to agents, check. All these are valuable subgoals. But they don’t feel like progress, because I can’t check off the box that says “book published”.
I hope that you are not still struggling with this, but for anyone else in this situation: I would think that you need to change the way you set your goals. There is loads of advice out there on this topic, but there are a few rules I can recall off the top of my head:
“If you formulate a goal, make it concrete, achievable, and make the path clear and if possible decrease the steps required.” In your case, every one of the subgoals already had a lot of required actions, so the overarching goal of “publish a book” might be too broadly formulated.
“If at all possible don’t use external markers for your goals.” What apparently usually happens is that either you drop all your good behaviour once you cross the finish line, or your goal becomes/reveals itself to be unreachable and you feel like you can do nothing right (seriously, the extent to which this happens is incredible), etc.
“Focus more on the trajectory than on the goal itself.” Once you get there, you will want different things and what you have learned and acquired will just be normal. There is no permanent state of “achieving the goal”, there is the path there, and then the path past it.
Very roughly speaking.
All the best.
Now there’s a sentiment I can get behind. That’d make a nice hachimaki… http://www.jbox.com/SEARCH/zettai/1/
I sometimes wonder about that: As you move away from a point charge, the electric field falls off as 1/r^2. Infinitely long line charges fall off as 1/r, and infinite plates (with a uniform charge distribution) theoretically generate electric fields that are constant, with respect to distance from that plane. Though you are moving away from it, its influence on you doesn’t lessen.
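For reference, the three falloff behaviors mentioned are the standard electrostatics results (SI units; $q$ a point charge, $\lambda$ a linear charge density, $\sigma$ a surface charge density):

```latex
E_{\text{point}}(r) = \frac{q}{4\pi\varepsilon_0 r^{2}}, \qquad
E_{\text{line}}(r) = \frac{\lambda}{2\pi\varepsilon_0 r}, \qquad
E_{\text{plane}} = \frac{\sigma}{2\varepsilon_0}
```

Only the infinite uniformly charged plane gives a field with no dependence on $r$ at all, which is what makes it the natural image here for an influence that never lessens as you move away from it.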
Is irrational behavior the same way? One of the mechanisms that allows fortune-telling and astrology to “work” is the mind’s ability to recast perception to fit just about any description: “Well, I guess maybe I waaas a bit selfish, I could have donated more to that charity. Gosh, my horoscope sure is accurate!” I wonder if there is some degree of that as well with regards to irrational behavior. Of course I’m not suggesting that it doesn’t exist, or can’t be identified, but I’m looking at the far off case: behavior that cannot be identified as irrational and biased through any kind of mental algebra at all. I wonder if such a thing exists, and what can be done to achieve it.
That’s a great saying about the angels and donkeys. I’ve read that most ancient civilizations had the same kind of view of history. They did not have our notion of progress; rather, they saw mankind as having fallen from a primordial “golden age”, and heading pretty much straight downhill ever since. No doubt this was aided by the near-universal agreement among old people that the young generation just doesn’t measure up to how people were in the old days.
So if we go back to the “chronophone” thought experiment, Archimedes might have been spectacularly uninterested in information from the future (especially through such a garbled connection). Unlike today where we would assume that future civilizations would be sources of tremendous knowledge and wisdom, he would have imagined a future of near-bestial creatures who had long lost whatever vestiges of grace mankind had still retained in his age.
Archimedes had direct evidence of adding to the progress of useful knowledge over generations. Even in that age, scientists were an exception to the rule.
I can corroborate that. Indian Hindus believe that there are eons (longer) and numerous eras (shorter) consisting of 4 “yuga”s, during each of which humans generally become worse off… all great traits are part of the first yuga, and things go downhill to the last one (in which we exist, obviously). After each era, a “pralaya” takes place, destroying everything. Then everything starts afresh.
Sigh.
Though he might change his mind as we explained how to cure a whole bunch of diseases he thought were intractable.
Through a chronophone? Wouldn’t that just repeat the nonsense ancient doctors believed, and cures to diseases he already knows how to deal with?
OTOH, there was this accumulation of Talmud, with later commentaries continuing to be added, Mishnah, and on and on. So, one can argue that there was this degradation function as one moves further away from the original source, but this is presumably at least partly offset by the accumulation of the commentaries themselves. Do they accumulate more rapidly than the degradation occurs?
BTW, there is something similar in the debates over the various Islamic law codes, the various Shari’as. An issue is which of the reputed sayings of the Prophet Muhammed, collectively known as the Hadith, are to be accepted as genuine and therefore to serve as part of the foundation of a proper Shari’a (along with the Qur’an and some other things). The validity of a given saying was based on a chain of witnesses: Abdul heard it from Abdullah who heard it from Abu-Bakr who heard the Prophet, and so forth. Part of the argument is that the longer this chain of reputed witnesses is, the less reliable the supposed saying is as a part of the Hadith; indeed, some sayings are accepted in some Shari’as, while the stricter ones rule them out for having overly long chains of witnesses. The strictest of the Sunni Shari’as is the one accepted in Saudi Arabia, the Hanbali, which accepts only the Qur’an and a very small Hadith as bases for the law.
Hal: I’d like to see a cartoon of a timeline that goes from nothingness to probabilities to subatomic particles to … to humans … to AI controlled sentient galaxies … to discorporated particles floating around in a post heat-death universe...
...all claiming to miss the good old days.
I don’t think I would say that the “good old days” belief aided the “hell in a handbasket” belief; I think they are one and the same.
Barkley, the accumulation of Talmud was based on the theory that—I know this will sound strange, but bear with me—the younger rabbis were all simply writing down things that older rabbis had told them. In the Orthodox view the Talmud is the “Torah sheh b’al’Peh”, the Oral Torah, which was also given to Moses at Mount Sinai, and then transmitted verbally down through the generations until it was finally written down. All law in the Talmud is supposed to have been transmitted from Mount Sinai—there’s nowhere else that wisdom can come from. If there are disputes in the commentaries, then they’re both right, and the task of future generations is to figure out how they can both be right, because you can never say an older rabbi is wrong, because they’re closer to Mount Sinai than you. The fact that much of the law in the Mishna or Gemara is blatantly medieval or blatantly based on incorrect medieval beliefs is somehow just not thought about.
Some minor comments regarding Eliezer’s remark. The emphasis on non-contradiction of opinions in the Talmud and elsewhere is fairly recent. Maimonides for example was more than willing to say that statements in the Talmud were wrong when it came to factual issues. Also note that much of the Talmud was written before the medieval period (the Mishna dates to around 200 and the Gemara was completed around 600 or so, only very early into the medieval period).
The notion of the infallibility of the Talmud is fairly recent, gaining real force with the writings of the Maharal in the late 1500s. In fact, many Orthodox Jews don’t realize how recent that aspect of belief is. The belief in the infallible and non-contradictory nature of the Talmud has also been growing stronger in some respects. Among the ultra-Orthodox, they are starting to apply similar beliefs to their living or recently deceased leaders, and the chassidim have been doing something similar with their rebbes for about 200 years. Currently, there are major charedi leaders who have stated that mice can spontaneously generate because the classical sources say so. I have trouble thinking of a better example of how religion can result in serious misunderstandings about easily testable facts.
And speaking of bias, I find myself wanting to blame the belief in an infallible Talmud on fundamentalism-envy, but it just doesn’t fit the timeline.
Everyone claims these days that canonical “literalism” is a recent phenomenon. It’s said about Islam especially and now this comment claims it about Judaism. I’ve also heard this about the Greek religions (there’s a book called ‘Did the Greeks Believe in Their Myths?’). Is this really true? Or is this some kind of post-modern thing where everyone is trying to prove how much “wiser” our ancestors were, as if they weren’t literal idiots.
I think the common sense intuition is that literalism&fundamentalism must have been more prevalent in the past, but I’m willing to update if anyone can demonstrate some kind of trend in any of these religions.
The case of Maimonides is well-discussed in Persecution and the Art of Writing by Leo Strauss. Maimonides considers it bad to teach the secrets of the Talmud to people who aren’t worthy and thinks that the Talmud contains wrong statements to mislead naive readers.
Issues of secret knowledge and mechanisms to keep knowledge from getting picked up by people are found in many spiritual traditions.
There a key distinction between esoteric and exoteric works. Reading esoteric works literally usually means to treat them as being exoteric.
If you look at someone like Richard Bandler, who founded NLP, Bandler often tries to teach esoterically, whereby he’s not explicit about what he wants to teach. If you understand how he teaches, then you won’t take a story about a personal experience that Bandler recounts as literal but as a vehicle for the transmission of esoteric knowledge.
When Maimonides wanted to teach esoterically, he also argued that the esoteric knowledge is more important than the literal truth. Maimonides likely made a lot of teaching decisions that differ from those that Bandler makes, but both consider esoteric knowledge to be important.
People who value exoteric knowledge like Greek philosophers or modern scientists tend to be a lot more literal than people who value esoteric knowledge. Especially at the level of teachers. That doesn’t necessarily mean that the average lay-person understands that certain claims about knowledge aren’t to be taken literally.
I think conflating literalism and fundamentalism here is probably a bad idea. I am not an expert in the early history of the Abrahamic religions, but it seems likely that textual literalism’s gone in and out of style over the several thousand years of Abrahamic history, just as many other aspects of interpretation have.
Fundamentalism is a different story. There have been several movements purporting to return to the fundamentals of religion, but in current use the word generally refers only to the most recent crop of movements, which share certain characteristics because they share a common origin: they are reactions against modernity and against the emerging universal culture. It stands to reason that these characteristics would be new (at least in this form), because prior to them there was no modernity or universal culture to react against.
I think it’s more useful to speak of fundamentalism as an attitude, and if you speak about it this way, there is nothing new about it, but it always exists in opposition to something different—e.g. the 1st century Sadducees were fundamentalists, and the Pharisees, who tended to interpret their religion in the light of Greek philosophy, were mostly opposite to this.
I don’t know of any broader, larger trends. It is worth noting here that the Rabbis of the Talmud themselves thought that the prior texts (especially the Torah itself) were infallible, so it seems that part of what might be happening is that over time, more and more gets put into the very-holy-text category.
Also, it seems important to distinguish here between being unquestionably correct and being literal. In a variety of religions this becomes an important distinction, and in practice literalism is often sacrificed past a certain point in order to preserve the correctness of a claim. Note too that in many religious traditions, the most literal strands try to argue that what they are doing is not literalism but something more sophisticated. For example, among conservative Protestants it isn’t uncommon to claim that they are not reading texts literally but rather using the “historical-grammatical method.”
The Talmud, from what little I know, may be a poor example of this. In fact, last I checked, the Torah came from a combination of contradictory texts, and tradition comes close to admitting this with the story of Ezra.
I think most people in ancient times held all sorts of beliefs about the world which we would call “literalist” if someone held them today, but they rarely if ever believed in the total accuracy of one source. They believed gods made the world because that seemed like a good explanation at the time. They may have believed in the efficacy of sacrifice, because why wouldn’t you want sacrifices made to you?
“If there are disputes in the commentaries, then they’re both right” I know this is a derailment but I wish somebody had told me that this is how it was supposed to work! I was so confused when I was trying to learn. It didn’t seem to make any sense. Now it makes even less sense.
How would you compare this “tsuyoku naritai” viewpoint with the majoritarian perspective? The majoritarian view is skeptical about the possibility of overcoming bias on an individual basis, similar to the position you criticize of being “loath to relinquish ignorance” on the basis of evidence and argumentation. But majoritarianism is not purely fatalistic, in that it offers an alternative strategy for acquiring truth: seeing what other people think.
I think majoritarianism is ultimately opposed to tsuyoku naritai, because it prevents us from ever advancing beyond what the majority believes. We rely upon others to do knowledge innovation for us, waiting for the whole society to, for example, believe in evolution, or understand calculus, before we will do so.
Interesting post. Judaism has managed to survive for thousands of years, and maybe part of that is a high copying fidelity for its memes. It seems there are two ways for cultures to ensure long-term survival—extreme rigidity (as in this case) or extreme adaptability (which is better at learning but may not be able to preserve group identity).
Not sure what that has to do with overcoming bias, except to suggest that it may be in a culture’s interest to maintain their biases.
And what’s weird is that when Judaism historically encountered the Enlightenment, it resulted in people who are notably smart and adaptable as individuals and as a group.
On the one hand, Judaism (and other traditional religions) accumulate experience that is post-dated to the origin of the religion. On the other hand, when parts of a traditional religion admit that experience can accumulate, the fact that change is actually possible frequently turns into a belief that change is possible at will and you eventually wind up with a “trendier-than-thou” religion.
You can compare this phenomenon to fiat currencies. Gold (or whatever the standard happens to be) might be an arbitrary sign of value, but it’s a mistake to think that currency can be changed at will.
Hertzlinger, I would summarize your comment as “Once you’ve got religion, you’ve got a choice of two different ways to screw up.” It’s not as if there’s anything good about a religion persisting for centuries. Imagine if a cult of 17th-century physicists were still running around.
Finney, I do indeed think there’s a conflict between tsuyoku naritai and majoritarianism. Suppose everyone were a majoritarian—information would degrade from generation to generation, as the “average belief” changed a little in transmission. (Where did all that information come from in the first place? Not from majoritarian reasoning.) Further, if you’re a majoritarian, once you achieve the level of the average, you hit a brick wall—you’re not allowed to aspire to anything above that. Hopefully the reasons for my strong negative reaction to majoritarianism are now clearer.
I don’t mean to hijack this thread but I’ll offer a couple of ideas about majoritarianism. It is no doubt true that if everyone were a majoritarian, majoritarians would have to do things a little differently (perhaps asking people to publish their estimates of what they would believe if they weren’t following the advice of the crowd). But at present I don’t think this is a major problem, so majoritarianism still has promise as a strategy to improve one’s accuracy, as demanded by the tsuyoku naritai philosophy.
As far as being unable to beat the average, again this is true, but keep in mind that for many kinds of problems the average is really very good. For example, in “guess how many beans are in the jar” type problems, it is customary for the mean guess to be far better than the median guess, often in the top few percentiles. Few strategies can offer such high degrees of accuracy. Although not all problems can be quantified in this way, the point is that the majoritarian “average” does not have to mean mediocre.
Finney, I do indeed think there’s a conflict between tsuyoku naritai and majoritarianism.
I don’t think that’s automatic. If you truly believe that the mean opinion is in general more reliable than any you could construct on your own, then moving toward that mean is something that makes you better. And if you just take majoritarianism as a guide, rather than a dogma, there are even fewer problems.
The fact that if everyone did this, it would be a disaster may be an example of what I called moral freeloading—something that may be good for an individual to do, alone, but that would be very dangerous for everyone to imitate.
Here’s a citation for my 2nd claim above about the accuracy of the mean:
www.leggmason.com/funds/knowledge/mauboussin/ExplainingWisdom.pdf
“We now turn to the second type of problem, estimating a state. Here, only one person knows the answer and none of the problem solvers do. A classic example of this problem is asking a group to guess the number of jelly beans in a jar. We have been doing this experiment for over a decade at Columbia Business School, and the collective answer has proven remarkably accurate in most trials....
“Our 2007 jelly bean results illustrate the point. The average guess of the class was 1,151 while the actual number of beans was 1,116, a 3.1 percent error. Of the 73 estimates, only two were better than the average… There’s nothing unique about 2007; the results are the same year after year.”
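The quoted effect is easy to reproduce in a toy model. The sketch below is a minimal simulation, not the Columbia data: it assumes each guesser reports the true count plus independent, unbiased noise, with made-up numbers for the noise level, and checks how the crowd mean compares with typical individuals.

```python
import random

random.seed(0)
TRUE_COUNT = 1116   # jar's actual bean count, from the quoted 2007 experiment
N_GUESSERS = 73     # class size from the quoted results

# Each guesser reports the true count plus independent, unbiased noise.
# The noise level (300 beans) is an arbitrary assumption for illustration.
guesses = [TRUE_COUNT + random.gauss(0, 300) for _ in range(N_GUESSERS)]

mean_guess = sum(guesses) / N_GUESSERS
mean_error = abs(mean_guess - TRUE_COUNT)
avg_individual_error = sum(abs(g - TRUE_COUNT) for g in guesses) / N_GUESSERS

# Count how many individuals out-guessed the crowd average.
beat_mean = sum(1 for g in guesses if abs(g - TRUE_COUNT) < mean_error)

print(f"crowd mean error: {mean_error:.0f} beans")
print(f"average individual error: {avg_individual_error:.0f} beans")
print(f"individuals better than the mean: {beat_mean} of {N_GUESSERS}")
```

With unbiased noise, averaging shrinks the error roughly by a factor of the square root of the group size, so only a small minority of individuals beat the mean—exactly the pattern the quoted results describe.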
Ah, Finney made the same point I was making, and cunningly posted it first… ^_^
One small point: If we truly want to become stronger, then we should always test our abilities against reality—we should go out on a limb and make specific predictions and then see how they pan out, rather than retreating into “it’s complicated, so let’s just conclude that we’re not qualified to decide.” That’s an error I’ve often slipped into, in fact...
There seem to be lots of parallels between majoritarianism and the efficient market hypothesis in finance. Under the efficient market hypothesis, it is entirely possible that a liquidly traded asset is mispriced (just as the majority view may be very wrong); however, on average, I maximise my chances of being right by accepting the current price of the asset as the correct price. Therefore the fact that a stock halved in price over a year is not a valid criticism of the efficient market theory, just as the fact that the majority has been proven wrong is not a valid criticism of majoritarianism. The problem of freeloading is inherent in the efficient market theory: if everyone accepts it, then the market is no longer efficient. But there are enough people with justified reasons not to invest on an efficient-market basis to ensure that this does not happen, as discussed below.
Some examples of justified reasons for differing from the majority view in the case of efficient markets are: 1) Under the efficient market hypothesis, it is accepted that people with inside information can have successful trading strategies which deliver predictably above-average returns. In the case of majoritarianism, we would be justified in holding a different view from the majority if we had inside information the majority did not have (for instance, we know the colour of someone’s eyes when we know the majority do not). 2) A professional money manager of a non-index mutual fund is also justified in differing from the majority view, since this is what he is paid to do. The parallel for majoritarianism would be scientists or academics, who are paid to advance new theories; they receive compensation for differing from the majority, at least in their own area of speciality. 3) The final case where you might differ from the efficient market approach is when you gain some entertainment utility from investing (i.e., as a form of gambling—if I am honest, this is why I invest in single stocks). In the case of majoritarianism, the parallel is that you can choose to hold a view different from the majority’s if it brings you entertainment utility which outweighs the costs of holding a non-efficient view (religious views might be in this category).
Actually, realizing this parallel causes me to be even more dubious of the efficient market hypothesis.
As compelling as it may sound when you say it, this line of reasoning plainly doesn’t work for scientific truth… so why should it work in finance?
Behavioral finance gives us plenty of reasons to think that whole markets can remain radically inefficient for long periods of time. What this means for the individual investor, I’m not sure. But what it means for the efficient market hypothesis? Death.
The thing to keep in mind is that a perfectly efficient market is like an ideal gas. It’s a useful tool for thinking about what’s likely to happen if you go messing with variables, but it basically never actually exists in nature.
We use markets in real life not because they’re perfect, but because, on average, they get a more correct answer more often and for less effort than any other system we know of.
Could there be something better? Of course. We just haven’t discovered it yet.
Are there situations where, in hindsight, we can see that some other system would have performed better than a market? Yup. Hindsight’s awesome that way.
Can we predict well in advance when to use some other system? Not particularly. And if we could then that ability would become part of the market, so the market would still be likely to perform better when used globally.
So yeah, markets can remain horribly inefficient for a long time under some circumstances. Just remember that the same things that keep a market inefficient will likely also cause mistakes by other methods of calculation. So when you switch away from the market you’re basically going double-or-nothing and the odds generally aren’t in your favor.
There’s no particular reason that constant improvement needs to surpass a fixed point. In theory, see Achilles and the tortoise. In practice, maybe you can’t slice things infinitely fine (or at least you can’t detect progress when you do), but still you could go on for a very long time incrementally improving military practice in the Americas while, without breakthroughs to bronze and/or cavalry, remaining solidly stuck behind Eurasia. More science fictionally, people living beneath the clouds of Venus could go for a long time incrementally improving their knowledge of the universe before catching up with Babylonian astronomy, and if a prophet from Earth brought them a holy book of astronomy, it could remain a revelation for a very long time. Or if the Bible had included a prophecy referring to “after three cities are destroyed with weapons made of metals of weight 235 and 239,” it would’ve remained utterly opaque through centuries of rapid incremental progress.
I think a related argument would be more convincing: collect incidents when people thought they knew something about the real world from a religious tradition, and it conflicted with what the scientists were coming to believe, and compute a batting average. If the batting average is not remarkably high for the religious side, some skepticism about its reliable truth is called for, or at least some diplomatic dodge like “how to go to heaven, not how the heavens go.”
The batting average could suffer from selection bias if the summaries tend to be written by one side. But even if so, it’s sorta interesting indirect evidence if all the summaries tend to be written by one side. And I dimly remember that there are pro-Islam writers who go on about the scientific things that their religious tradition got right, so I don’t think there’s any iron sociological law that keeps the religious side from writing up such summaries.
Eliezer makes a mistake (a major one, in fact) in his understanding of how Jewish law is passed down over the generations. The mistake is quite a common one among people who have not studied the history of Orthodox Jewish philosophy in depth; indeed, I have met many Rabbis with 40 or 50 years of experience and little or no knowledge of this topic, so this is not an attack on Eliezer. While I am only an Orthodox Rabbinical and Talmudic law student (I hope to be ordained within a couple of years), in addition to being a college undergrad, my personal interests are the history of Orthodox Jewish philosophy and reconciling Jewish law and philosophy with modern science and philosophy.
The correct understanding (in simple form), as Orthodox Jews like myself believe, is that the Pentateuch was given by God to Moses at Mt. Sinai. As many of the laws in the Pentateuch are vague, God gave Moses the “Torah SheBa’al Peh,” the “oral law,” explaining what was to be included in these laws. The Rabbis of later generations lost the reasons for many of these oral laws; moreover, according to Jewish legend, many of these laws were never given reasons in the first place. As such, these later Rabbis (of roughly 1500–2200 years ago) often tried to re-formulate the reasons based on their own knowledge. This does not mean that the laws are based on incorrect assumptions, but merely that what we think is the reason for a law is not actually the reason.
Eliezer mixes this up with the concept that later Rabbis do not argue in law with earlier Rabbis. That concept rests on a completely different line of reasoning: a Jewish belief that properly understanding and applying Jewish law requires not just intelligence and the ability to think logically, but also piety and trust in God. In other words, trust in God and piety also play a role in legal decisions. Combined with another Jewish belief that each passing generation sees a general decrease in piety and trust in God across the board, this means that those making legal decisions now have less of a key component of the decision-making, while we have no way of knowing who has the greater intelligence and reasoning skills. As such, later Rabbis do not argue with earlier Rabbis. Additionally, this concept is not set in stone, and there are a number of exceptions to the rule. Of course, one can argue that my background biases me toward an irrational belief, and perhaps that is true.
Great discussion! Regarding majoritarianism and markets, they are both specific judgment aggregation mechanisms with specific domains of application. We need a general theory of judgment aggregation but I don’t know if there are any under development.
In a purely speculative market (i.e. no consumption, just looking to maximize return) prices reflect majoritarian averages, weighted by endowment. Of course endowments change over time based on how good or lucky an investor is, so there is some intrinsic reputation effect. Also, investors can go bankrupt, which is an extreme reputation effect. If investors reproduce you can get a pretty “smart” system, but I’m sure it has systematic limitations—the need to understand those limitations is a good example of why we need a general theory of judgment aggregation.
I’d like to see an iterated jelly bean guessing game, with the individual guesses weighted by previous accuracy of each individual. I bet the results would quickly get better than just a flat average. Note that (unlike economies) there’s no fixed quantity of “weight” here. Conserved exchanges are not a necessary part of this kind of aggregation.
On the other hand, if you let individuals see each other’s guesses, I bet accuracy would get worse. (This is more similar to markets.) The problem is that there’d be herding effects, which are individually rational (for guessers trying to maximize their score) but which on average reduce the overall quality of judgment. This is an intrinsic problem with markets. Maybe we should see this as an example of Eliezer’s point in another post about marginal zero-sum competition.
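The iterated, accuracy-weighted game proposed above can be sketched in a few lines. Everything here is invented for illustration—the guessers, their skill levels, and the weighting rule (inverse of each guesser's average past error) are assumptions, not a claim about how real aggregation mechanisms behave.

```python
import random

random.seed(1)
TRUE = 1000.0    # hypothetical bean count; all numbers here are made up
N = 50           # guessers
CAL_ROUNDS = 20  # calibration rounds used to estimate each guesser's accuracy
TRIALS = 200     # fresh rounds over which the two aggregates are compared

# Guessers differ in skill: each has a fixed personal noise level.
skill = [random.uniform(30, 400) for _ in range(N)]

def guess(i):
    return TRUE + random.gauss(0, skill[i])

# Calibration: track each guesser's average absolute error.
avg_err = [sum(abs(guess(i) - TRUE) for _ in range(CAL_ROUNDS)) / CAL_ROUNDS
           for i in range(N)]
weights = [1.0 / e for e in avg_err]  # historically accurate -> heavier weight

# Compare a flat average against the accuracy-weighted average.
flat_total = weighted_total = 0.0
for _ in range(TRIALS):
    g = [guess(i) for i in range(N)]
    flat = sum(g) / N
    weighted = sum(w * x for w, x in zip(weights, g)) / sum(weights)
    flat_total += abs(flat - TRUE)
    weighted_total += abs(weighted - TRUE)

print(f"flat average error:     {flat_total / TRIALS:.1f}")
print(f"weighted average error: {weighted_total / TRIALS:.1f}")
```

In this toy setup the weighted average reliably beats the flat one, supporting the intuition above—though note the guessers here never see each other's answers, so the herding problem is deliberately left out of the model.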
The discussion about the “dissipation” of knowledge from generation to generation (or of piety and trust in God, as ZH says) reminds me of Elizabeth Eisenstein’s history of the transition to printing. Manual copying (on average) reduces the accuracy of manuscripts. Printing (on average) increases the accuracy, because printers can keep the type made up into pages, and can fix errors as they are found. Thus a type-set manuscript becomes a (more or less reliable) nexus for the accumulation of increasingly reliable judgments.
Eisenstein’s account has been questioned, but as far as I’ve seen, the issues that have been raised really don’t undercut her basic point.
Of course digital reproduction pushes this a lot further. (Cue the usual story about self-correcting web processes.) But I don’t know of any really thorough analysis of the dynamics of error in different communication media.
The Judeo-Christian world is full of so many contrasting views that it really amazes me sometimes.
Take Mormonism, for example. Its authoritarian structure is perhaps even more strict (and certainly more hierarchical) than what you’ve described in Orthodox Judaism, yet it has this one core doctrine that is viewed as heretical in most of the rest of the Christian world: the idea that man is destined to become like God, literally. Indeed, Mormons hold that God himself was once a lowly man who exerted enough “tsuyoku naritai” to overcome his own sins and rise. As such, Mormons believe that Jesus’s saying “Be ye therefore perfect, even as your Father which is in heaven is perfect” is a literal commandment.
Not saying that religion should inform our views here, simply that the Mormon perspective seems to align with the overall direction of this post, and that it is somewhat striking that such a view can arise from the same common religious ancestry.
That’s a truly bizarre belief. If god is perfect and benevolent, why didn’t he give clear laws in the first place, instead of forcing humans to run in circles trying to interpret them?
“Torah loses knowledge in every generation. Science gains knowledge with every generation. No matter where they started out, sooner or later science must surpass Torah.”
That’s not strictly true, of course. If the difference in knowledge shrinks more slowly for each generation, then the Torah could conceivably still be the #1 source of knowledge for eternity.
It’s a good job young Eliezer hadn’t done any courses in Analysis.
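The analysis point can be made concrete with entirely invented numbers: a quantity that grows every generation need not overtake a shrinking one, as long as its growth increments form a convergent series.

```python
# Toy numbers (invented for illustration): "Torah" starts at 100 units of
# knowledge and loses a little each generation; "science" starts at 0 and
# always gains -- but its gains shrink geometrically, so its total is
# bounded above by 40 / (1 - 0.5) = 80 and never reaches Torah's level.
torah, science = 100.0, 0.0
gain = 40.0
for generation in range(1000):
    torah -= 0.001    # slow, steady loss
    science += gain   # progress every single generation...
    gain *= 0.5       # ...but by ever-smaller steps

print(f"science: {science:.4f}")
print(f"torah:   {torah:.4f}")
print(science < torah)
```

After a thousand generations of uninterrupted progress, science sits just under its asymptote of 80 while Torah still holds 99—a monotone increasing sequence bounded above never surpasses its bound.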
I think tsuyoku naritai actually works as an effective motto for transhumanism as well:
“I am flawed, but I will overcome my flaws. To each of my failings, I say tsuyoku naritai. To each flaw I have and to each flaw I will ever develop, I say tsuyoku naritai. To the flaws that are part of being human, I say tsuyoku naritai. If that means I must abandon what it means to be merely human, I say tsuyoku naritai. As long as I am imperfect, I will continue to say tsuyoku naritai!”
That’s true, but perhaps a little unfair. I always understood the fact that everyone confesses to everything as a simple necessity to anonymise the guilty. Under a system where people only admit to things they have actually done, if there’s been one murder in the community this year, unsolved, then when the ‘We have murdered’ line comes, everyone is bound to be listening very carefully.
As I was taught, that’s also a little unfair, or at least oversimplified. That everyone confesses to everything is not just primitive anonymisation, it’s a declaration of communal responsibility. It’s supposed to be deliberate encouragement to take responsibility for the actions of your community as a whole, not just your own.
I’ve always wondered what “communal responsibility” really means. It’s one thing to ask people to encourage their friends to act morally, or to go on the record now and then as opposing a perceived injustice. But your community could be flawed despite your best efforts to fix it—it doesn’t really seem fair to expect someone with finite resources to answer for a hundred other people’s behavior.
I’m not sure how fairness enters into it.
If there are N of us in a leaky rowboat, we have a communal responsibility to bail the water out. If there are N of us in an airtight container that only holds enough air to sustain (N-1) lives before the container opens, we have a communal responsibility to decide how many and which of us dies. If there are N of us and we have 2N yummy pies, we have a communal responsibility to distribute the pies in some fashion.
A communal responsibility is just like an individual responsibility, except it applies to a group.
Is that fair? Beats me. Mostly I don’t think the question is well-formed.
One possibility:
If you’re a member of a group, and the way non-members will treat you individually is largely informed by their perception (stereotype) of that group, then you want that group to have a good reputation rather than a bad one. If anything that a group member does reflects on the group, then each person should (in their own best interests) do things that improve rather than worsen that reputation.
A moral symmetry (like the Prisoner’s Dilemma or Stag Hunt games) exists, because everyone else in your group is in the same situation wrt you, that you are wrt them. If you do something that benefits you personally but harms the group’s reputation, everyone else in the group suffers; the same is true if another group member does so.
This sort of reasoning is often applied to (and by) minority groups who suffer from others’ stereotyping.
It is also a favorite of concern trolls.
I know exactly what you are talking about man.
I’m on a quest to claim absolute victory on every front too.
tsuyoku naritai!
how do you pronounce this?
su-yo-coo nar-ee-tie?
I’m going to make it my warcry whenever I need to energize myself.
This is close.
I now have a custom bracelet that says “Tsuyoku Naritai” on one side, and “Kaizen” on the other. I’m using it in place of a Sikh Kara, or a WWJD bracelet.
What does ‘Kaizen’ mean?
Luke
“Improvement” is probably the literal translation, but it’s used to mean the “Japanese business philosophy of continuous improvement”, the idea of getting better by continuously making many small steps.
Wiktionary:
Google translate works for the romanized word too (it will give you the kanji automatically), but only when “translating from Japanese”; it won’t detect romanized Japanese by default.
In Roman characters or in Kanji? I’d be interested in an aesthetically pleasing way to write it.
強くなりたい is how it is written in Japanese.
Is there a way that uses fewer characters? (Presumably more complex ones). Apologies for my lack of knowledge.
No. These are verb endings and can’t be written as Kanji. (Well, you could use 強く成りたい, but that’s weird and doesn’t buy you anything.)
Edit: Maybe use a different expression? 一生懸命 (isshou kenmei) is closely related and denser.
Thanks. :)
For context, I had the idea of making an artistic representation of the phrase as a symbolic reminder (partly inspired by Jeffreyssai’s symbols ). So ideally I’d use as dense a representation as possible.
You could always just go with 強 - it just represents “strength” (in Chinese / Japanese) but if you’re looking for a symbolic reminder it should be sufficient, and a single Kanji is often used for symbolic purposes.
I have it in roman characters. Kanji would be more pleasing, but harder to have created.
A practicing Jewish friend of mine challenged me on the anecdote about worms in apples, and I couldn’t Google an independent reference. Can anyone help me verify it?
Chabad has a total (though late) ban on eating figs http://www.shturem.net/index.php?section=news&id=12572 because the fruit is frequented by small worms which cannot be distinguished from the fruit’s flesh. This fact became known from biology and agriculture studies.
It’s easy to find thousands of discussions online for the somewhat different case of “worms” in fish. This is a good one; like many good ones, it’s not exactly in English. I am not sure how clear the terms are from context, but your friend should know them.
While I understand the point you’re trying to make—and agree with it—I think your Yom Kippur analogy is flawed. The idea behind the litany is that we’re praying for forgiveness for the sins of all of mankind. Even if you, personally, have not stolen, there’s someone in the world who has, and you’re praying for him too. That’s why it’s worded in the plural (“we have stolen,” as opposed to “I have stolen”).
Just sayin’.
As far as I learned, it is community-wide and not humanity-wide. Judaism is rather a tribal religion in this matter.
Regardless, there is a good reason for the plural pronoun.
“Torah loses knowledge in every generation. Science gains knowledge with every generation. No matter where they started out, sooner or later science must surpass Torah.”
Mazel Tov!
Encountering this post has made me a better person in so many ways. Thank you, Eliezer.
I made a video compilation of Japanese songs that include the words “Tsuyoku naritai”.
https://www.youtube.com/watch?v=CtcXiT6An-U
I wasn’t really convinced that this concept was really present in Japanese culture before but I suppose I am, now.
That demonstrates that Japanese culture has the phrase. Not that Japanese culture has the phrase with the same meaning as Eliezer uses.
And even if Japanese culture has it, there’s a difference between having it as a fictional thing and having it as a concept commonly applied to actual people.
Also, in this context, remember that fictional scenarios are often set up so that individuals drastically influence the result, where real-life scenarios are not. People like reading about Voldemort defeated by Harry Potter, not by 200 wizards doing routine policing missions thorough enough that they happen to find all the horcruxes, followed by massive military backup for the squad of identically trained men raiding his compound. That’s why fictional characters often have something like tsuyoku naritai; it doesn’t carry over to the real world.
By the way:
Obviously Eliezer was not familiar with the concept “asymptote”.
When he was a kid at a religious elementary school.
When he was an adult who posted that, and clearly did not mean “this is some stupid thing I thought as a kid because I didn’t know better”.
Actually he says that he wasn’t a proper atheist at the time which basically means that he didn’t really think clearly about the issue.
I don’t think the thought itself is stupid. It just doesn’t fit the complexity of the situation.
For what it’s worth, I would enjoy reading about a squad of trained wizards raiding Voldemort’s compound ^^.
I’m very biased toward your ideas. My practical approach to debiasing is to change the environment I’m exposed to. I think I know what to do about it, but it isn’t easy.
I can’t find any reference for the saying at the beginning. Can someone help?
Maybe this?
https://en.wikipedia.org/wiki/Yeridat_ha-dorot
I just googled “judaism saying generation”.
Please correct me if I’m wrong, but even in Judaism the (widely accepted) lesson is to improve as an individual, even if the overall trend is a decline. In another phrasing—the individual should try to diminish the generational degradation of virtue as much as possible. And the penance comes inevitably because we will inevitably sin SOME, because we’re imperfect humans. Even so, a very real danger remains of taking this penance as a goal in its own right, and forgetting that we primarily need to improve. All that said, I enthusiastically committed to “Tsuyoku Naritai”, and to be as Science rather than as Torah :)
That saying is interesting, however, a different interpretation comes to my mind.
The previous generation is like angels because they wield economic power. Hence, their will is not short of The Great Will.
The next generation is like donkeys because they carry the burden. After all, all the waste generated as the previous generation wields angelic rights is for the next generation to carry and dispose.