possibly, but is that not basically a No True Rationalist trick? I do not see a way for us to truly check that, unless we capture LW rationalists one by one and test them, but even then, what is preventing you from claiming: “eh, maybe this particular person is not a Real Rationalist but a Nerdy Hollywood Rationalist, but the others are the real deal,” ad nauseam?
I definitely agree that people who consider themselves Rationalists believe themselves to be Actual Rationalists, not Hollywood Rationalists. This of course leads us to the much-analyzed question of “why aren’t Rationalists winning?” The answers I see are that either Rationality does not lead to Winning, or the Rationalists aren’t Actual Rationalists (yet, or at all, or at least not sufficiently).
A major case in point is that Rationalists have mostly failed to convince the world about the threat posed by unrestricted AI. This means that either Rationalists are wrong about the AI threat, or bad at convincing. The second option is more likely, I think, and I wager the reason Rationalists have a hard time convincing the general public is not that the logic of the argument is faulty, but that the delivery is based on clunky rhetoric and shows no attempt at well-engineered charisma.
Going Durden
The stevia-drink issue is likely psychological in nature, not blood-sugar related. You would have to be tricked by a third party into drinking a stevia soda unknowingly, and conversely, be tricked into drinking sugary soda while thinking it is stevia-based; then compare the results.
In my own diet journey I noticed a similar trend: knowingly eating or drinking substitutes for things I like makes my subconscious throw a tantrum and demand the real thing anyway. I think it is more about self-resentment over being tricked than about the actual taste or content.
Just giving up the thing completely, both the real thing and the substitute, hurts more at first, but makes it easier to form a habit (for example, replacing soda not with stevia soda but with plain water). Some minds find the purposeful “asceticism” of a diet easier than the “pretend abundance” of replacement products.
some counter-arguments, in no particular order of importance:
Verbal communication is quite often more succinct, because it is easier to exhaust the vocal medium, and you can see in real time your interlocutors getting bored with your rambling.
Verbal communication allows far more nuance carried by tone, body language, and social situation, and thus often delivers the message most clearly. I find it most useful when discussing Ethics: everyone is a clinical utilitarian when typing, but far more humanistic when they see the other person’s facial reaction to their words.
Rhetoric and charisma do not carry well over text. Most Rationalists consider this beneficial, right up until the point where they need to explain something to, or convince, non-Rationalists, and completely lack the tools to do so. Avoiding verbal rhetoric and not training your in-person charisma is a surefire way to become very unconvincing to the general audience: case in point, every attempt to explain AI Risk to “muggles” by somewhat introverted and dry-talking Rationalists.
Related to point 3: conversational charisma is the main tool used by human males to woo women. By not practicing conversational charisma, Rationalists ensure they will breed themselves out of existence.
Most child-rearing and education is oral communication. Without practicing it, the Rationalist will not make a good parent or teacher, and thus, from a civilizational perspective, has squandered his rationality.
Rubberducking: saying things out loud quite often leads to epiphanies, especially negative ones (“wow, my cherished idea sounds really dumb when I say it out loud.”). Writing down and then reading your own ideas often leads to an emotional feedback loop in which you reinforce your own conviction rather than nit-picking your own idea. This leads to...
Oral communication avoids the risk of Rabbit-Holes. When writing, uninterrupted, it is easy to accidentally pick a logical mistake as the crux of your whole argument, and waste hours exploring it. In conversation, your partner/opponent can nip that in the bud.
Op-Sec. Oral conversation is far less likely to get you in trouble for the things you say, unless you are being recorded. Meanwhile, a text-based conversation, especially on a social platform, is a Sword of Damocles always hanging over your head. Say the wrong thing, and at worst a dozen people will consider you an ass. Write and post the wrong thing, and you might, decades from now, lose your job, your social standing, or even your life. An innocent comment today might get you canceled in 2040, or mulched by a vengeful Basilisk in 2045.
There is also the fact that we already are, effectively, controlling our own genetic pressures through culture and civilisation. Our culture largely influences our partner choice, and thus, breeding. Our medical sciences, agriculture, and urbanization take pressure off survival. So the eugenic/dysgenic/paragenic process is in effect anyway, just… stupidly.
Some simple examples:
- agriculture pushes us to be lactose-tolerant and carbohydrate-dependent
- art and media dictate our sexual and mate choices
- education creates pressure for intelligence, but a very specific kind
- in the long run, contraception methods might pressure a further evolution of our reproductive systems (e.g., sooner or later, women with extremely unlikely mutations that allow them to “beat” the contraceptive pill will outbreed those who do not share such a mutation)
I’m particularly interested in how our sexual culture effectively works as a secondary “blind goddess of eugenics”. For likely the first time since the Neolithic (or possibly since forever), we have reached an age in which women are free to choose their male partners based on physical attraction and mental kinship, not social pressure and the need for survival. Assuming this trend continues, and we do not relapse into social conservatism, I expect a rather sudden (by evolutionary standards) shift in male selection, and thus in sexual dimorphism.
On top of that, with the rise of affordable in-vitro fertilization, we are effectively practicing conscious Eugenics, one specifically geared towards the needs of women and couples rather than society at large. We are entering an age in which the human male is not strictly necessary for breeding, or for his offspring’s survival, and thus, with the exception of the rare super-specimens who are sperm donors, men no longer fall under any evolutionary pressure, and do not really need to exist.
The decades between the moment when in-vitro becomes the norm, and the moment when artificial wombs become the norm, will be very interesting indeed.
Such communities are then easily pulverized by communities who value strong groupthink and appeal to authority, and thus are more easily whipped into a frenzy.
I mostly agree with you, though I noticed that if a job is mostly made of constantly changing tasks that are new and dissimilar to previous tasks, there is some kind of efficiency problem up the pipeline. It’s the old Janitor Problem in a different guise: a janitor at a building needs to perform a thousand small dissimilar tasks, inefficiently and often in impractical order, because the building itself was inefficiently designed. That is why we still haven’t found a way to automate a janitor: for that we would need to redesign the very concept of a “building”, and for that we would need to optimize how we build infrastructure, and for that we would have to redesign our cities from scratch… etc., until you find out we would need to build an entire new civilization from the ground up, just to replace one janitor with a robot.
it still hints at a gross inefficiency in the system, just one not easily fixed.
There are also some mental health issues among people who know about AI safety concerns, but are not researchers themselves and are not even remotely capable of helping or contributing in a meaningful way.
I, for one, learned about the severity of the AI threat only after my second child was born. Given the rather gloomy predictions for the future, I’m concerned for their safety, but there does not seem to be anything I can do to ensure they would be OK once the Singularity hits. It feels like I brought my kids to life just in time for the apocalypse to hit them when they are still young adults at best, and, irrationally, I cannot stop thinking that I’m thus responsible for their future suffering.
I noticed I also recall conversations, podcasts, etc. better if I was doing some kind of manual task at the same time (like woodcarving, or just doing the dishes). My interpretation is that focusing on a conversation while immobile is under-stimulating, and thus causes the mind to wander. If one is walking, or doing something physical, that is enough physical stimulation to let the mind focus on the conversation in a “railroaded” fashion, without self-distraction.
Even deeper: it feels great to match your walking/activity pace to the emotional message of the conversation. I suppose it triggers the same reaction as ASMR, perhaps because it lets us “act out” our emotional reaction to the words, without inappropriate gesticulation etc.
Further weak evidence that walking helps with conversational cognition:
- plenty of people, without any cultural connection between them, pick up the habit of pacing around when on the phone.
- it was a well-known technique among ancient Greek philosophers and scholars to take their students on a walk, or even a longer trip, while discussing abstract subjects. Apparently it worked very well and was done this way for centuries.
- humans evolved to be semi-nomadic persistence hunters. Walking around all day is the natural state we evolved for; sitting down for hours is not.
OTOH, I have a hunch that the kinds of jobs that select against the “speed-run gamer” mentality are more likely to be inefficient, or even outright bullshit jobs. In essence, speed-running is optimization, and jobs that cannot handle an optimizer likely have an error in the process, an error in the goal-choice, or both.
In the admittedly small sample of cases I witnessed where a workplace could not handle optimization, it was because the “work” was a cover for some nefarious shenanigans, built for inefficiency for political reasons, or created for status games instead of useful work/profit.
Aside from the obvious reasons already mentioned, I wonder if the reason for the regress was not partially related to compound inbreeding. In most cases when technological regress happens, it tends to coincide with a genetic bottleneck as well, which I have a hunch would make the problems worse.
It’s in the ballpark of 50k. I support a family of 4 on roughly 10k a year. I can save about 1k-2k a year if we live on a very, very tight budget. It would thus take me a century to pay for cryonics just for my immediate family, if the prices do not fall quickly enough.
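For what it’s worth, the “century” figure checks out; here is the back-of-the-napkin math as a quick sketch, using only the figures from the comment above (~50k per person, family of 4, 1k-2k saved per year):

```python
# Figures taken from the comment above (all approximate):
cost_per_person = 50_000   # ballpark cryonics cost per person
family_size = 4            # immediate family
total_cost = cost_per_person * family_size  # 200,000 for the family

# Years of saving required at the pessimistic and optimistic rates:
for savings_per_year in (1_000, 2_000):
    years = total_cost / savings_per_year
    print(f"at {savings_per_year}/yr: {years:.0f} years")
# at 1000/yr: 200 years
# at 2000/yr: 100 years
```

So even at the optimistic 2k/yr savings rate, the immediate family alone takes a full century, exactly as stated.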
In Rand’s defense, she does define the terms “altruism” and “selfishness” in her works, at length, from every possible angle, ad nauseam. It’s impossible to read more than one page of her work and still confuse her definitions for the standard ones.
The confusion usually comes up through a game of telephone, when people opposed to Objectivism comment on things written by fans of Rand, without ever actually reading the source material.
Every human being is selfish, but most are also altruistic some of the time
What, in your estimation, would be a difference between actual altruism, and “altruism” done for the sake of selfish emotional fuzzies?
Let’s say I pass a beggar on the street. If I give him a dollar because he needs it, it’s altruism. If I give him a dollar because I want to feel like I’m a Good, Charitable Guy, and genuinely enjoy his thanks, then it’s selfishness.
About the only true altruism I can think of that is not essentially a form of egoism is when you absolutely HATE the fact that you act charitably, and get zero pleasure from it, not even masochistically. If you so much as get a single second of warm fuzz in your heart from your charitable act, that’s just roundabout selfishness. If you pay the beggar $1 and then feel emotionally better, he is essentially your low-budget therapist, and you just performed a completely selfish act of capitalist exchange.
I truly hope the cost of cryo falls rapidly in the next few years. A back-of-the-napkin calculation I did shows that if I wanted to pay in advance for an option to cryopreserve my children (should they ever need it), I would have to save money for over 20 years, skipping every life luxury for them and myself. It would be a bizarre life in which we would live like ascetic monks who spend most of their lives preparing to die and achieve the Afterlife. Uncannily like religion.
If, aside from paying for cryo for my kids, I also wanted to pay for my own, my SO’s, my parents’, my brother’s, etc., I would need to be effectively immortal just to put in enough work-hours.
Cryo might end up being the absolute pinnacle of elitist technology, because if you are not rich and Western enough, you are unlikely to ever afford it, and thus destined not only to die, but to watch your loved ones die as well, while average middle-class people from the US or Western EU just chuck their sick loved ones into a freezer with near certainty of their eventual survival and health.
The religions had it all wrong. In order to achieve Immortality in the Afterlife, you do not need to be good, or without sin, or pious; you just need to be able to save around 30-80k. If you can’t, well, sucks to be you. Should have thought of it before you decided to be born poor.
One thing I don’t see explored enough, and which could possibly bridge the gap between Rationality and Winning, is Rationality for Dummies.
The Rationalist community is oversaturated with academic nerds, borderline geniuses, actual geniuses, and STEM people whose intellectual level and knowledge base are borderline transhuman.
In order for Rationality and Winning to be reconciled with minimum loss, we need bare-bones, simplified, kindergarten-level Rationality lessons based on the simplest, most relatable real-life examples. We need Rationality for Dummies. We need Explain the Methods Like I’m Five, that would actually work for actual 5-year-olds.
True, Objective Rationality Methods should be applicable whether you are an AI researcher with a PhD or someone too young/stupid to tie their own shoes. Sufficiently advanced knowledge and IQ can just brute-force winning solutions despite irrationality. It would be more enlightening if we equipped a child/village idiot with simple Methods and judged their successes on this metric alone: lacking the intellectual capacity or theoretical knowledge, they would need to achieve winning by step-by-step application of the Methods, rather than by jumps of intuition resulting from unconscious knowledge and cranial processing power.
Only once we have solid Methods of Rationality that we can teach to kids from toddler age, and expand on until they are Rational Adults, can we say for certain which Rationalist ideas lead to Winning and which do not.
One of the main ways I managed to instill good habits in myself is both to open optimal paths to good habits and to close optimal paths to sub-optimal ones. The trick is to make a good habit easier than it is annoying, and a bad habit more annoying than it is preferable.
Examples:
Hydration: I simply place a 2l water bottle by the apartment door every evening. It becomes impossible for me to leave the house without picking it up, and once it is in my hand, I’m much more likely to drink from it and take it with me than to forget it.
Exercise: I bought dumbbells to work out with, but consciously made no place to put them. I just place them on my gaming chair, so it becomes impossible to use the PC without lifting the dumbbells. And the moment they are literally in my hands, it is easier to just pump a few curls than not.
Exercise/commute: I’m trying to unlearn driving everywhere, and to bike whenever I can. I just place my car keys in my bike’s frame pouch. This way I cannot leave the house without touching my bike, and once I do, it’s easier to just hop on it and ride away.
Diet: I always struggled with weight, and the one “simple trick” that actually worked for me was brushing my teeth ASAP after dinner. Since my teeth are already brushed, and it would be annoying to do so again, I’m much less likely to snack after dinner. If the urge to snack is really strong, I just use some mouthwash, which not only makes me even more disinclined to soil my super-clean teeth, but also means no snack tastes good when my mouth is super minty/mentholly.
Waking early: the path to the sub-optimal habit is to hit snooze on the alarm and go back to sleep. Breaking the habit was as easy as placing the alarm clock in the bathroom, so I have to walk across the entire house to turn it off, and once I do, I’m already where I need to be to brush my teeth and shave, so I might as well do so.
The reason these work is that all those habits are relatively weak, and a small tweak to how annoying they are makes all the difference. It’s basically weaponizing my own laziness/procrastination against itself. The goal is to make myself spend extra energy walking around and looking for the things needed for my bad habits, while the things needed for the good habits are always in my path.
My take on some of the items on this list:
Lack of Intelligence: Very likely
Slow take-off AI: Very likely
Self-Supervised Learning AI: Likely
Bounded Intelligence AI: Likely
Far far away AI: Likely
Personal Assistant AI: Close to 100% certain
Oracle AI: Likely
Sandboxed Virtual World AI: Likely
The Age of Em: Borderline certain
Multipolar Cohabitation: Borderline certain
Neuralink AI: Borderline certain
Human Simulation AI: Likely
Virtual zoo-keeper AI: Likely
Coherent Extrapolated Volition AI: Likely
Partly aligned AI: Very likely
Transparent Corrigible AI: Borderline certain
In total, I think the most probable scenario is a very, very slow take-off, not a Singularity, because AGI would be hampered by Lack of Intelligence and slowed down by countless corrections, sandboxing, and the ubiquity of LAI. In effect, by the time we have something approaching true AGI, we will long have been a culture of cyborgs and LAIs, and the arrival of AGI will be less a Singularity than the fuzzy pinnacle of a long, hard, bumpy, and mostly uneventful process.
In fact, I would claim that we will never be at a point where we can agree: “yep, AGI is finally achieved.” I rather envision us tinkering with AI, making it painstakingly more powerful and efficient, in tiny incremental steps, until we are content to say: “eh, this Artificial Intelligence is General enough, I guess.”
In my view, the true danger does not come from achieving AGI and it turning on us, but rather from achieving a stupid, buggy, yet powerful LAI, giving it too much access, and having it trigger a global catastrophe by accident, not out of conscious malice.
It’s less “Superhuman Intelligence got access to the nuclear codes and decided to wipe us out” and more “Dumb-as-a-brick LAI got access to the nuclear codes and wiped us out due to a simple coding error”.
One problem I see with your insect alien example, which also, to a much greater degree, influences human attractiveness, is that there are not just four, or five, or a dozen physical attractiveness factors, but hundreds of them. And each of these factors influences other factors in different ways, for example:
- height on a man is considered attractive
- low body fat on a man is considered attractive, but
- a combination of too much height and too little body fat would be unattractive.
My take is that there are hundreds, even thousands of traits that fall under “Flawlessness”, but they play very weirdly against each other, and thus Appeal is born: a personal subconscious opinion on which sets of traits one likes most.
What is also missing from your analysis is Beauty-Appeal vs. Sex-Appeal. Some traits trigger our aesthetic appreciation, and some trigger our raw sexual appetite, and not only are these not the same traits, but sometimes they are opposite ones.
I would define Sex-Appeal as a set of traits, physical and behavioral, that make the person seem:
- relatively easy to seduce (for me), also known as DTF (down to fuck)
- suggesting they would be good at sex
- suggesting their body would feel nice to touch
- vaguely related to strong Secondary Sexual Characteristics
Meanwhile, Beauty-Appeal is a set of purely aesthetic Flawlessness traits that do not correspond to the above points at all, but show symmetry, the golden ratio, an aesthetically striking color palette, etc. They make a person a perfect model, someone you would love to take pictures of, paint, or draw, rather than get raunchy with.
I would even take it further: many of the Beauty-Appeal traits take away from Sex-Appeal, because some of them are signifiers of innocence, youth, or vaguely stand-offish perfection that make the person seem like they would not be DTF. We subconsciously disengage from thoughts of having sex with such a person, regardless of whether these traits truly signify anything about their actual willingness.
Some examples:
- Melodic, high female voice: beauty; raspy, low-pitched female voice: sexy
- Flawless skin: beauty; tattoos and “cool” scars: sexy
- Hairless male chest: beauty; hirsute male chest: sexy
- Perfectly sized medium breasts: beauty; oversized breasts: sexy
What I’m getting at is that while the evidence for the oldest agriculture is from around 12k-10k years ago, this is not the same as saying that your particular ancestors come from a line that used agriculture for a solid 10k years straight (unless you are from very specific Anatolian or Iraqi genetic lines).
It could easily be the case that your ancestors had been eating grain and dairy for 500 generations, or maybe just 10 generations or less.
One example of what I’m talking about is lactose tolerance, which allows one to consume dairy. It is a mutation that is only roughly 8k years old, and that’s only if you are of Anatolian/Turkish ancestry.
Another would be protein madness, which rarely happens among Sub-Polar people, but affects Europeans who moved North.
Similarly, our genetic predisposition towards certain reactions to gluten, high-protein diet, high fructose diet, even alcohol vary wildly.
In most cases, when we think of a “modern” diet and lifestyle, we are basically thinking of the industrialized, grain-and-dairy Anglo-Saxon diet and a life of small caloric surplus over a relatively modest caloric expenditure. This affects you differently if you are indeed of Anglo-Saxon ancestry and your ancestors had been eating cheese and bread for at least 6k years, while slowly reducing the amount of labor needed to produce it.
It’s going to hit you differently if your ancestors were Sub-Polar peoples who subsisted on a high-fat/zero-carb diet, or came from a tropical jungle where they subsisted on high-sugar fruit, low-fat meat, and minimal labor to procure it.
Without AGI, people keep dying at historical rates (following US actuarial tables)
I’m not entirely convinced this is the case. There are several possible pathways towards life extension including, but not limited to, the use of CRISPR, stem cells, and most importantly finding a way to curb free radicals, which seem to be the main culprits of just about every aging process. It is possible that we will “bridge” towards radical life extension long before the arrival of AGI.