For those who may not have seen and would like to make a prediction (on Metaculus; current uniform median community prediction is 15%)
Will WHO declare H5N1 a Public Health Emergency of International Concern before 2024?
So why should you dress nice, even given this challenge? Because dressing nice makes your vibes better and people treat you better and are more willing to accommodate your requests.
This is a compelling argument to me, as someone who also had a fuzzy belief that “dressing nicely was a type of bullshit signaling game” (though perhaps with less conviction than you had).
It was around the time (several years ago) that I saw someone dressed like me (pants tucked into the socks and shirt tucked into the pants) that I had the realization that I would probably benefit from dressing better.
This realization was compelling enough to stoke me into initial action, which took the form of testing out new clothing that had passed a rough vibe check from my family and friends, who dress well and seem to care a decent amount about how they dress; it was not strong enough, however, to keep me trying out new clothing.
I found that all the clothing I was trying on was too physically uncomfortable for me. There was also a minor psychological component as well, which I can only describe as a feeling of mismatch between my self-perception and the expected perception people would have of the clothed object before me in the mirror.
As a result of these “failed” experiments, I opted to wear flannels and make sure that the color of my socks matched the color of my pants; to me, this intervention was enough to get me above a vague status threshold and did not require much effort. With very few exceptions, I have not deviated from this dress code.
I cannot recall if I observed a difference in how I was treated after making this change, which occurred several years ago.
Thank you for writing this post, Gordon. After reading and bookmarking it, I think I am marginally more likely to again attempt to dress better in the near-term future.
Does anyone here have any granular takes on what GPT-4's multimodality might mean for the public's adoption of LLMs and perception of AI development? Additionally, does anyone have any forecasts for (1) when this year (if at all) OpenAI will permit image output and (2) when a GPT model will have video input and output capabilities?
...GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs)...
it very much depends on where the user came from
Can you provide any further detail here, i.e., be more specific about origin-stratified retention rates? (I would appreciate this, even if it might require some additional search effort.)
Summary: Introduction (I introduce this shortform series), Year 0 for Human History (I discuss when years for humanity should begin to be counted)
This shortform post marks the beginning of me trying to share on LessWrong some of the thoughts and notes I generate each day.
I suspect that every “thoughts and notes” shortform I write will contain a brief summary of its content at the start, and there will very likely be days where I post multiple shortforms of this nature, hence the (X) after the date.
As for the year in the date on these posts, I want to use something other than the Gregorian calendar’s current year. Moreover, I want to better capture the time of origin for a key moment in human history, such as the origin of agriculture, writing, or permanent settlement. The rest of this shortform consists of some notes on this topic.
In 2019, after I watched the Kurzgesagt—In a Nutshell video A New History for Humanity – The Human Era (2016), I opted to change the year in the date in my journal entries from 2019 to 12019. This Kurzgesagt video describes the idea that different choices for “year 0” for the “human era” result in different perceptions of human history.
Regarding this claim, I generally agree. If "year 0" for humanity began when the first anatomically modern humans appeared, then the current year would be ~202022, and if "year 0" began when the first nuclear weapon was deployed, the "human era" would be only 77 years old. These scenarios allocate my attention to very different areas, with the former placing it on the thickness and mysteries of what we today call "prehistory" and the latter focusing it on the rapid progress and dangers characteristic of modernity.
The Kurzgesagt video explores the idea of setting "year 0" to 12000 years ago (the 10th millennium BC), which is apparently around the time the first large-scale human construction project seems to have taken place. Having 12000 years ago be "year 0" means that, when the current year is being considered, more attention would likely be allocated to the emergence of widespread agriculture, writing, and intensive construction of settlements and cities than is currently allocated.
Some notes for the preceding paragraph:
Agriculture seems to have started roughly 12k years ago (see History of agriculture).
Agriculture began independently in different parts of the globe, and included a diverse range of taxa. At least eleven separate regions of the Old and New World were involved as independent centers of origin. The development of agriculture about 12,000 years ago changed the way humans lived. They switched from nomadic hunter-gatherer lifestyles to permanent settlements and farming.[1]
Wild grains were collected and eaten from at least 105,000 years ago.[2] However, domestication did not occur until much later. The earliest evidence of small-scale cultivation of edible grasses is from around 21,000 BC with the Ohalo II people on the shores of the Sea of Galilee.
Following the emergence of agriculture, construction and architectural practices became more complex, leading to larger projects and settlements (see History of construction and Neolithic architecture)
The Neolithic, also known as the New Stone Age, was a time period roughly from 9000 BC to 5000 BC, named because it was the last period of the Stone Age before metalworking began.
Neolithic architecture refers to structures encompassing housing and shelter from approximately 10,000 to 2,000 BC, the Neolithic period.
Architectural advances are an important part of the Neolithic period (10,000-2000 BC), during which some of the major innovations of human history occurred. The domestication of plants and animals, for example, led to both new economics and a new relationship between people and the world, an increase in community size and permanence, a massive development of material culture, and new social and ritual solutions to enable people to live together in these communities.
The oldest known surviving manmade building is Göbekli Tepe, which was made between 12k and 10k years ago (this is the structure alluded to in the Kurzgesagt video I mentioned earlier).
Located in southern Turkey. The tell includes two phases of use, believed to be of a social or ritual nature by site discoverer and excavator Klaus Schmidt, dating back to the 10th–8th millennium BC. The structure is 300 m in diameter and 15 m high.
Writing systems are believed to have emerged independently of each other, with the oldest instance of writing appearing in Mesopotamia potentially as early as 3400 BCE.
However, the discovery of the scripts of ancient Mesoamerica, far away from Middle Eastern sources, proved that writing had been invented more than once. Scholars now recognize that writing may have independently developed in at least four ancient civilizations: Mesopotamia (between 3400 and 3100 BCE), Egypt (around 3250 BCE),[4][5][2] China (1200 BCE),[6] and lowland areas of Southern Mexico and Guatemala (by 500 BCE).[7]
Given that these historical developments I have outlined above seem very valuable to consider in the context of modern civilizational progress, I've decided to take "year 0" to be 12000 years ago. The official name for this calendar system is actually the Holocene calendar, which was developed by Cesare Emiliani in 1993. The current year in the Holocene calendar is 12022 HE. Below are two excerpts, on the benefits and accuracy respectively, from the Holocene calendar's Wikipedia page:
Human Era proponents claim that it makes for easier geological, archaeological, dendrochronological, anthropological and historical dating, as well as that it bases its epoch on an event more universally relevant than the birth of Jesus. All key dates in human history can then be listed using a simple increasing date scale with smaller dates always occurring before larger dates. Another gain is that the Holocene Era starts before the other calendar eras, so it could be useful for the comparison and conversion of dates from different calendars.
When Emiliani discussed the calendar in a follow-up article in 1994, he mentioned that there was no agreement on the date of the start of the Holocene epoch, with estimates at the time ranging between 12,700 and 10,970 years BP.[5] Since then, scientists have improved their understanding of the Holocene on the evidence of ice cores and can now more accurately date its beginning. A consensus view was formally adopted by the IUGS in 2013, placing its start at 11,700 years before 2000 (9701 BC), about 300 years more recent than the epoch of the Holocene calendar.[6]
So, why is the year on this shortform 0012022 and not just 12022? There are two reasons for this. The first is that I would like for myself to think more deeply and frequently about my own future and about humanity’s long-term future.
An organization developed around the idea of thinking about and safeguarding humanity’s future is the Long Now Foundation (LNF), which most LWers have likely heard of. This is its description:
The Long Now Foundation
is a nonprofit established in 01996 to foster long-term thinking.
Our work encourages imagination at the timescale of civilization — the next and last 10,000 years —
a timespan we call the long now.
The LNF's founding year, 1996, has a 0 appended to the front, indicating that the timeframe under consideration (10k years) is slowly being reached, one year at a time.
I aim to do a similar thing but believe that the timescale of 10k years is too short, so I instead opt for 1 million years, given that 1 million years is roughly the base rate for hominin species survival duration. It is also very interesting to imagine what humanity will be doing (should they persist) 1 million years following the start of the agricultural revolution. So, 12022 becomes 0012022.
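The two conventions described above (the Holocene calendar's +10,000-year offset and zero-padding against a 1-million-year frame) can be sketched in a few lines of Python; the function name is my own, not anything official, and it only handles years CE:

```python
# Sketch of the date conventions discussed above: the Holocene calendar
# adds 10,000 to the Gregorian year (for years CE), and zero-padding to
# seven digits frames the year against a 1-million-year timescale.

def holocene_year(gregorian_year: int) -> str:
    """Convert a Gregorian year (CE) to a 7-digit Holocene-era string."""
    return str(gregorian_year + 10000).zfill(7)

print(holocene_year(2022))  # "0012022"
print(holocene_year(1996))  # "0011996" (cf. the LNF's 5-digit "01996")
```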
From An upper bound for the background rate of human extinction (Snyder-Beattie et al., 2019)
Snyder-Beattie, Andrew E., Toby Ord, and Michael B. Bonsall. “An upper bound for the background rate of human extinction.” Scientific reports 9, no. 1 (2019): 1-9.
Hominin survival times. Next, we evaluate whether the upper bound is consistent with the broader hominin fossil record. There is strong evidence that Homo erectus lasted over 1.7 Myr and Homo habilis lasted 700 kyr [21], indicating that our own species’ track record of survival exceeding 200 kyr is not unique within our genus. Fossil record data indicate that the median hominin temporal range is about 620 kyr, and after accounting for sample bias in the fossil record this estimate rises to 970 kyr [22] . Although it is notable that the hominin lineage seems to have a higher extinction rate than those typical of mammals, these values are still consistent with our upper bound. It is perhaps also notable that some hominin species were likely driven to extinction by our own lineage [34], suggesting an early form of anthropogenic extinction risk.
I will close this shortform post here, but definitely want to parse out my thoughts concerning humanity’s future more in subsequent posts, and enjoyed writing this first post.
I have (what may be) a simple question—please forgive my ignorance: Roughly speaking, how complex is this capability, i.e. writing Quines? Perhaps stated differently, how surprising is this feat? Thank you for posting about / bringing attention to this.
My suggestions regarding the epistemics of the original post are fairly in line with the content in your first paragraph. I think allocating decision weight in proportion to the expected impact different scenarios have on your life is the correct approach. Generating scenarios and forecasting their likelihood is difficult, and there is also a great deal of uncertainty about how you should change your behavior in light of these scenarios. I think that making peace with the outcomes of disastrous scenarios that you or humanity cannot avoid is a strong action-path for processing uncontrollable scenarios.

As for scenarios that you can prepare for, such as the effects of climate change, shallow AI, embryo selection / gene-editing, and forms of gradual technological progress, among other things, perhaps determining what you value and want if you could only live / live comfortably for the next 5, 10, 15, 20, 30, etc. years might be a useful exercise, since each of these scenarios (e.g., only living 5 more years vs. only living 10 more years vs. only 5 more years of global business-as-usual) might lead you to take different actions.

I am in a similar decision-boat as you, as I believe that in the coming years the nature of human operations in the world will change significantly and on many fronts. I am in my early 20s, I have been doing some remote work / research in the areas of forecasting and ML, want to make contributions to AI Safety, want to have children with my partner (in around 6 years), do not know where I would like to live, do not know what my investment behaviors should be, and do not know what proportion of my time should be spent doing such things as reading, programming, and exercising. A useful heuristic for me has been to worry less.
I think moving away from people and living closer to the wilderness have benefitted me as well; the location I am currently in seems robust to climate change and mass exoduses from cities (should they ever occur), has few natural disasters, has good air quality, is generally peaceful and quiet, and is agriculturally robust with sources of water. Perhaps finding a location or set of habits in line with "what I hoped to retire into / do in a few years or what I've always desired for myself" might make for a strong remainder-of-life / remainder-of-business-as-usual, whichever you attach more weight to.
Has anyone here considered working on the following?:
https://www.super-linear.org/prize?recordId=recT1AQw4H7prmDE8
$500 prize pool for creating an accurate, comprehensive, and amusing visual map of the AGI Safety ecosystem, similar to XKCD’s map of online communities or Scott Alexander’s map of the rationalist community.
Payout will be $400 to the map which plex thinks is highest quality, $75 to second place, $25 to third. The competition will end one month after the first acceptable map is submitted, as judged by plex.
Resources, advice, conditions:
This is a partial list of items which might make sense to include.
You are advised to iterate, initially posting a low-effort sketch and getting feedback from others in your network, then plex.
You may create sub-prizes on Bountied Rationality for the best improvements to your map (if you borrow ideas from other group’s public bounties and win this prize you must cover the costs of their bounty payouts).
You may use DALL-E 2 or other image generation AIs to generate visual elements.
You may collaborate with others and split the prize, but agree internally on roles and financial division, and distribute it yourselves.
You can use logos as fair-use, under editorial/educational provision.
You can scale items based on approximate employee count (when findable), Alexa rank (when available), number of followers (when available) or wild guess (otherwise).
You agree to release the map for public use.
I asked about FLI’s map in this question and it received some traction. I might go ahead and try this, starting with FLI’s map and expanding off of it.
I have been reading content from LW sporadically for the last several years; only recently, though, did I find myself visiting here several times per day, and I have made an account given my heightened presence.
From what I can tell, I am in a fairly similar position to Jozdien, and am also looking for some advice.
I am graduating with a B.A. in Neuroscience and Mathematics this January. My current desire is to find remote work (this is important to me) that involves one or more of: [machine learning, mathematics, statistics, global priorities research].
In the spirit of the post The topic is not the content, I would like to spend my time (the order is arbitrary) doing at least some of the following: discussing research with highly motivated individuals; conducting research on machine learning theory, specifically relating to NN efficiency and learnability; writing literature reviews on cause areas; developing computational models and creating web-scraped datasets to measure the extent of a problem or the efficacy of a potential solution; and recommending courses of action based on the assessments generated from the previous items.
Generally, my skill set and current desires lead me to believe that I will find advancing the capabilities of machine learning systems, quantifying and defining problems afflicting humans, and synthesizing research literature to inform action all fulfilling, and that I will be effective in working on these things as well. My first question: how should I proceed with satisfying my desires, i.e., what steps should I take to determine whether I enjoy machine learning research more than global priorities research, or vice versa?
It is my plan to attend graduate school for one of [machine learning, optimization, computer science] at some point in life (my estimate is around the age of 27-30), but I would first like to experiment with working at an EA-affiliated organization (global priorities research) or in industry doing machine learning research. I am aware that it is difficult to get a decent research position without a Master's or PhD, but I believe it is still worth trying for. I have worked on research projects in computational neuroscience/chemistry for one company and three different professors at my school, but none of these projects turned into publications. This summer, I am at a research internship and am about to submit my research on ensemble learning for splice site prediction for review in the journal Bioinformatics; I am 70% confident that this work will get published, with me as the first author. Additionally, my advisor said he'd be willing to work with me to publish a dataset of 5,000 images I've taken of various fossils from my collection. While this work is not in machine learning theory, it increases my capacity for being hired and is helping me refine my competence as a researcher / scientist.
Several weeks ago, I applied to Open Philanthropy’s Research Fellow position, which is a line of work I would love doing and would likely be effective at. They will contact me with updates on or before August 4th, and I anticipate that I will not be given the several follow-up test assignments OpenPhil uses to evaluate its candidates, provided that their current Research Fellows have more advanced degrees and more experience with the social sciences than I do. I have not yet applied to any organizations whose focus is machine learning, but will likely begin doing so during this coming November. This brings me to my final questions: What can I do to increase my capacity for being hired by an organization whose focus is global priorities research? Also, which organizations or institutions might be a good fit for both my skills in computational modeling and machine learning and my desire to conduct global priorities research?
Any other advice is welcome, especially advice of the form “You can better prioritize / evaluate your desires by doing [x]”, “You seem to have [x] problem in your style of thought / reasoning, which may be assuaged by reading [y] and then thinking about [z]”, or “You should look into work on [x], you might like it given your desire to optimize/measure/model things”. Thank you, live well.
This entire post reminded me of this section from Human Compatible, especially the section I’ve put in bold:
“There are some limits to what AI can provide. The pies of land and raw materials are not infinite, so there cannot be unlimited population growth and not everyone will have a mansion in a private park. (This will eventually necessitate mining elsewhere in the solar system and constructing artificial habitats in space; but I promised not to talk about science fiction.) The pie of pride is also finite: only 1 percent of people can be in the top 1 percent on any given metric. If human happiness requires being in the top 1 percent, then 99 percent of humans are going to be unhappy, even when the bottom 1 percent has an objectively splendid lifestyle. It will be important, then, for our cultures to gradually down-weight pride and envy as central elements of perceived self-worth.”
In scenarios where transformative AI can perform nearly all research or reasoning tasks for humanity, my pride will be hurt to some degree. I also believe that I will not be in the 1% of humans still in work, perhaps overseeing the AI, and I find this prospect somewhat bleak, though I imagine that the severity of this sentiment would wane with time, especially if my life and the circumstances for humanity were otherwise great as a result of the AI.
The first point of your response calms me somewhat. Focusing more in the near future on the baselines (my body, health, friends, family, etc.) would probably be good preparation for a future where AI advances the state of human affairs to the point where humans are not needed for reasoning or research tasks.
If there are any paper reading clubs out there that ask the presenter to replicate the results without looking at the author’s code, I would love to join
This is something that I would be interested in as well. I've been attempting to reproduce MQTransformer: Multi-Horizon Forecasts with Context Dependent and Feedback-Aware Attention from scratch, but I am finding it difficult, partially due to my present lack of experience with reproducing DL papers. The code for MQTransformer is not available, at least to my knowledge. Also, there are several other papers which use LSTM or Transformer architectures for forecasting that I hope to reproduce and/or employ on Metaculus API data in the coming few months. If reproducing ML papers from scratch and replicating their results (especially DL for forecasting) sounds interesting to anyone (perhaps I could publish these reproductions with additional tests in ReScience C), please DM me, as I would be willing to collaborate.
Nice, I didn’t know OpenPhil had calibration training.
It is difficult to use SPIES for the calibration training: I kept running out of time when using my implementation in Python. To still compare the methods, I copied some questions and gave both a confidence interval and a SPIES estimate. Here are the results; I've only included 5 questions, but from what I've done, it seems SPIES helps me narrow my 80% confidence intervals.
1. In which year was the US Open decided for the first time by ‘sudden death’?
CI: 1900-2000
SPIES: 1938-2000 : 1900-1924 16.54%; 1925-1948 24.63%; 1949-1972 29.41%; 1973-1996 29.41%
Actual Value: 1990
2. In what year did Emerson Fittipaldi first win the World Championship?
CI: 1910-2010
SPIES: 1939-2010 : 1910-1935 18.18%; 1936-1960 11.36%; 1961-1985 36.36%; 1986-2010 34.09%
Actual Value: 1972
3. In what year was rayon first produced in the United States?
CI: 1780-2005
SPIES: 1836-1996 : 1780-1836 16.28%; 1837-1892 27.91%; 1893-1948 27.91%; 1949-2005 27.91%
Actual Value: 1910
4. When was the first Winter Olympics held?
CI: 1880-1980
SPIES: 1914-1980 : 1880-1905 13.04%; 1906-1930 21.74%; 1931-1955 26.09%; 1956-1980 39.13%
Actual Value: 1924
5. In which year did Frankie Goes to Hollywood form?
CI: 1910-2000
SPIES: 1938-2000 : 1910-1932 15.0%; 1933-1954 20.0%; 1955-1976 30.0%; 1977-2000 35.0%
Actual Value: 1980
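For anyone curious, here is a minimal sketch of how an 80% interval can be read off from SPIES-style bin estimates; this is my own reconstruction, not the exact implementation referenced above, and it assumes probability is uniform within each bin. The bins are taken from question 1:

```python
# Derive a central coverage interval from SPIES-style bin probabilities,
# assuming probability mass is spread uniformly within each bin.

def spies_interval(bins, coverage=0.8):
    """bins: list of (lo, hi, prob) covering the full plausible range.
    Returns the central `coverage` interval implied by the bins."""
    total = sum(p for _, _, p in bins)
    tail = (1 - coverage) / 2  # probability mass cut from each tail
    return (_quantile(bins, total, tail),
            _quantile(bins, total, 1 - tail))

def _quantile(bins, total, q):
    target = q * total
    cum = 0.0
    for lo, hi, p in bins:
        if cum + p >= target:
            # interpolate linearly within the bin containing the quantile
            frac = (target - cum) / p
            return lo + frac * (hi - lo)
        cum += p
    return bins[-1][1]

# Question 1: "In which year was the US Open decided by sudden death?"
bins = [(1900, 1924, 16.54), (1925, 1948, 24.63),
        (1949, 1972, 29.41), (1973, 1996, 29.41)]
print(spies_interval(bins))  # central 80% interval
```

Linear interpolation inside the covering bin is an assumption; a coarser rule (e.g., snapping to bin edges) would give a wider interval.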
Your description of the processes you employ to enhance creativity in your students might be better described as behavioral algorithms. I would describe a behavioral algorithm as a sequence of behaviors or stimuli that increases the likelihood that some behavior, thought, or sentiment occurs or changes in a directed manner. While I have not found many instances of this phrase being used in this way (a quick Google Scholar search doesn’t return much), I would argue that this definition is still valuable. A hypothetical example (little bearing on reality) of a behavioral algorithm of the form ([behavior/stimuli sequence] → [outcome]) could be [10 minutes meditation → 3 minutes mild-intensity exercise → 10 minutes meditation → 10 minutes of any music → green tea] → [reduction in temporally local depressive feelings].
I have done some informal observational experiments to gauge behavioral algorithms that enhance creativity and that reduce depressive feelings. I will briefly describe the former, as it pertains to the topic of this post.
During my subway commutes several summers ago, I forced myself to generate 5 features of society or life that I thought could be better, and then I forced myself to come up with a solution to each of these. Before generating the problems, I would sit still for 15 minutes and try to avoid thinking about anything. I did this exercise (the still-mindedness and problem/solution generation) each day for one month. At the end of the exercise I found that it became much easier to generate problems and solutions, that my descriptions of the solutions became more detailed and practical, and that the solutions themselves seemed to be slightly more creative (this is subjective; I would say that the solutions became somewhat more clever). It could be the case that my creativity was not actually increasing, and rather that I was simply getting more efficient at generating ideas of the same degree of creativity (I don’t know how creativity is measured) as I had going into the informal experiment. Different approaches might be needed for improving the ‘cleverness’ or depth of a creative idea versus improving the rate of creative idea generation, where the level of creativity in this case is equal to the person’s baseline creativity.
I have not devoted the necessary time to generate robust experimental designs to test different behavioral algorithms for improving various dimensions of my health, creativity, or productivity, but I think it’d be interesting to scope out this topic more. It would be awesome if you could test out several variations of the current behavioral algorithms you use with your students, and then report how the outcomes differ between variations.
Thank you for this post as well!
Similar situation in my life...there are times when I am attempting to fall asleep and I realize suddenly that I am clenching my teeth and that there is considerable tension in my face. Beginning from my closed eyes down to my mouth I relax my facial muscles and I find it becomes easy for me to fall asleep.
In waking life too there are instances where I recognize my facial and bodily tension but I notice these situations less often than when I am trying to sleep. Being conscious of tension in my body and then addressing that tension when it occurs has on occasion made me calmer.
I am uncertain regarding the utility of more expensive interventions and or intensive investigations for alleviating tension, and have not really looked into it (but want to, somewhat).
Thank you for taking a look, Martin Vlach.
For the latter comment, there is a typo. I meant:
Coverage of this topic is sparse relative to coverage of CC’s direct effects.
The idea is that the corpus of work on how climate change is harmful to civilization includes few detailed analyses of the mechanisms through which climate change leads to civilizational collapse but does include many works on the direct effects of climate change.
For the former comment, I am not sure what you mean w.r.t “engender”.
Definition of engender
2 : to cause to exist or to develop : produce
“policies that have engendered controversy”
Same question here as well.
I applied to this several days ago (5 days, I believe). Is / was there any formal confirmation that my application was received? I am mildly concerned, as the course begins soon. Thank you.
This is great; thank you! I will send an email in the coming month. Also, a quick clarification: what is the relation between MQTransformer: Multi-Horizon Forecasts with Context Dependent and Feedback-Aware Attention and MQTransformer: Multi-Horizon Forecasts with Context Dependent Attention and Optimal Bregman Volatility?
I would like to think about this more, but thank you for posting this and switching my mind from System I to System II
Purpose
This shortform serves as a repository for my initial considerations for my forecast on the following Metaculus question (* see below for question link):
Forecast
How many gene-edited babies will have been born worldwide by the end of 2029?
This question was authored by Pablo on Metaculus.
After reading this, the following questions come to mind:
How many gene-edited babies have been born thus far (as of February 19th 2022)? [base-rate]
What are people’s current attitudes towards human gene-editing?
How might people’s attitudes towards human gene-editing change?
How much do people’s attitudes towards human gene-editing affect the regulations and policies on human gene-editing?
Are there historical technologies that fill similar societal niches, and if so, how did they turn out?
How likely is human gene-editing to take off (also, given this, how much will it take off?)?
(1) This source (https://getanimated.uk.com/meet-lulu-and-nana-the-worlds-first-crispr-genome-edited-babies/), along with Eli's comment on this question (https://www.metaculus.com/questions/3289/how-many-gene-edited-babies-will-have-been-born-worldwide-by-the-end-of-2029/#comment-79822), makes me believe that the base rate is 2 (I count the twins, Lulu and Nana, as a single instance of gene-edited babies) in 2022 − 2019 = 3 years (the question was written in 2019).
(2-4 & 6) Human gene-editing seems to be highly divisive in the scientific community (see https://journals.plos.org/plosbiology/article/file?id=10.1371/journal.pbio.3000224&type=printable). Also, generally, people seem to be averse to gene-editing in humans for enhancement purposes, but seem to agree that gene-editing may be useful for treatment of disease (see https://www.pewresearch.org/science/2020/12/10/biotechnology-research-viewed-with-caution-globally-but-most-support-gene-editing-for-babies-to-treat-disease/). This previous source from the PEW Research Center also indicates that religion is the dominant factor in people’s acceptance of human gene-editing. Indian survey respondents supported gene-editing in humans the most, and deemed it appropriate by a large margin.
Given this information, I doubt this base-rate is of much use, and believe instead that the use of gene-editing in humans will follow a nonlinear growth trajectory, with initialization occurring when the first nation legalizes human gene-editing.
I believe that, should people come to accept or desire human gene-editing, be it for treatment or enhancement, the scientific community will be unable to prevent these technologies from being used, somewhere.
Next, I believe that India might be one of the first few countries to approve use of human gene-editing; should India, or a cohort of other nations, adopt human gene-editing, I believe that this might rapidly (within 1 year) shift the Overton Window towards acceptance of human gene-editing, especially if the results of the editing appear to be promising.
Okay, so will any nation widely adopt human gene-editing? A Google search of "human gene editing india" produces results that give credence to the idea that, while human gene-editing is banned in India, there are many ambiguities in the laws, and many laws do not seem readily enforced. Many other nations surveyed in the PEW report also seem to have regulations on human gene-editing existing in "legal limbo".
(5 & forecast) I would put the probability of at least one country adopting human gene-editing in the next 8 years (2029 is about 8 years away) at 35% (adoption scenario). So, the probability that no country adopts human gene-editing in the next 8 years would be 65% (non-adoption scenario).
The adoption scenario (some nation(s) adopt(s) human gene-editing before 2029): I believe that the number of gene-edited humans might grow at a similar rate to how Internet usage grew (https://en.wikipedia.org/wiki/History_of_the_Internet#1989%E2%80%932004:_Rise_of_the_global_Internet,_Web_1.0 & https://www.internetworldstats.com/emarketing.htm), i.e. adoption of human gene-editing will be limited for the first 2-5 years, perhaps to treatment oriented use cases, before truly taking off (I believe usage for enhancement purpose might follow treatment usage in ~3 years). I believe this because both human gene-editing and the Internet appear to both be transformative technologies, and sentiment on human gene-editing appears similar (maybe more negative than simply disinterested) to early Internet usage sentiment. Sentiment against human gene-editing globally seems strong enough to make me believe that any “initial usage” will not occur until at least 2025. I believe that initial usage (including scenarios where more than a single nation adopts human gene-editing) over these 3-4 years will very likely be less than 10000 use cases. I believe that the first year might see something on the order of 250-1000 cases (25% lower bound − 75% upper bound), and, following the pattern of Internet growth, will increase to ~560-2250, then to ~1090-4365, and finally to ~2290-9170.
The non-adoption scenario (no nation legalizes human gene-editing): In this scenario, I believe there may still be somewhere between 5 and 100 (25% lower bound and 75% upper bound, respectively) illegal gene-edited births in the 8 years leading up to 2029.
So, altogether, the expected lower bound is [0.35 x 2290] + [0.65 x 5] = 801.5 + 3.25 = 804.75 = ~805 births, and the expected upper bound is [0.35 x 9170] + [0.65 x 100] = 3209.5 + 65 = 3274.5 = ~3275 births.
Until I take another look at this question, I put my current forecast at 805-3275.
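For transparency, here is a quick sketch of the scenario-weighted blend; it uses nothing beyond the stated 35%/65% scenario probabilities and the 25%/75% bound estimates given earlier:

```python
# Blend the adoption and non-adoption scenarios by their probabilities.
p_adopt = 0.35
adopt_lo, adopt_hi = 2290, 9170    # adoption-scenario bounds (25% / 75%)
no_lo, no_hi = 5, 100              # non-adoption-scenario bounds

expected_lo = p_adopt * adopt_lo + (1 - p_adopt) * no_lo
expected_hi = p_adopt * adopt_hi + (1 - p_adopt) * no_hi
print(round(expected_lo, 2), round(expected_hi, 2))  # 804.75 3274.5
```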
(*)