I'm thinking about writing a practical guide to having polygenically screened children (AKA superbabies) in 2025. You can now increase your kid's IQ by about 4-10 points and/or decrease their risk of some pretty serious diseases by doing IVF and picking an embryo with better genetic predispositions.
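For intuition on where a range like 4-10 points can come from, here's a back-of-the-envelope sketch. The predictor strength, sibling correlation, and embryo counts below are assumptions I'm supplying for illustration, not figures from any particular company.

```python
# Rough expected IQ gain from picking the highest-scoring embryo out of n.
# Assumed for illustration: the predictor explains 15-20% of within-family
# variance, sibling IQ correlation is ~0.5, and population SD is 15.
import numpy as np

rng = np.random.default_rng(0)

def expected_max_of_n(n, draws=200_000):
    """Monte Carlo estimate of E[max of n standard normal draws]."""
    return rng.standard_normal((draws, n)).max(axis=1).mean()

POP_SD = 15.0
SIBLING_CORR = 0.5  # assumed phenotypic correlation between siblings
within_family_sd = POP_SD * np.sqrt(1 - SIBLING_CORR)

# Expected phenotypic gain ~= E[max of n] * r_within * within-family SD
for n in (3, 5, 10, 20):
    for var_explained in (0.15, 0.20):
        r_within = np.sqrt(var_explained)
        gain = expected_max_of_n(n) * r_within * within_family_sd
        print(f"n={n:2d}, r^2={var_explained:.2f}: ~{gain:.1f} IQ points")
```

Under these assumptions the gain comes out to roughly 3-9 points depending on how many euploid embryos you have to choose from, which is at least in the same ballpark as the range quoted above.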
There's a bunch of little shit almost no one knows that can have a pretty significant impact on the success rate of the process: how to find a good clinic, what kinds of questions to ask your physician, how to get meds cheaply, how to get the most euploid embryos per dollar, which polygenic embryo selection company to pick, etc.
Would anyone find this useful?
I think this would be quite valuable! I don’t know how it compares to other things you do, but I definitely know a lot of people who end up doing a lot of their own research here in duplicated ways.
A friend of mine visited the recent 'eugenics'* conference in the Bay. It had all the prominent people in this area attending IIRC, e.g. Steve Hsu. My friend asked around about how realistic these numbers were, and he told me that the majority of serious people he spoke with were skeptical of IQ gains greater than ~3 points.
*sorry I don’t remember what it was called
This was perhaps an understandable viewpoint to hold in June when the best publicly available IQ predictor from the EA4 study only correlated with actual IQ at .3 in the general population (and less within family, which is what matters for embryo selection).
I happened to have spoken with some of the people from Herasight at the time and knew they had a predictor that performed quite a bit better than what was publicly available, which is where my optimism was coming from.
In October they finally published their validation white paper, so now I can point to something other than private conversations to show that you really can get as big a boost as claimed.
Some people are still skeptical. Sasha Gusev, for example, has claimed that Herasight applied a "fudge factor" to get to 20% of variance explained by adjusting for the noisiness of the UKBB and ABCD cohorts. This is based on the fact that their raw predictor explained 13.7% of the within-family variance, and they applied an "adjustment factor" to that number because the test they validated against only has a test-retest reliability of 0.61.
I don't find the critique all that convincing, though my knowledge of the reliability of different psychometric measures is still pretty limited, so take my opinion with a grain of salt. It's well known that UKBB's fluid intelligence test is pretty noisy, and the method they used to correct for that (disattenuation) seems pretty bog-standard.
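For concreteness, here's the standard disattenuation arithmetic applied to the figures above. This is my own back-of-the-envelope check, not Herasight's actual procedure; their published adjustment presumably uses more careful reliability estimates, which would explain why it lands at ~20% rather than the ~22% this naive version gives.

```python
# Spearman's correction for attenuation, using the numbers quoted above:
# raw within-family variance explained (13.7%) and the fluid intelligence
# test's test-retest reliability (0.61).
observed_r2 = 0.137
reliability = 0.61

observed_r = observed_r2 ** 0.5
corrected_r = observed_r / reliability ** 0.5  # r_true = r_obs / sqrt(reliability)
corrected_r2 = corrected_r ** 2                # equivalently observed_r2 / reliability

print(f"observed r  = {observed_r:.3f}")       # ~0.37
print(f"corrected r = {corrected_r:.3f}")      # ~0.47
print(f"corrected variance explained = {corrected_r2:.1%}")  # ~22%
```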
They also published a follow-up in which they used another method, latent variable modeling, which produced similar results.
All that being said, it would be better if there were third-party benchmarks like we have in the AI field to evaluate the relative strength of all these different predictors.
I think it’s probably about time to create or fund an org to do this kind of thing. We need something like METR or MLPerf for genetic predictors. No such benchmarks exist right now.
This is actually a real problem. No dataset exists right now that we can guarantee hasn’t been used in the training of these models. And while I basically believe that most of these companies have done their evaluations honestly (with the possible exception of Nucleus), relying on companies honestly reporting predictor performance when they have an economic incentive to exaggerate or cheat is not ideal.
I think you could actually start out with an incredibly small dataset. Even just 100 samples would be enough to make a binary "bullshit" or "plausible" validation set for continuous-value predictors like height or IQ.
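To sketch why ~100 samples might be enough for a coarse pass/fail check (though not for fine-grained ranking), here's the kind of test I have in mind. The function and the alpha level are hypothetical illustrations, not a spec for any existing benchmark.

```python
# Toy audit: with ~100 held-out genomes and measured phenotypes, you can't
# rank predictors precisely, but you can check whether a claimed correlation
# is even plausible.
import numpy as np
from scipy import stats

def audit_predictor(predicted, measured, claimed_r, alpha=0.05):
    """Return the observed correlation, its approximate CI, and a coarse verdict."""
    r, _ = stats.pearsonr(predicted, measured)
    n = len(measured)
    z = np.arctanh(r)                 # Fisher z-transform
    se = 1.0 / np.sqrt(n - 3)
    crit = stats.norm.ppf(1 - alpha / 2)
    lo, hi = np.tanh(z - crit * se), np.tanh(z + crit * se)
    verdict = "plausible" if lo <= claimed_r <= hi else "bullshit"
    return r, (lo, hi), verdict
```

With n = 100 the confidence interval around an observed correlation of ~0.45 is roughly ±0.15, so this can't separate two decent predictors from each other, but it can catch a claim that is wildly out of line with reality.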
https://www.lesswrong.com/posts/8ZExgaGnvLevkZxR5/attend-the-2025-reproductive-frontiers-summit-june-10-12 ?
I’m not planning to have kids soon but if I did, this would be worth several thousand dollars to me. IVF is such an investment and things are moving so rapidly that information on how to do things right should be extremely valuable.
Very helpful. If you are interested in adding a section about other countries/regions, I have done some research on various regulatory regimes (mainly in Europe). Happy to share.
Yes, please DM!
Billionaires read LessWrong. I have personally had two reach out to me after a viral blog post I made back in December of last year.
The way this works is almost always that someone the billionaire knows will send them an interesting post and they will read it.
Several of the people I’ve mentioned this to seemed surprised by it, so I thought it might be valuable information for others.
That’s not the kind of thing that’s good to legibly advertise.
I think an important point here is that GeneSmith actually wrote a post of high enough quality and interest to billionaires that people pass it around.
The mechanism he described is not about billionaires reading random posts on the front page but about high-value posts being passed around. Billionaires have networks that help them get sent posts that are valuable to them.
The point I'm making doesn't depend on the truth of the claim or the validity of the argument (from the GeneSmith post followup) that suggests it. What I'm suggesting implies that publicly legible discussion of the truth of the claim or the validity of the arguments is undesirable.
I think there’s a pretty strong default that discussing the truth of claims that actually matter to the decisions people make is worthwhile on LessWrong.
Saying that we can speak about the truth of some things, but not about the things that are actually motivating real-world decisions, seems to me like it's not good for LessWrong culture.
Sure, that's a consideration, but it's a global consideration that still doesn't depend on the truth of the claim or the validity of the argument. Per Yes Requires the Possibility of No, not discussing a Yes requires not discussing a No, and conversely. In the grandparent comment, I merely indicated that failing to discuss the truth of the claim or the validity of the argument is consistent with the point I was making.
Why not?
If the claim is sufficiently true and becomes sufficiently legible to casual observers, this shifts the distribution of new users, and behavior of some existing users, in ways that seem bad overall.
Tomorrow, everyone will have their Patreon account added to their LW profile, and all new articles will be links to Substack, where the second half of the article is available for paying subscribers only. :D
To be clear, I wish more LW users had Patreons linked to from their profiles/posts. I would like people to have the option of financially supporting great writers and thinkers on LessWrong.
I agree that the second thing sounds very damaging to public discourse.
Consider finding a way to integrate Patreon or similar services into the LW UI then. That would go a long way towards making it feel like a more socially acceptable thing to do, I think.
That could be great especially for people who are underconfident and/or procrastinators.
For example, I don't think anyone would want to send any money to me, because my blogging frequency is like one article per year, and the articles are perhaps occasionally interesting, but nothing world-changing. I'm like 99% sure about this. But just in the hypothetical case that I'm wrong… or maybe if in the future my frequency and quality of blogging increase but I forget to set up a way to sponsor me… if I found out too late that I was leaving money on the table while spending 8 hours a day at a job that doesn't really align with my values, I would be really angry.
The easiest solution could be something like this: if someone has a Patreon link, put it in their profile; but if someone doesn't, put there a button like "dude, too bad you don't have a Patreon account, otherwise I would donate $X per month to you right now". If someone clicks it and specifies a number, remember it, and when the total sum of hypothetical missed donations reaches a certain threshold, for example $1000 a month, display a notification to the user. That should be motivating enough to set up the account. And when the account is finally entered in the profile, all users who clicked the button in the past would be notified about it. -- So if people actually want to send you money, you will find out. And if they don't, you don't need to embarrass yourself by setting up and publishing the account.
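Something like the following sketch, where everything except the $1000/month threshold from above (the data structures, function names, and notification hooks) is invented purely for illustration:

```python
# Rough sketch of the "hypothetical pledge" mechanic described above.
from collections import defaultdict

THRESHOLD_PER_MONTH = 1000  # dollars, the example threshold from the comment

pledges = defaultdict(dict)  # author_id -> {pledger_id: monthly_amount}

def record_hypothetical_pledge(author_id, pledger_id, monthly_amount, notify):
    """Store a 'too bad you don't have a Patreon' pledge; nudge the author at the threshold."""
    pledges[author_id][pledger_id] = monthly_amount
    total = sum(pledges[author_id].values())
    if total >= THRESHOLD_PER_MONTH:
        notify(author_id, f"Readers would pledge ~${total}/month if you set up a Patreon.")

def on_patreon_linked(author_id, notify):
    """When the author finally links an account, tell everyone who pledged."""
    for pledger_id in pledges[author_id]:
        notify(pledger_id, f"{author_id} now has a Patreon link on their profile.")
```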
I also have some negative feelings about it. I think the most likely reason is that websites that offer the option of payment are often super annoying about it, like shoving the "subscribe" button in your face all the time. That's usually because the website itself gets a cut of the money sent. If this incentive does not exist, then the LW developers could make this option very unobtrusive. For example, only when you make a strong upvote, display a small "$" icon next to the upvote arrow, with the tooltip "would you like to support the author financially?", and only after clicking on it, show the Patreon link or the "too bad you don't have Patreon" button. Also, put the same "$" icon in the author's profile. -- The idea is that only the people who bother to look at an author's profile, or who made a strong upvote, would be interested in sending money, so the option should only be displayed to them. Furthermore, hiding the information behind a small "$" icon that needs to be clicked first makes it as unobtrusive as possible. (Even less obtrusive than having the Patreon link directly in the profile, which is how people would do it now.)
Linkposts to articles that are subscriber-only should be outright banned. (And if they are not, I would downvote them.) If you require payment for something, don't shove it in my face. It is okay to make a few free articles and use them as advertisement for the paid ones, but everyone who votes on an article should see the same content. -- That's how it de facto works now anyway; I don't really remember seeing a paid article linked from LW.
Basically, if someone wants to get paid and believes they will get the readers, there is already a standard way to do that: make a Substack account, post some free and some paid articles there, and link the free ones from LW. The advantage of my proposal is the feedback for authors who were not aware that they have a realistic option to get paid for their writing. Plus, if we have a standardized UI for this, authors do not need to think about whether to put the links in their profiles or their articles, how much would be too annoying, and how much means leaving money on the table.
I wish more LW users had Patreons linked to from their profiles/posts.

Is this something you've considered building into LW natively?
A norm is more effective when it acts at each of the individually insignificant steps, so that they don't add up. The question of whether the steps are pointing in the right direction is the same for all of them, so it might as well be considered seriously at the first opportunity, even when that opportunity isn't a notable event on the object level.
For the record, the ":D" at the end of my comment only meant that I don't think literally everyone will do this tomorrow. But yes, the temptation to move slightly in the given direction is real (I can feel it myself, though unfortunately I have no Patreon account and no product to sell, and I will probably forget this by tomorrow), and some people will follow the nudge more than others. Also, new people may be tempted to join for the wrong reasons.
On the other hand, even before saying it explicitly, this hypothesis was… not too surprising, in my opinion. I mean, we already knew that some rich people are supporting LW financially; it would make sense if they also read it occasionally. Also, we already had lots of people trying to join LW for the wrong reasons; most of them fail. So I think that the harm of saying this explicitly is small.
For the record, I think regular users being aware of the social and financial incentives on the site is worth the costs of people Goodharting on them. We have a whole system for checking the content of new users, which the team goes through daily to make sure it meets certain quality bars, and I still think that having a 100+ karma or curated post basically always requires genuinely attempting to make a valuable intellectual contribution to the world. That's not a perfect standard, but it has held up in the face of a ton of other financial incentives (be aware that starting safety researcher salaries at AI capabilities companies are like $300k+).
Why would the shift be bad? More politics, more fakery, less honest truth-seeking? Yeah that seems bad. There are benefits too though (e.g. makes people less afraid to link to LW articles). Not sure how it all shakes out.
Yep. Other important people (in government, in AGI research groups) do too.
I thought it was kind of known that a few billionaires were rationalist-adjacent in a lot of ways, given that effective altruism caught on with billionaire donors. Also, in the emails released by OpenAI (https://openai.com/index/openai-elon-musk/) there is a link to SlateStarCodex forwarded to Elon Musk in 2016, and Elon attended Eliezer's conference IIRC. There are quite a few places in adjacent circles that already hint at this possibility, like BasedBeffJezos's followers including billionaires, etc. I was kind of predicting that some of them would read popular things on here as well, since they probably have overlapping peer groups.
It's one of the most important issues ever, and it has a chance of solving the mass instability and unhappiness caused by wide inequality in IQ across the population, by giving the less-endowed a shot at increasing their intelligence.
How are people here dealing with AI doomerism? Thoughts about the future of AI and specifically the date of creation of the first recursively self-improving AGI have invaded almost every part of my life. Should I stay in my current career if it is unlikely to have an impact on AGI? Should I donate all of my money to AI-safety-related research efforts? Should I take up a career trying to convince top scientists at DeepMind to stop publishing their research? Should I have kids if that would mean a major distraction from work on such problems?
More than anything though, I’ve found the news of progress in the AI field to be a major source of stress. The recent drops in Metaculus estimates of how far we are from AGI have been particularly concerning. And very few people outside of this tiny almost cult-like community of AI safety people even seem to understand the unbelievable level of danger we are in right now. It often feels like there are no adults anywhere; there is only this tiny little island of sanity amidst a sea of insanity.
I understand how people working on AI safety deal with the problem; they at least can actively work on the problem. But how about the rest of you? If you don’t work directly on AI, how are you dealing with these shrinking timelines and feelings of existential pointlessness about everything you’re doing? How are you dealing with any anger you may feel towards people at large AI orgs who are probably well-intentioned but nonetheless seem to be actively working to increase the probability of the world being destroyed? How are you dealing with thoughts that there may be less than a decade left until the world ends?
DE-FACTO UPLOADING
Imagine for a moment you have a powerful AI that is aligned with your particular interests.
In areas where the AI is uncertain of your wants, it may query you as to your preferences in a given situation. But these queries will be “expensive” in the sense that you are a meat computer that runs slowly, and making copies of you is difficult.
So in order to carry out your interests at any kind of scale with speed, it will need to develop an increasingly robust model of your preferences.
Human values are context-dependent (see shard theory and other posts on this topic), so accurately modeling one’s preferences across a broad range of environments will require capturing a large portion of one’s memories and experiences, since those things affect how one responds to certain stimuli.
In the limit, this internal “model” in the AI will be an upload. So my current model is that we just get brain uploading by default if we create aligned AGI.
This may seem like small peanuts compared to AI ending the world, but I think it will be technically possible to de-anonymize most text on the internet within the next 5 years.
Analysis of writing style and an author's idiosyncrasies has a long history of being used to reveal the true identity of anonymous authors. It's how the Unabomber was caught and also how J.K. Rowling was revealed as the author of The Cuckoo's Calling.
Up until now, it was never really viable to perform this kind of analysis at scale. Matching up the authors of various works also required a single person to have read many of an author's previous texts.
I think LLMs are going to make textual fingerprinting at a global scale possible within the next 5 years (if not already). This in turn implies that any archived writing you’ve done under a pseudonym will be attributable to you.
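To make the idea concrete, here's a toy version of the kind of stylistic matching I mean, using classic function-word frequencies. A serious attempt would use far richer features (or LLM embeddings) and millions of candidate authors; everything here is illustrative.

```python
# Toy stylometric fingerprinting: match an anonymous text to the known author
# whose function-word usage is most similar.
from collections import Counter
import math
import re

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "was", "it", "for", "on", "with", "as", "but", "not"]

def style_vector(text):
    """Relative frequencies of common function words, a classic stylometric feature."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def best_match(anonymous_text, known_authors):
    """Return the name of the known author whose style is closest to the anonymous text."""
    anon = style_vector(anonymous_text)
    return max(known_authors, key=lambda name: cosine(anon, style_vector(known_authors[name])))
```

Run the same nearest-neighbor idea over millions of archived comments rather than a handful of authors, with models that capture far more than function words, and pseudonyms start to look very fragile.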
If we are in a simulation, it implies an answer to the question of “Why do I exist?”
Suppose the following assumptions are true:
The universe is part of some larger set of simulations designed by some meta-level entity who is chiefly concerned with the results of the simulation
The cost of computation to that entity is non-zero
If true, these assumptions imply a specific answer to the question "Why do I exist?": you exist because you are computationally irreducible.
By computationally irreducible, I mean that the state of the universe cannot be computed in any manner more efficient than simulating your life.
If it could be, and the assumptions stated above hold, it seems extremely likely that the simulation designer would have run a more efficient algorithm capable of producing the same results.
Perhaps this argument is wrong. It’s certainly hard to speculate about the motivations of a universe-creating entity. But if correct, it implies a kind of meaning for our lives: there’s no better way to figure out what happens in the simulation than you living your life. I find that to be a strangely comforting thought.
It seems like there is likely a massive inefficiency in the stock market right now in that the stocks of companies likely to benefit from AGI are massively underpriced. I think the market is just now starting to wake up to how much value could be captured by NVIDIA, TSMC and some of the more consumer facing giants like Google and Microsoft.
If people here actually believe that AGI is likely to come sooner than almost anyone expects and have a much bigger impact than anyone expects, it makes sense to buy these kinds of stocks because they are likely underpriced right now.
In the unlikely event that AGI goes well, you’ll be one of the few who stand to gain the most from the transition.
I basically already made this bet to a very limited degree a few months ago and am currently up about 20% on my investment. It’s possible of course that NVIDIA and TSMC could crash, but that seems unlikely in the long run.
I think it’s time for more people in AI Policy to start advocating for an AI pause.
It seems very plausible to me that we could be within 2-5 years of recursively self-improving AGI, and we might get an AGI-light computer virus before then (Think ChaosGPT v2).
Pausing AI development actually seems like a pretty reasonable thing to most normal people. The regulatory capacity of the US government is the most functional piece, and bureaucrats put in charge of regulating something love to slow down progress.
Both the hardware and software aspects need to be targeted: there should be strict limits placed on training new state-of-the-art models, and a program to limit sales of graphics cards and other hardware that can train the latest models.
FTX has just collapsed; Sam Bankman-Fried's net worth is probably quite low
Huge news from the crypto world this morning: FTX (Sam Bankman-Fried's company and the third largest crypto exchange in the world) has paused customer withdrawals and announced it is entering negotiations with Binance to be acquired. The rumored acquisition price is $1.

This has major implications for the EA/rationalist space, since Sam is one of the largest funders of EA causes. From what I've read, his net worth is tied up almost entirely in FTX stock and its proprietary cryptocurrency, FTT.

I can't find a source right now, but I think Sam's giving accounted for about a third of all funding in the EA space. So this is going to be a painful downsizing.

The story of what happened is complicated. I'll probably write something about it later.

Just read this: https://forum.effectivealtruism.org/posts/yjGye7Q2jRG3jNfi2/ftx-will-probably-be-sold-at-a-steep-discount-what-we-know
Does anyone have a good method to estimate the number of COVID cases India is likely to experience in the next couple of months? I realize this is a hard problem but any method I can use to put bounds on how good or how bad it could be would be helpful.