How are people here dealing with AI doomerism? Thoughts about the future of AI and specifically the date of creation of the first recursively self-improving AGI have invaded almost every part of my life. Should I stay in my current career if it is unlikely to have an impact on AGI? Should I donate all of my money to AI-safety-related research efforts? Should I take up a career trying to convince top scientists at DeepMind to stop publishing their research? Should I have kids if that would mean a major distraction from work on such problems?
More than anything though, I’ve found the news of progress in the AI field to be a major source of stress. The recent drops in Metaculus estimates of how far we are from AGI have been particularly concerning. And very few people outside of this tiny, almost cult-like community of AI safety people even seem to understand the unbelievable level of danger we are in right now. It often feels like there are no adults anywhere; there is only this tiny little island of sanity amidst a sea of insanity.
I understand how people working on AI safety deal with the problem; they at least can actively work on the problem. But how about the rest of you? If you don’t work directly on AI, how are you dealing with these shrinking timelines and feelings of existential pointlessness about everything you’re doing? How are you dealing with any anger you may feel towards people at large AI orgs who are probably well-intentioned but nonetheless seem to be actively working to increase the probability of the world being destroyed? How are you dealing with thoughts that there may be less than a decade left until the world ends?
It seems like there is likely a massive inefficiency in the stock market right now, in that the stocks of companies likely to benefit from AGI are massively underpriced. I think the market is just now starting to wake up to how much value could be captured by NVIDIA, TSMC, and some of the more consumer-facing giants like Google and Microsoft.
If people here actually believe that AGI is likely to come sooner than almost anyone expects and have a much bigger impact than anyone expects, it makes sense to buy these kinds of stocks, because they are likely underpriced right now.
In the unlikely event that AGI goes well, you’ll be one of the few who stand to gain the most from the transition.
I basically already made this bet to a very limited degree a few months ago and am currently up about 20% on my investment. It’s possible, of course, that NVIDIA and TSMC could crash, but that seems unlikely in the long run.
I think it’s time for more people in AI Policy to start advocating for an AI pause.
It seems very plausible to me that we could be within 2-5 years of recursively self-improving AGI, and we might get an AGI-lite computer virus before then (think ChaosGPT v2).
Pausing AI development actually seems like a pretty reasonable thing to most normal people. And of the levers available, the US government’s regulatory capacity is the most functional one: bureaucrats put in charge of regulating something love to slow progress down.
Both the hardware and the software sides need to be targeted: strict limits on training new state-of-the-art models, plus a program restricting sales of graphics cards and other hardware capable of training them.
This may seem like small peanuts compared to AI ending the world, but I think it will be technically possible to de-anonymize most text on the internet within the next 5 years.
Analysis of writing style and an author’s idiosyncrasies has a long history of being used to reveal the true identities of anonymous authors. It’s how the Unabomber was caught, and how JK Rowling was revealed as the author of The Cuckoo’s Calling.
Up until now, though, it was never really viable to perform this kind of analysis at scale. Matching up the authors of various works also required a single analyst to have read much of a candidate author’s previous writing.
I think LLMs are going to make textual fingerprinting at a global scale possible within the next 5 years (if not already). This in turn implies that any archived writing you’ve done under a pseudonym will be attributable to you.
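To make the mechanics concrete, here’s a toy sketch of the classic, pre-LLM version of this technique: character n-gram TF-IDF vectors compared by cosine similarity. The author names and text snippets below are invented placeholders, and a real LLM-scale system would presumably swap these hand-built features for learned embeddings over millions of archived documents.

```python
# Toy sketch of classic stylometric attribution: character n-gram
# TF-IDF vectors compared by cosine similarity. Authors and snippets
# are invented placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known = {
    "author_a": "I reckon, frankly, that the results speak for themselves.",
    "author_b": "One must concede, upon reflection, that the data is ambiguous.",
}
anonymous = "Frankly, I reckon these figures speak for themselves."

# Character 3-5-grams pick up punctuation habits, function words, and
# spelling quirks -- exactly the idiosyncrasies stylometry relies on.
vec = TfidfVectorizer(analyzer="char", ngram_range=(3, 5))
matrix = vec.fit_transform(list(known.values()) + [anonymous])

# Score the anonymous text against each known author's sample.
scores = cosine_similarity(matrix[-1], matrix[:-1])[0]
for author, score in zip(known, scores):
    print(f"{author}: similarity {score:.3f}")
```

With real corpora you’d build a profile per author from many documents rather than one snippet, but the basic shape of the attack is the same.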
If we are in a simulation, it implies an answer to the question of “Why do I exist?”
Suppose the following assumptions are true:
The universe is one simulation among some larger set of simulations, designed by a meta-level entity chiefly concerned with the simulations’ results
The cost of computation to that entity is non-zero
If true, these assumptions imply a specific answer to the question “Why do I exist?”: you exist because you are computationally irreducible.
By computationally irreducible, I mean that the state of the universe cannot be computed any more efficiently than by simulating your life.
If it could, and the assumptions above hold, it seems extremely likely that the simulation designer would have run the more efficient algorithm capable of producing the same results instead.
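One way to make that step precise, with notation of my own (nothing here is standard):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Notation is mine, not the post's: $R$ is the result the designer
% wants, $C(p)$ the compute cost of program $p$, and $S$ the full
% simulation whose trace includes your life.
A cost-sensitive designer runs the cheapest program that yields $R$:
\[
  p^{*} = \operatorname*{arg\,min}_{p \,:\, p \text{ yields } R} C(p) .
\]
% If some $p'$ yielded $R$ without computing your life, at cost
% $C(p') < C(S)$, the designer would have run $p'$ and you would not
% exist. So, conditional on your existing:
\[
  \text{you exist} \;\Longrightarrow\; \neg \exists\, p' :\;
  p' \text{ yields } R \ \wedge\ C(p') < C(S) .
\]
\end{document}
```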
Perhaps this argument is wrong. It’s certainly hard to speculate about the motivations of a universe-creating entity. But if correct, it implies a kind of meaning for our lives: there’s no better way to figure out what happens in the simulation than you living your life. I find that to be a strangely comforting thought.
FTX has just collapsed; Sam Bankman-Fried’s net worth probably quite low
Huge news from the crypto world this morning: FTX (Sam Bankman-Fried’s company and the third-largest crypto exchange in the world) has paused customer withdrawals and announced it is entering negotiations with Binance to be acquired. The rumored acquisition price is $1.
This has major implications for the EA/Rationalist space, since Sam is one of the largest funders of EA causes. From what I’ve read his net worth is tied up almost entirely in FTX stock and its proprietary cryptocurrency, FTT.
I can’t find a source right now, but I think Sam’s giving accounted for about a third of all funding in the EA space. So this is going to be a painful downsizing.
The story of what happened is complicated. I’ll probably write something about it later. Just read this: https://forum.effectivealtruism.org/posts/yjGye7Q2jRG3jNfi2/ftx-will-probably-be-sold-at-a-steep-discount-what-we-know
Does anyone have a good method to estimate the number of COVID cases India is likely to experience in the next couple of months? I realize this is a hard problem but any method I can use to put bounds on how good or how bad it could be would be helpful.
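The best baseline I’ve come up with so far is embarrassingly crude: fit exponential growth to the most recent daily counts, then project under a pessimistic bound (growth continues unchecked) and an optimistic one (the growth rate decays). The numbers below are invented placeholders, not real Indian case data, and reported cases will badly undercount true infections.

```python
# Crude bounding sketch: fit exponential growth to recent daily case
# counts, then project a 2-month band. All numbers are placeholders.
import numpy as np

daily_cases = np.array([180_000, 200_000, 217_000, 261_000, 273_000])
days = np.arange(len(daily_cases))

# Fit log(cases) ~ a + r*t by least squares; r is the implied daily
# exponential growth rate of the recent trend.
r, a = np.polyfit(days, np.log(daily_cases), 1)

t = np.arange(1, 61)            # project ~2 months ahead
log_today = a + r * days[-1]    # fitted log-level today

# Pessimistic bound: growth continues unchecked at rate r.
pessimistic = np.exp(log_today + r * t)
# Optimistic bound: growth rate decays by 5%/day (made-up constant,
# vary it for sensitivity) as immunity and restrictions bite; the
# cumulative exponent is the integral of r*exp(-0.05*s).
optimistic = np.exp(log_today + r * (1 - np.exp(-0.05 * t)) / 0.05)

print(f"fitted daily growth rate: {r:.3f}")
print(f"2-month cumulative reported cases, optimistic: {optimistic.sum():,.0f}")
print(f"2-month cumulative reported cases, pessimistic: {pessimistic.sum():,.0f}")
```

If anyone has something better, ideally something that handles underreporting and regional heterogeneity, I’d love to hear it.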