I second questions 1, 5, and 6 after listening to the Dwarkesh interview.
Re 6: At 1:24:30 in the Dwarkesh podcast, Leopold proposes that the US make an agreement with China to slow down (/pause) once the US has a 100GW cluster and is clearly going to win the race to build AGI, in order to buy time to get things right during the “volatile period” before AGI.
(Note: Regardless of whether it was worth it in this case, simeon_c’s reward/incentivization idea may be worthwhile as long as some future cases are expected where it is worth it, since the people in those cases may not be as willing as Daniel to make the altruistic personal sacrifice, and we’d want them to retain their freedom to speak without it costing them as much personally.)
I’d be interested in hearing people’s thoughts on whether the sacrifice was worth it, assuming that counterfactual Daniel would have used the extra net worth altruistically. Is Daniel’s ability to speak more freely worth more than the altruistic value that could have been achieved with the extra net worth?
Retracted, thanks.
Retracted due to spoilers and not knowing how to use spoiler tags.
Received $400 worth of bitcoin. I confirm the bet.
@RatsWrongAboutUAP I’m willing to risk up to $20k at 50:1 odds (i.e., if you give me $400 now, I’ll owe you $20k in 5 years if you win the bet), conditional on (1) you not being privy to any non-public information about UFOs/UAP and (2) you being okay with forfeiting any potential winnings in the unlikely event that I die before bet resolution.
Re (1): Could you state clearly whether you do or do not have non-public information pertaining to the bet?
Re (2): FYI, the odds of me dying in the next 5 years are less than 3% by SSA base rates, and my credence is even lower than that if we don’t account for global or existential catastrophic risk. The reason I’d ask not to owe you any money in the worlds where you win (and are still alive to collect) and I’m dead is that I wouldn’t want anyone else to become responsible for settling such a significant debt on my behalf. (The arithmetic is sketched below.)
If you accept, please reply here and send the money to this Bitcoin address: 3P6L17gtYbj99mF8Wi4XEXviGTq81iQBBJ
I’ll confirm receipt of the money when I get notified of your reply here. Thanks!
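A minimal sketch of the arithmetic behind the offer above, assuming the cash flows are exactly as stated; the 3% death-risk figure is a stand-in for the “less than 3%” SSA base rate cited in the comment:

```python
# Illustrative arithmetic for the 50:1 bet offer above (not part of the original comments).
stake = 400       # dollars the counterparty sends up front
payout = 20_000   # dollars owed to the counterparty in 5 years if they win

# Break-even win probability for the counterparty, ignoring the forfeit-on-death clause:
# they come out ahead only if p * payout exceeds the stake they sent.
break_even = stake / payout
print(f"break-even win probability: {break_even:.2%}")  # 2.00%

# With the forfeit-on-death clause, winnings are collected only if the offerer is
# still alive. Treating the cited "<3%" five-year death risk as exactly 3% (an
# assumption), the counterparty's break-even win probability rises slightly.
p_death = 0.03
break_even_with_clause = stake / (payout * (1 - p_death))
print(f"with the forfeit-on-death clause: {break_even_with_clause:.2%}")  # ~2.06%
```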
IMO the largest trade-offs of being vegan for most people aren’t health trade-offs, but other things, like the increased time/attention cost of identifying non-vegan foods. Living in a place where there’s a ton of non-vegan food available at grocery stores and restaurants makes getting food more of a pain than it is when you’re not paying close attention to what’s in your food. (I’m someone without any food allergies, and I imagine being vegan is about as annoying as having certain food allergies.)
That being said, it also seems to me that the vast majority of people’s diets are not well optimized for health. Most people care about convenience, cost, taste, and other factors as well. My intuition is that if we took a random person and said “hey, you have to go vegan, let’s try to find a vegan diet that’s healthier than your current diet,” we’d succeed the vast majority of the time simply because most people don’t eat very healthily. That said, the random person would probably prefer a vegan diet optimized for factors beyond health alone over one optimized purely for health.
I only read the title, not the post, but just wanted to leave a quick comment to say I agree that veganism entails trade-offs, and that health is one of the axes. Also note that I’ve been vegan since May 2019 and lacto-vegetarian since October 2017, for ethical reasons rather than environmental, health, or other preference reasons.
It has long (since before I changed my diet) been obvious to me that your title statement is true, since a priori it seems very unlikely that the health-optimal diet is one containing exactly zero animal products, given that humans are omnivores. One doesn’t need to be informed about nutrition to make that inference.
Probability that most humans die because of an AI takeover: 11%
This 11% is for “within 10 years” as well, right?
Probability that the AI we build doesn’t take over, but that it builds even smarter AI and there is a takeover some day further down the line: 7%
Does “further down the line” here mean “further down the line, but still within 10 years of building powerful AI”? Or do you mean it unqualified?
I made a visualization of Paul’s guesses to better understand how they overlap:
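The visualization itself was an image and isn’t reproduced here. As a rough stand-in for the kind of chart described, here is a minimal matplotlib sketch that stacks probability estimates along a single 0–100% bar, using only the 11% and 7% figures quoted in this thread as placeholder values (Paul’s full set of guesses isn’t shown, and treating the segments as disjoint is an assumption):

```python
# Rough sketch (not the original image): stack quoted probability estimates
# as segments of a single 0-100% horizontal bar.
import matplotlib.pyplot as plt

segments = [
    ("Most humans die because of an AI takeover", 0.11),
    ("No takeover by this AI, but takeover further down the line", 0.07),
]
remainder = 1.0 - sum(p for _, p in segments)
segments.append(("Everything else", remainder))

fig, ax = plt.subplots(figsize=(8, 1.5))
left = 0.0
for label, p in segments:
    ax.barh(0, p, left=left, label=f"{label} ({p:.0%})")
    left += p
ax.set_xlim(0, 1)
ax.set_yticks([])
ax.set_xlabel("Probability")
ax.legend(loc="upper center", bbox_to_anchor=(0.5, -0.4), fontsize=8)
plt.tight_layout()
plt.show()
```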
I took issue with the same statement, but my critique is different: https://www.lesswrong.com/posts/mnCDGMtk4NS7ojgcM/linkpost-what-are-reasonable-ai-fears-by-robin-hanson-2023?commentId=yapHwa55H4wXqxyCT
But to my mind, such a scenario is implausible (much less than one percent probability overall) because it stacks up too many unlikely assumptions in terms of our prior experiences with related systems.
You mentioned 5-6 assumptions. I think at least one isn’t needed (that the goal changes as the AI self-improves), and I disagree that the others are (all) unlikely. E.g., agentic, non-tool AIs are already here and more will be coming (foolishly). Taking a point I just heard from Tegmark on his latest Lex Fridman podcast interview: once companies add APIs to systems like GPT-4 (and I’m worried about open-sourced systems that are as powerful or more powerful in the next few years), it will be easy for people to create AI agents that use the LLM’s capabilities by repeatedly calling it.
This is the fear of “foom,”
I think the popular answer to this survey also includes many slow takeoff, no-foom scenarios.
And then, when humans are worth more to the advance of this AI’s radically changed goals as mere atoms than for all the things we can do, it simply kills us all.
I agree with this, though again I think the “changed” can be omitted.
Secondly, I also think it’s possible that, rather than the unaligned superintelligence killing us all in the same second, as EY often says, it may kill us off in a manner like how humans kill off other species (i.e., we know we are doing it, but it doesn’t look like a war).
Re my last point, see Ben Weinstein-Raun’s vision here: https://twitter.com/benwr/status/1646685868940460032
Furthermore, the goals of this agent AI change radically over this growth period.
Noting that this part doesn’t seem necessary to me. The agent may be misaligned before the capability gain.
I wasn’t aware of this and would like more information. Can anyone provide a source, or report their agreement or disagreement with the claim?