I can see that tweet implying some transhumanist position, not necessarily extinction. I think he is about to have a debate with Connor Leahy, so it will all be cleared up.
AI realism also risks becoming a security theater that obscures the existential risks of AI.
Life extension as Bryan Johnson lays it out is mostly pseudoscience. He is optimizing for biomarkers instead of the actual issue, and it remains to be proven that these biomarkers stay useful proxies once they are optimized for.
The key problem is that it seems difficult to iterate on anti-aging tech without potentially failing to extend life over many generations. Bryan Johnson’s ideas may work (although if I had to bet on it, I’d put it at close to no chance at all), but we won’t find out for sure until he has irreversibly aged. AI could, in theory, let us predict the effects of life-extension drugs without having to wait for the drugs to fail on people.
I don’t see why we necessarily need AGI for this, but an AlphaFold-7 or something of the like would probably help a lot.
I see a similar trend in other areas like genetic screening and space travel. There’s a rose-tinted view of current efforts succeeding at anything, or making any substantial progress, in any of these categories.
SpaceX itself isn’t even economically viable without government subsidies. Substantial space exploration is probably nowhere in sight; we still can’t guarantee rockets won’t explode on launch. (Space flight is hard, and our progress is nowhere near what something like space tourism would require.)
Similarly, the state of genetic screening broadly seems to be weak evidence that you can reduce the odds of rare diseases by some amount (with large error bars). That is a far cry from selecting for higher IQ or stronger children.
The default outlook for most of these fields seems hopeless, with little progress likely in our lifetimes.
We also probably need lots of new ideas to solve climate change, and new ideas will become scarce as populations decline and society collectively shifts to serve the needs of the old. AGI would help us solve this.
The progress in competitive programming seems to be measured in a way that makes AlphaCode 2 appear better than it is. It:
1. Samples ~1e6 solutions.
2. Of all the solutions that pass the given test cases, picks the 10 with the best “score”.
3. Submits up to 10 of those solutions until one of them passes.
Steps 1 and 2 seem fine, but a human competitor in one of these contests would be penalized for step 3, which AlphaCode 2 appears not to be[1]. Furthermore, training-set contamination, combined with the fact that these are only “easier” Div 2 questions, implies that solutions to these problems could very well appear in the training data, and AlphaCode 2 just reconstructs them near verbatim.
In defense of AlphaCode 2, the fine-tuned scoring model that picks the 10 best might be a non-trivial creation. It also seems AC2 is more sample-efficient than AC1, so it is getting better at generating solutions. Assuming nothing funky is happening with the training set, at the limit this means one solution per sample.
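For concreteness, here is a minimal Python sketch of the sample/filter/rank loop described above. Every name in it is hypothetical; the actual AlphaCode 2 implementation has not been released, only described at a high level:

```python
from typing import Callable, Optional

def alphacode2_style_solve(
    statement: str,
    sample: Callable[[str], str],            # hypothetical: draws one candidate program
    passes_examples: Callable[[str], bool],  # hypothetical: runs the provided test cases
    score: Callable[[str], float],           # hypothetical: fine-tuned scoring model
    judge_accepts: Callable[[str], bool],    # hypothetical: hidden-test verdict
    n_samples: int = 1_000_000,
    max_submissions: int = 10,
) -> Optional[str]:
    # Step 1: sample ~1e6 candidate solutions.
    candidates = [sample(statement) for _ in range(n_samples)]

    # Step 2: keep candidates that pass the given test cases,
    # ranked best-first by the scoring model.
    survivors = sorted(
        (c for c in candidates if passes_examples(c)),
        key=score,
        reverse=True,
    )

    # Step 3: submit up to 10, stopping at the first accepted one.
    # A human would accrue a penalty for each wrong submission here.
    for candidate in survivors[:max_submissions]:
        if judge_accepts(candidate):
            return candidate
    return None
```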
- ^
Could be wrong, but if I am, the paper should have made it more explicit.
Daniel K seems pretty open about his opinions and reasons for leaving. Did he not sign an NDA and thus give up whatever PPUs he had?
OpenAI wants to raise $5-7 trillion.
https://x.com/janleike/status/1791498174659715494?s=46&t=lZJAHzXMXI1MgQuyBgEhgA
Leike explains his decisions.
Even in probabilistic terms, the evidence of OpenAI members respecting their NDAs makes it more likely that this was some sort of political infighting (EA-related) than sub-year takeoff timelines. I would be open to a 1-year takeoff; I just don’t see it happening given the evidence. OpenAI wouldn’t need to talk about raising trillions of dollars, companies wouldn’t be trying to commoditize their products, and the employees who quit OpenAI would speak up.
Political infighting is in general just more likely than very short timelines, which would run counter to most prediction markets on the matter. Not to mention, given it’s already happened with the firing of Sam Altman, it’s far more likely to have happened again.
If there were a probability distribution over timelines, current events would indicate that sub-3-year ones have negligible odds. If I am wrong about this, I implore the OpenAI employees to speak up. I don’t think normies misunderstand probability distributions; they just usually tend not to care about unlikely events.
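To make the probabilistic framing concrete, here is a minimal Bayes-rule sketch in Python. All of the numbers are hypothetical stand-ins chosen for illustration, not estimates from any source:

```python
# Hypothetical prior and likelihoods, purely illustrative.
p_short = 0.10                  # prior: sub-3-year takeoff
p_silent_given_short = 0.20     # leavers stay silent despite imminent takeoff
p_silent_given_long = 0.90      # leavers stay silent in the ordinary case

# Bayes' rule: P(short timelines | departing employees stay silent)
posterior = (p_silent_given_short * p_short) / (
    p_silent_given_short * p_short + p_silent_given_long * (1 - p_short)
)
print(f"P(short | silence) = {posterior:.3f}")  # = 0.024
```

The exact values don’t matter; the point is that silence from departing employees is much more expected under the infighting hypothesis, so the update pushes short-timeline odds down.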
From a short read, capabilities seem equal to GPT-4. AlphaCode 2 is also not penalized for its first 9 submissions, so I struggle to see how it can be compared to humans.
So, Meta disbanded its responsible AI team. I hope this story reminds everyone about the dangers of acting rashly.
Firing Sam Altman was really a one-time-use card.
Microsoft probably threatened to pull its investments and compute, which would let Sam Altman’s new competitor pull ahead regardless, as OpenAI would be left eviscerated in terms of both funding and human capital. This move makes sense if you’re at the precipice of AGI, but not before that.
Many humans, given a choice between:
A) they and their loved ones (actually, everyone on Earth) live forever, with an X-risk of p;
B) the same happens, but only after they and everyone they love have died, with an X-risk less than p;
would choose A.
Abortion has a somewhat similar parallel, but with economic risk instead of X-risk and obviously no immortality, yet many are pro-choice.
I think valuing the lives of future humans you don’t know over the lives of yourself and your loved ones is the alien choice here.
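One hedged way to formalize this (the utilities here are my own illustrative notation, not anything from the original discussion): let $U_{\text{now}}$ be the value a person places on themselves and their loved ones living forever, and $U_{\text{future}}$ the value they place on strangers getting the same outcome after their death. Choosing A over B maximizes expected utility whenever

$$(1-p)\,U_{\text{now}} > (1-p')\,U_{\text{future}},$$

where $p' < p$ is the lower X-risk in option B. Equivalently, A wins whenever $U_{\text{now}}/U_{\text{future}} > (1-p')/(1-p)$, which holds even for fairly large $p$ if you weight your own and your loved ones’ survival far above that of people you will never meet.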
> It’s normal to lead a productive and enjoyable life without a romantic partner.
This is arguably false. Long-term unpartnered men suffer earlier deaths and worse mental health. I think fundamentally we have evolved to reproduce, and it would be odd if we didn’t tend to get depressive thoughts and poorer health from being alone.
I don’t see this as an issue easily solved by therapy. It would be like giving a homeless person therapy to take their mind off homelessness, as opposed to giving them a home. Can you imagine therapy for a socially isolated person suffering from loneliness involving anything other than how to stop being socially isolated? What would that even look like?
I expect investors will take the non-profit status of these companies more seriously going forwards.
I hope Ilya et al. realize what they’ve done.
Edit: I think I’ve been vindicated a bit. As I expected, money would just flock to for-profit AGI labs, as it is poised to do right now. I hope OpenAI remains a non-profit, but I think Ilya played with fire.
I can only say there was probably someone in every rolling 100-year period who thought the same about the next 100 years.
Arguably SF, and possibly other cities, don’t count. In SF, Waymo and Cruise require you to join a relatively exclusive waitlist; I don’t see how that can be considered “publicly available”. Furthermore, Cruise is very limited in SF: for a lot of users, including myself, it’s only available from 10 pm to 5 am in half the city. I can’t comment on Waymo, as it has been months since I signed up for the waitlist.
Money and power won’t matter as much, but status within your social “tribe” will probably be one of the most important things to most people. For example, being good at video games or sports, or getting others to follow your ideas.
The idea is that R&D will already be partially automated before hitting the 99% mark, so 99% marks the end of a gradual shift towards automation.
We use fossil fuels for a lot more than energy, and there’s more to climate change than fossil-fuel emissions: energy usage is roughly 75% of emissions, and about 25% of oil is used for manufacturing. My impression is we are way over the fossil-fuel budgets that would result in reasonable levels of warming. Furthermore, a lot of green energy will be a hard sell to developing nations.
Maybe replacing as much oil with nuclear as is politically feasible reduces emissions, but does it reduce them enough? Current models[1] assume we invent carbon-capture technology somewhere down the line, so things are looking dire.
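As a back-of-envelope illustration of why nuclear substitution alone may fall short (the displacement figure here is an assumption made up for the example, not a sourced estimate):

```python
# Rough arithmetic, with an assumed (hypothetical) displacement share.
energy_share = 0.75   # energy use as a share of global emissions (per above)
displaceable = 0.30   # assumed fraction of fossil energy nuclear could replace
reduction = energy_share * displaceable
print(f"Total emissions cut: {reduction:.0%}")  # 22%: helpful, not sufficient
```

Even under generous assumptions you only shave a fraction off the 75% energy slice, which is why the models lean on future carbon capture.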
It’s clear we have this idea that we will partially solve this issue in time with engineering, and it does seem that way if you look at history. However, recent history had the advantage of constant population growth with an emphasis on new ideas and entrepreneurship. If you look at what happened to a country like Japan when its age pyramid shifted, you can see that a country can get stuck with backward tech as society restructures itself to take care of the elderly.
So I think any assumption that we will have exponential technological progress is “trend chasing”, so to speak. A lot of our growth curves almost require mass automation or AGI to work; without that, you probably get stagnation. Economists projected this back in 2015, and it seems not much has changed since[2]. Now[3].
I think it’s fine to have the opinion that the risk of AGI going wrong could be higher than the risks from stagnation and other existential risks, but I also think having an unnecessarily rose-tinted view of progress isn’t accurate. In that case, for example, you may be overestimating AGI risk relative to other risks.
- ^
https://newrepublic.com/article/165996/carbon-removal-cdr-ipcc-climate-change
- ^
https://www.conference-board.org/topics/global-economic-outlook#:~:text=Real%20GDP%20growth%2C%202023%20(%25%20change)&text=Global%20real%20GDP%20is%20forecasted,to%202.5%20percent%20in%202024.
- ^
The last few days should show that it’s not enough to have power cemented in technicalities, board seats, or legal contracts. Power comes from gaining the support of billionaires, journalists, and human capital. It’s kind of crazy that Sam Altman essentially rewrote the rules, whether he was justified or not.
The problem is that the types of people who are good at gaining power tend to have values that are incompatible with EA. The real silver lining, to me, is that while it’s clear Sam Altman is power-seeking, he’s also probably a better option to have there than the rest of the people good at seeking power, who might not even entertain x-risk.