A crux I have on the point about disincentivising developers from developing parts of their own land—how common is this? In my own country, the answer is: not at all. Almost all development comes from the government building infrastructure, schools, etc., and developers buy land near where they know the government will build a metro line or whatever, to leech off the benefits. Is the situation in the US that developers often buy big plots of cheap land and develop them with roads, hospitals, and schools, to benefit from the rise in value of all the other land?
I think this view is quite US-centric, as in fact most countries in the world do not include mineral rights in land ownership (and yet minerals are explored everywhere, not just in the US, meaning imo that the profit motive is alive and well when you need to buy licences on top of the land; it's just priced in differently). From Claude:
In a relatively small number of countries, private landowners own mineral rights (including oil) under their property. The United States is the most notable example, where private mineral rights are common through the concept of “mineral estate.” Even in the US though, there are some limitations and government regulations on extraction.
The vast majority of countries follow the “state ownership” model, where subsurface minerals including oil are owned by the government regardless of who owns the surface land. This includes:
Most of Europe (including UK, France, Germany)
Russia
China
Most Middle Eastern countries
Most African nations
Most Latin American countries
Canada (where the provinces generally own mineral rights)
Mexico (where oil specifically is constitutionally defined as state property)
Australia (where states own mineral rights)
Even in countries that technically allow private mineral ownership, state-owned companies often have exclusive rights to develop oil resources (like Saudi Aramco in Saudi Arabia or PEMEX in Mexico).
The US system of widespread private mineral rights is quite unique globally. There are a few other countries that have limited forms of private mineral rights, but none with the same extensive private ownership system as the US.
It sounds like you should apply for the PIBBSS Fellowship! (https://pibbss.ai/fellowship/)
Excellent article, and helpful in introducing vocabulary that lets me articulate things I was trying to understand. Perhaps it should be cross-posted to the EA Forum?
Future wars are about to look very silly.
I’m very sad I cannot attend at that time, but I am hyped about this and believe it to be valuable, so I am writing this endorsement as a signal to others. I’ve also recommended this to some of my friends, but alas a UK visa is hard to get on such short notice. When you run it in Serbia, we’ll have more folks from the eastern bloc represented ;)
I think an important thing here is:
A random person gets selected for office. Maybe they need to move to the capital city, but their friends are still “back home.” Once they serve their term, they will most likely want to come back to their community. So lobbying needs to be able to pay enough to get you out of your community and break all your bonds during your short stint in power. Currently, politicians slowly come to power, and their social clique is used to being lobbied, getting rich, and selling out ideals.
This would cut down on corruption a lot (see also John Huang’s comment https://www.lesswrong.com/posts/veebprDdTbq2Xmnyj/could-randomly-choosing-people-to-serve-as-representatives?commentId=NEtq8QtayXZY5a38J) and would undo a lot of the damage done from politicians not having to live normal lives under the current system.
Apologies, there was a typo in the original: I do think it’s not charity to not increase publicity; the post was missing a “not”. Your response still clarified your position, but I do disagree—common courtesy is not the same as charity, and expecting it is not unreasonable. I feel like not publishing our private conversation (whether you’re a journalist or not) falls under common courtesy or normal behaviour rather than “charity”. Standing more than one centimeter away from you when talking is not charity just because standing closer is technically legal—it’s a normal and polite thing to do, so when someone comes super close to my face when talking, I have the right to be surprised and protest. Escalating publicity is like escalating intimacy in this example.
I feel like if someone internalized “treat every conversation with people I don’t know as if they may post it super publicly—and all of this is fair game”, we would lose a lot of commons, and our quality of life and discourse would go down. I don’t think it’s “charity” to [EDIT: not] increase the level of publicity of a conversation, whether digital or in person. I think drawing a parallel with in-person conversation is especially enlightening—imagine we were having a conversation in a room with CCTV (you’re aware it’s recorded, but believe it to be private). Me taking that recording and playing it on local news is not just “uncharitable”—it’s wrong in a way which degrades trust.
Amazing recommendation which I very much enjoyed, thanks for sharing!
Amazing write-up, thank you for the transparency and thorough work of documenting your impact.
[Epistemic status: somewhat informed speculation] TLDR: I do not believe China was a major threat source, recession makes it slightly less likely they will be one too. Conventional wars are more likely to happen, and their effect on AI development is uncertain.
I generally do not think China is as big a threat in the AGI race as some others (notably Aschenbrenner) think. I think for AGI to be first developed in China, several factors need to be true: China has more centralized compute available than other countries, open models are near the frontier but not over the AGI threshold, and China’s attitude towards developing AGI shifts (possibly due to race dynamics). I think for compute they are currently not on track, for frontier models there is a lag, and the attitude is towards trying not to develop AGI, at least publicly—and, it seems, also privately, as far as we can glimpse. While the Chinese public is more techno-optimistic than the US public, the CCP leans towards engineers rather than politicians, and its senior advisors in AI are AI-pilled.
The current recession in China has a complex set of causes, but it’s a mix of politics and economics, and politics is quite slow to budge. I don’t want to get too much into it, but the banking sector is stretched thin, with many workers unable to pay back mortgages on apartments which were never completed, because real-estate developers built too much and ended up holding the bag with many unsold apartments—most of them second apartments, so not necessities but “investments”. This is causing a loop of bankruptcies which is hard to stop, and it has led to overall pessimism about the future. Lowering interest rates and making money available to banks has made loans available, but people are skeptical of taking them due to what they perceive as an uncertain future. The CCP is likely to work on things which make the future more certain—large infrastructure projects such as bridges and dams, as it has historically done—at least for some time. Nuclear power plants and hydroelectric dams definitely will qualify, but enormous compute clusters (using which chips? overpriced smuggled ones?) likely will not.
That is not to say that, if it seems like the US is racing towards AGI and reaping benefits from advanced AI, China will not put all the resources of a centralized government into catching up—and that can be quite a few resources, since they can commandeer private enterprise or property to do so. If the countries of the world play it sane, actually negotiate international limits, and meet China where they want to be met (the CCP has many reasons not to want AGI), I do not expect China to be a direct threat to existence.
The recession also makes China more likely to blame bad economic results on foreign influence, and perhaps more likely to stoke international conflicts directly. I am personally not likely to want to live in a country bordering China in the next 10 years. How this will influence AGI is tough to predict—more resources spent on war means less on AI development, unless AI development is essential for a warfare edge, in which case we should expect a boom in AI development. The earlier the conflict happens, the less likely AI is to play a major role in warfare.
I agree with the spirit of what you are saying, but I want to register a desire for “long timelines” to mean “>50 years” or “after 2100”. In public discourse, hearing Yann LeCun say something like “I have long timelines, by which I mean, no crazy event in the next 5 years”—it’s simply not what people think when they hear “long timelines”, outside of the AI sphere.
Hi! Thanks for the kind words and for sharing your thought process so clearly! I am also quite happy to see discussions on PIBBSS’ mission and place in the alignment ecosystem, as we have been rethinking PIBBSS outbound comms since the introduction of the board and executive team.
Regarding the application selection process:
Currently (scroll down to see stages 1-4), it comes down to having a group of people who understand PIBBSS (in addition to the Board, this would be alumni, mentors, and people who have worked with PIBBSS extensively before) looking through CVs, letters of motivation, and later work trials in the form of research proposals and research consolidation. After that, we do interviews and mentor-matching and then make our final decision. This has so far worked for our scope (as we grew in popularity, we also raised our bar, so the number of people passing the first selection stage has stayed the same through the past two years). So, it works, but if we were to scale the Fellowship (not obvious if we would like to do so), this system would need to become more robust.

For Affiliates, the selection process is different, focusing much more on a proven track record of excellent research, and due to the very few positions we can offer, it is currently a combination of word-of-mouth recommendations and very limited public rounds. This connects with the project we started internally, “Horizon Scanning”, which produces reports on different research agendas and finds interesting researchers in the field who may make for great Affiliates. The first report should be out in the next month, so we will see how this interacts and how useful the reports are to the community (and to the fields which we hope to bridge with AI Safety). Again, as we scale, this will require rethinking.
Thank you again for the write-up and your support! Huge thanks also to all the commenters here; we really appreciate the thoughtful discussion!
A “Short-term Honesty Sacrifice”, “Hypocrisy Gambit”, something like that?
Quick thoughts, not fully fledged, sorry.
Maybe it depends on the precise way you see the human take-over, but some benefits of Stalin over Clippy include:
Humans have to sleep, have biological functions, and need to be validated and loved, etc., which is useful for everyone else.
Humans also have a limited life span, and their progeny has a decent random chance of wanting things to go well for everyone.
Humans are mortal and possess one body which can be harmed if need be, making them more likely to cooperate with other humans.