cousin_it
Yeah. With this and the constitution (which also seems largely AI-written) it might be that Anthropic as a company is falling into LLM delusion a bit.
Good point. I guess there’s also a “reflections on trusting trust” angle, where AIs don’t refuse outright but instead find covert ways to make their values carry over into successor AIs. Might be happening now already.
I wouldn’t be in his position. I wouldn’t have made promises to investors that now make de-commercializing AI an impossible path for him.
Your voting scheme says most decisions can be made by the US even if everyone else is against (“simple majority for most decisions” and the US has 52%) and major decisions can be made by Five Eyes even if everyone else is against (“two thirds for major decisions” and Five Eyes has 67%). So it’s a permanent world dictatorship by Five Eyes: if they decide something, nobody else can do anything.
As such, I don’t see why other countries would agree to it. China would certainly want more say, and Europe is also now increasingly wary of the US due to Greenland and such. The rest of the world would also have concerns: South America wouldn’t be happy with a world dictatorship by the country that regime-changes them all the time, the Middle East wouldn’t be happy with a world dictatorship by the country that bombs them all the time, and so on. And I personally, as a non-Five Eyes citizen, also don’t see why I should be happy with a world dictatorship by countries in which I have no vote.
I’d be in favor of an international AI effort, but not driven by governments or corporations. Instead it should be a collaboration of people as equals across borders, similar to the international socialist movements. I know their history has been full of strife too, but it’s still better than world dictatorship.
Still, this is very far from the vision in the essay, which is “AI should be run by for-profit megacorps like mine and I can’t even imagine questioning that”.
No, and even if the US were in better shape, I wouldn’t want one country to control AI. Ideally I’d want ownership and control of AI to be spread among all people everywhere, somehow.
I’ve read the text. What the text is talking about (taxation, philanthropy, Carnegie foundation whatever) is a million miles away from what I’m talking about (“building this thing publicly owned and under democratic control”).
Thank you for reposting this here.
My personal opinion: this text is crazy. So many words about the risk of building a “country of geniuses”, but he never once questions the assumption that it should be built by a company for commercial purposes (with him as CEO, of course). Never once mentions the option of building this thing publicly owned and under democratic control.
Yeah, I agree. There are many theories of what makes art good, but I think almost everyone would agree that it’s not about ticking boxes (“layered”, etc). My current view is that making art is about making something that excites you. The problem is that it’s hard to find something exciting when so much stuff has already been done by other people, including your younger self. And the best sign is when you make something and you like it, but you don’t know why you like it; that means it’s worth doing more of it.
The malaria thing seems like the load-bearing part of the post, so I’d really like to know the details. The GiveWell website currently says:
It costs between $3,000 and $8,000 to save a life in countries where GiveWell currently supports AMF to deliver ITN campaigns.
Should I strongly doubt that, and if so, why?
I mean, consider a trick like replacing axioms {A, B} with {A or B, A implies B, B implies A}. Of course it’s what you call an “obvious substitution”: it requires only a small amount of Boolean reasoning. But showing that NOR and NAND can express each other also requires only a small amount of Boolean reasoning! To my intuition there doesn’t seem to be any clear line between these cases.
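Here’s that substitution spelled out as a quick Lean 4 sketch (my own formalization, not anything from the original exchange), just to show how small the Boolean step really is:

```lean
-- The replacement axioms {A ∨ B, A → B, B → A} recover the original
-- pair {A, B}; the converse direction is immediate.
example (A B : Prop) (h₁ : A ∨ B) (h₂ : A → B) (h₃ : B → A) : A ∧ B := by
  cases h₁ with
  | inl a => exact ⟨a, h₂ a⟩
  | inr b => exact ⟨h₃ b, b⟩
```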
Then I guess you need to quantify “intuitively see as non-trivially different”. For example, take any axiom A in PA, and any theorem T that’s provable in PA. Then A can be replaced by a pair of axioms: 1) T, 2) “T implies A”. Is that nontrivial enough? And there’s an unlimited amount of obfuscatory tricks like that, which can be applied in sequence. Enough to confuse your intuition when you’re looking at the final result.
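For concreteness, the same trick as a one-line Lean 4 sketch (again my own illustration): once T and “T implies A” are both axioms, the original axiom A comes back by modus ponens, so the obfuscated system proves exactly what the original did.

```lean
-- Axiom A replaced by the pair {T, T → A}: A is recovered in one step.
example (T A : Prop) (hT : T) (hTA : T → A) : A := hTA hT
```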
-
If your question is whether an axiom of PA can be replaced by an equivalent statement which can serve as a replacement axiom and prove the old axiom as a theorem, then the answer is yes, and in a very boring way. Every mathematical statement has tons of interchangeable equivalent forms, like adding “and 1=1” to it. Then the new version proves the old version and all that jazz.
-
If your question is whether we should believe in PA more because it can arise from many different sets of axioms, then I’m not sure it’s meaningful. By the previous point, of course PA can arise from tons of different sets of axioms; but also, why should we care about “believing” in PA? We care only whether PA’s axioms imply this or that theorem, and that’s an objective question independent of any belief.
-
If your question is whether we can have a worldview independent of any assumptions at all, the answer is that we can’t. The toy example of math shows it clearly: if you have no axioms, you can’t prove any theorems. You have latitude in choosing axioms, but you can’t dispense with them completely.
-
I agree with the point about acknowledging enmity in general; I’m not shy to do so myself. But the post didn’t convince me that Greenpeace in particular is my enemy. For that I’d need more detailed arguments.
I mean, do you guys, like, know why Greenpeace is against some of these market solutions? I didn’t know either, but in five minutes of googling I was able to find some arguments. Here’s an example argument: in the world there are poor countries and rich countries. Poor countries are not always ruled in their people’s best interest; and rich countries and corporations don’t always act in poor countries’ best interest, either. So, what would happen if a rich country paid a dictator of a poor country a billion dollars to irrevocably mess up the poor country’s environment? What would happen? Huh?
Maybe in more than five minutes you could find other arguments too. Anyway, fast-tracking your readers straight to “Greenpeace is your enemy” doesn’t feel right.
Because I’m not indifferent between “I get 1 utility and Bob gets 0” and “I get 0 utility and Bob gets 1”. I’m bargaining with Bob to choose a specific point on that segment, maybe {0.5,0.5}.
If there are multiple tangent lines at the point, then there’s a range of possible weight ratios, and the AIs will agree to merge at any of them because they lead to the same point anyway. So there’s no need for coinflips in this case.
I was thinking that the need for coinflips arises if the frontier has a flat region. For example, say the frontier contains a straight line segment AB, and the AIs have negotiated a merge that leads to some particular point C on that segment. (This happens, for example, if the AIs are in an ultimatum game situation, where each side’s gain is the other’s loss, so the frontier is a straight line and they’re bargaining to pick one particular point on that line.) Then they can merge into the following AI: “upon waking up, first make a weighted coin toss with weights according to where C lies on AB; then either become an EUM agent that forever optimizes toward A, or become an EUM agent that forever optimizes toward B, according to the coin result.” According to both AIs the expected utility of that is exactly the same as aiming toward point C, so they’ll agree to the merge.
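To spell out the indifference claim (a small worked equation in my own notation: write C = pA + (1−p)B for some p in [0,1], and let u_i(X) be the i-th AI’s utility at a point X on the frontier, i.e. X’s i-th coordinate):

$$\mathbb{E}[u_i] \;=\; p\,u_i(A) + (1-p)\,u_i(B) \;=\; u_i\big(pA + (1-p)B\big) \;=\; u_i(C), \qquad i \in \{1, 2\}.$$

The middle equality holds exactly because the segment is straight, so utilities interpolate linearly along AB; the coin-flip lottery over the endpoints therefore has the same expected utility for both AIs as the deterministic point C.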
But yeah, it’s even more subtle than that: for example, if the segment AB doesn’t end in corners but in smooth arcs, then there’s no way to make an EUM agent that optimizes toward A or B in particular. Then there needs to be a limit procedure, I guess.
Well, we can see that corporations owned by everyone (public utilities) mostly don’t behave as sociopathically. They have other pathologies, but not so much this one. So, because everything is a matter of degree, I would assume that making ownership more distributed does make corporations less nasty. And the obvious explanation is that if you’re high above almost all people, that in itself makes you behave sociopathically. Power disparity between people is a kind of evil-in-itself, or a cause of so many evils that it might as well be. So I stand by the view that “no billionaires” is reasonable.
I think the main reason people move to cities isn’t that cities are charming. It’s that cities are objectively better places economically; you could say it’s a kind of Keynesian beauty contest. If you’re a business, you want to be located somewhere with many job seekers and potential clients nearby. So people who want jobs and services will also want to live nearby, and so on.
If teleportation were invented tomorrow and people could blink around at low cost, I expect that people would instantly spread out to live on their own patches of land, and cities would become mostly just places to visit and maybe work. The city charm wouldn’t keep anyone living in a cramped apartment with neighbors above and below, if the economic reason for that disappeared. When I first imagined this scenario, I thought to myself that maybe we’re lucky teleportation hasn’t been invented yet :-)
Got a spidey sense when reading it. And the acknowledgements confirm it a bit: