cousin_it
Can’t comment much on the trans stuff, but the main thing I wanna say is that if you were lonely in high school, it wasn’t your fault. Don’t blame yourself for it. Society should do a much better job at making schools more accepting, or sorting kids to schools where they’ll be accepted, or at minimum just not forcing them to be there all the time. School does serve a purpose, but it’s still a miserable place for too many of the children confined in it, and that should be fixed.
In any case it’s great that you didn’t get hung up on “improving social skills” somewhere that didn’t accept you, and instead found a group that accepted you. This is the only real way, I think. Next I’d encourage you to find more such groups and live a fun life between them, unless of course you’re doing that already :-)
Hmm. In all your examples, Albert goes against “goodness” and ends up with less “yumminess” as a result. But my point was about a different kind of situation: some hypothetical Albert goes against “goodness” and actually ends up with more “yumminess”, but someone else ends up with less. What do you think about such situations?
I agree that the distinction is important. However, my view is that a lot of what you call “goodness” is part of society’s mechanism to ensure cooperate/cooperate. It helps other people get yummy stuff, not just you.
You can of course free yourself from that mechanism, and explicitly strategize how to get the most “yumminess” for yourself without ending up broke/addicted/imprisoned/etc. If the rest of society still follows “goodness”, that leads to defect/cooperate, and indeed you end up better off. But there’s a flaw in this plan: the same move is open to everyone, and if enough people make it, society slides from cooperate/cooperate to defect/defect, which is worse for everyone.
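For concreteness, here’s the payoff picture I have in mind, as a toy Python sketch (the numbers are just the standard illustrative Prisoner’s Dilemma values, nothing more):

```python
# Toy Prisoner's Dilemma payoffs: (my yumminess, their yumminess).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # where "goodness" keeps society
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),  # the lone strategizer's win...
    ("defect",    "defect"):    (1, 1),  # ...until everyone else defects too
}

for (me, them), (my_yum, their_yum) in PAYOFFS.items():
    print(f"me: {me:9} them: {them:9} -> my yumminess {my_yum}, theirs {their_yum}")
```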
Thanks for the suggestion! I went over the corrigibility paper again. The “utility indifference” proposal in the paper is similar to mine. Then in section 4.2 it says that the proposal is vulnerable to a “managing the news” problem, and that spooked me into deleting my post for a while.
Then I thought some more and restored the post again, because I no longer see why Bob would want to “manage the news”, e.g. ask Carol to bump into Alice and press the button if there’s a jam on Abbey Road and so on. My setup doesn’t seem to incentivize such things.
And then I read some more past discussions, and found that the “managing the news” problem was already solved back then in the same simple way, so my post is nothing new. Again back to drafts.
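For reference, here’s the generic shape of the utility-indifference idea as I understand it; this is a loose paraphrase in Python with my own names, not the paper’s exact formalism:

```python
# Loose sketch of "utility indifference" (my names, my reading of the
# corrigibility paper -- not its exact formalism). The agent optimizes
# the normal utility if the shutdown button is never pressed, and the
# shutdown utility plus a compensation constant if it is. The constant
# is chosen so that, under the agent's current expectations, a press
# leaves its expected value unchanged -- so it gains nothing from
# causing, preventing, or "managing the news" about the press.
def effective_utility(outcome, pressed, u_normal, u_shutdown,
                      exp_normal_if_unpressed, exp_shutdown_if_pressed):
    compensation = exp_normal_if_unpressed - exp_shutdown_if_pressed
    if pressed:
        return u_shutdown(outcome) + compensation
    return u_normal(outcome)
```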
Hmm. Maybe not inoculation exactly, but the trope of creating an external enemy to achieve unity at home seems pretty popular (e.g. Watchmen, HPMOR) and it’s usually done by villains, so that doesn’t fill me with confidence.
I don’t think that violates free trade. Trump may think so, but that’s on him.
Putting a tariff on foreign cars certainly violates free trade, because it discriminates between domestic and foreign sellers. But requiring e.g. catalytic converters on all cars sold in your country, domestic and foreign alike, is okay. Banning leaded gasoline in your country is likewise okay, as long as you don’t discriminate on the origin of that gasoline. Countries should be allowed to pass laws like that.
ETA: looking at actual history, it seems different European countries banned leaded gasoline at different times, and the EU was already well established by then. Which seems to confirm my point.
I don’t agree with this. In my mind there’s a pretty clear line between good and evil in AI-related matters, it goes something like this:
- If you don’t want anyone to have AI, you’re probably on the side of good.
- If you want everyone equally to have AI, you may also be on the side of good, though there’s a factual question of how well that will work out.
- But if you think that you and your band of good guys should have AI, but they and their band of bad guys shouldn’t—or at least, your band should get world domination first, because you’re good—then in my mind this crosses the line. It’s where bad things happen. And I don’t really make an exception if the “good guys” are MIRI, or OpenAI, or the US, or whichever group.
Isn’t the obvious solution to allow only early-screened eggs to be sold in Germany, no matter where they came from? And similar for other kinds of goods that can be made in unethical or polluting ways: require both domestic producers and importers to prove that the goods were produced ethically/cleanly/etc. And this doesn’t require a shared policy between many countries, each country can impose such rules on its own.
Hi Felix! I’ve been thinking about the same topics for a while, and came to pretty much the opposite conclusions.
> most humans, who do have some nonzero preference for being altruistic along with their other goals
No nononono. So many people make this argument, and it’s so wrong to me.
The thing is: altruistic urges aren’t the only “nonzero urges” that people have. People also have an urge to power, an urge to lord it over others. And for a lot of people it’s much stronger than the altruistic urge. So a world where most people are at the whim of “nonzero urges” of a handful of superpowerful people will be a world of power abuse, with maybe a little altruism here and there. And if you think people will have exit rights from the whims of the powerful, unfortunately history shows that it won’t necessarily be so.
> advanced AI can plausibly allow you to make cheap, ultra-destructive weapons… until we hit a point where a few people are empowered to destroy the world at the expense of everyone else
I think we’ll never be at a point where a handful of people can defeat the strongest entities. Bioweapons are slow; drone swarms can be stopped by other drone swarms. I can’t imagine any weapon at all that would allow a terrorist cell to defeat an army of equal tech level. Well, maybe if you have a nanotech-ASI in a test tube, but we’re dead before then.
It is however possible that a handful of people can harm the strongest entities. And that state of affairs is desirable. When the powerful could exploit the masses with impunity in the past, they did so. But when firearms got invented, and a peasant could learn to shoot a knight dead, the masses became politically relevant. That’s basically why we have democracy now: the political power of the masses comes from their threat-value. (Not economic value! The masses were always economically valuable to the powerful. Without threat-value, that just leads to exploitation. You can be mining for diamonds and still be a slave.) So the only way the masses can avoid a world of total subjugation to the powerful in the future is by keeping threat-value. And for that, cheap offense-dominant weapons are a good thing.
> Even though the U.S has unbelievable conventional military superiority to North Korea, for instance, the fact that they have nuclear weapons means that we cannot arbitrarily impose our preferences about how North Korea should act onto them… Roughly speaking, you can swap out “U.S” and “North Korea” with “Optimizers” and “Altruists”.
Making an analogy with altruism here is strange. North Korea is a horrifying oppressive regime. The fact that they can use the nuke threat to protect themselves, and their citizens have no analogous “gun” to hold to the head of their own government, is a perfect example of the power abuse that I described above. A world with big actors holding all threat-power will be a world of NKs.
> But I don’t believe that inequality is intrinsically problematic from a welfare perspective: it’s far more important that the people at the bottom meet the absolute threshold for comfort than it is for a society’s Gini coefficient to be lower.
There’s a standard response to this argument: namely, inequality of money always tries to convert itself into inequality of power, through lobbying and media ownership and the like. Those at the bottom may have comfort, but that comfort will be short lived if they don’t have the power to ensure it. The “Gini coefficient of power” is the most important variable.
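(To pin down the term I’m borrowing: the Gini coefficient is just the mean absolute difference between all pairs, divided by twice the mean. A toy Python computation with made-up numbers:)

```python
# Toy Gini coefficient: mean absolute difference over twice the mean.
# 0 means perfect equality; values near 1 mean extreme concentration.
def gini(xs):
    n = len(xs)
    mean = sum(xs) / n
    mad = sum(abs(a - b) for a in xs for b in xs) / (n * n)
    return mad / (2 * mean)

print(gini([1, 1, 1, 1]))    # 0.0  -- perfectly equal
print(gini([0, 0, 0, 100]))  # 0.75 -- highly concentrated
```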
So yeah, to me these all converge on a pretty clear answer to your question. Concentration of power, specifically of threat-power and offense-power, would be very bad. Spreading it out would be good. That’s how the world looks to me.
I agree this distinction is very important, thank you for highlighting it. I’m in camp B and just signed the statement.
It seems to me that such “unhealthiness” is pretty normal for labor and property markets: when I read books from different countries and time periods, the fear of losing one’s job and home is a very common theme. Things were easier in some times and places, but these were rare.
So it might make more sense to focus on reasons for “unhealthiness” that apply generally. Overregulation can be the culprit in today’s US, but I don’t see it applying equally to India in the 1980s, Turkey in the 1920s, or England in the early 1800s (these are the settings of some books on my shelf whose protagonists had very precarious jobs and housing). And even if you defeat overregulation, the more general underlying reasons might still remain.
What are these general reasons? In the previous comment I said “exploitation”, but a more neutral way of putting it is that markets don’t always protect one particular side. Markets are two-sided: there’s no law of economics saying a healthy labor market must be a seller’s market, while housing must be a buyer’s market. Things could just as easily go the other way. So if we want to make the masses less threatened, it’s not enough to make markets more healthy overall; we need to empower the masses’ side of the market in particular.
I think questions of power differences between the “elites” and the “masses” are very relevant to the AI transition, both as a model for intuitions and as a way to choose policy directions now, because AI will tend to amplify and lock in these power differences, and at some point it’ll be too late. For more context, see these comment threads of mine: 1, 2, 3, or this book review.
Yeah, I wouldn’t have predicted this response either. Maybe it’s a case of something we talked about long ago—that if a person’s “true values” are partly defined by how the person themselves would choose to extrapolate them, then different people can end up on very diverging trajectories. Like, it seems I’m slightly more attached to some aspects of human experience that you don’t care much about, and that affects the endpoint a lot.
> Despite our superior technology, there are many things that Western countries could do in the past that we can’t today—e.g. rapidly build large-scale infrastructure, maintain low-crime cities, and run competent bureaucracies.
Why do you focus on these problems? I mean, sure, the average person in the West can feel threatened by crime, infrastructure decay, or incompetent bureaucracy. But they live every day under much bigger threats, like the threat of losing their job, getting evicted, getting denied healthcare, or getting billed or fee-d into poverty. These seem to be the biggest societal (non-health, non-family) threats for our hypothetical average person. And the common pattern in these threats isn’t decay or incompetence, it’s exploitation by elites.
That tweet doesn’t sound right to me. Or at least, to me there’s a simpler and more direct explanation of bubbles in terms of real resources, without having to mention money supply or central banks at all.
During a bubble, people are having fun because resources are being misallocated: misallocated to their fun. Some rich chumps are throwing their resources at something useless, like buying tulips. That bankrolls the good times for everyone else: the tulip-growers, the hairdressers that serve the tulip-growers, and so on. But at some point the rich chumps realize that tulips aren’t that great, and that they burned their resources just to make a big bonfire and keep everyone warm for a while. When they realize that, the tulip-growers will lose their jobs, and then the hairdressers who served them, and so on. That’s the pain of the bubble ending, and it’s unavoidable, central bank or no.
(This thread is getting a bit long, and we might not be convincing each other very much, so hope it’s ok if I only reply with points I consider interesting—not just push-pull.)
With the concert pianist thing I think there’s a bit of a type error going on. The important skill for a musician isn’t having fast fingers, it’s having something to say. Same as: “I’d like to be able to write like a professional writer”—does that mean anything? You either have things you want to write in the way that you want to write, or there’s no point being a writer at all, much less asking an AI to make you one. With music or painting it’s the same. There’s some amount of technique required, but you need to have something to say, otherwise there’s no point doing it.
So with that in mind, maybe music isn’t the best example in your case. Let’s take an area where you have something to say, like philosophy. Would you be willing to outsource that?
Well, there’s no point in asking the AI to make me good at things if I’m the kind of person who will just keep asking the AI to do more things for me! That path just leads to the consumer blob again. The only alternative is if I like doing things myself, and in that case why not start now. After all, Leonardo himself wasn’t motivated by the wish to become a polymath; he just liked doing things and did them, even when they’re a bit difficult (“chores”).
Anyway that was the theoretical argument, but the practical argument is that this isn’t what’s being offered now. We started talking about outsourcing the task of understanding people to AI, right? That doesn’t seem like a step toward Leonardo to me! It would make me stop using a pretty important part of my mind. Moreover, it’s being offered by corporations that would love to make me dependent, and that have a bit of a history of getting people addicted to stuff.
There’s no “line” per se. The intuition goes something like this. If my value system is only about receiving stuff from the universe, then the logical endpoint is a kind of blob that just receives stuff and doesn’t even need a brain. But if my value system is about doing stuff myself, then the logical endpoint is Leonardo da Vinci. To me that’s obviously better. So there are quite a lot of skills—like doing math, playing musical instruments, navigating without a map, or understanding people as in your example—that I want to exercise myself even if there are machines that could do all of it for me cheaper and better.
This seems like one-shot reasoning though. If you extend it to more people, the end result is a world where everyone treats understanding people as a chore to be outsourced to AI. To me this is somewhere I don’t want to go; I think a large part of my values are chores that I don’t want to outsource. (And in fact this attitude of mine began quite a few steps before AI, somewhere around smartphones.)
I think this is very culturally dependent. For example, wars of conquest were considered glorious in most places and times, and that’s pretty much the ultimate form of screwing over other people. Or for another example, the first orphanages were built by early Christians; before that, orphans were usually disposed of. Or recall how common slavery and serfdom have been throughout history.
Basically my view is that human nature without indoctrination into “goodness” is quite nasty by default. Empathy is indeed a feeling we have, and we can feel it deeply (...sometimes). But we ended up with this feeling mainly due to indoctrination into “goodness” over generations. We wouldn’t have nearly as much empathy if that indoctrination hadn’t happened, and it probably wouldn’t stay long term if that indoctrination went away.