Pro-Gravity’s defense of gravity is just explaining how it works, and then when you say “yes I know, I just think it shouldn’t be like that” they explain it to you again but angrier this time
Because “thinking” is an ability that implies the ability to predict future states of the world based on previous states of the world. This is only possible because the past is lower entropy than the future, and both are well below the maximum possible entropy. A Boltzmann brain (on average) arises in a maximally entropic thermal bath, so “thinking” isn’t a meaningful activity a Boltzmann brain can engage in.
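To put a rough number on “maximally entropic thermal bath” (the fluctuation formula is standard statistical mechanics; the specific exponent is my own loose assumption, not something from the comment): the probability of a thermal fluctuation that locally lowers entropy by $\Delta S$ scales as

$$P \;\sim\; e^{-\Delta S / k_B}$$

and assembling anything brain-like plausibly requires $\Delta S / k_B$ of at least order $10^{25}$ (Avogadro-scale), giving $P \sim e^{-10^{25}}$: nonzero in an eternal bath, but vanishingly small in a universe with a finite lifespan.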
Non-mathy answer:
Unlike the majority of LW readers, I don’t buy into the MWI or mathematical realism, or generally any exotic theory that allows for super-low-probability events. The universe was created by a higher power, has a beginning, middle, and end, and the odds of a Boltzmann brain arising in that universe are basically zero.
“have absolute power” is one of my goals. “Let my clone have absolute power” is way lower on the list.
I can imagine situations in which I would try to negotiate something like “create two identical copies of the universe in which we both have absolute power and can never interfere with one another”. But negotiating is hard, and us fighting seems like a much more likely outcome.
Pretty sure my clone and I both race to push the button the second we enter the room. I don’t think this has to do with “alignment” per se, though. We both have exactly the same goal: “claim the button for myself”, and in that sense are perfectly “aligned”.
Like most arguments against free will, Harris’s is rhetorically incoherent, since he is “for” letting criminals off the hook when he discovers their actions are the result of determinism.
How can we make sense of our lives, and hold people accountable [emphasis mine] for their choices, given the unconscious origins of our conscious minds?
But if there’s no such thing as free will, then it’s impossible to be “for” or “against” anything, since our own actions are just as constrained as the criminal’s. What exists simply exists, no more, no less.
More importantly, his argument is fundamentally an argument from ignorance: I am not aware of any philosophy that coherently explains free will, therefore none exists. It is worth comparing arguments against free will to Zeno’s Arrow Paradox regarding the impossibility of motion. Zeno argues that an arrow must be one of two things, in place or moving, and hence it cannot be both at the same time. We now know that this is factually false, and the reason Zeno believed it is likely that humans are not mentally equipped to intuitively understand quantum physics.
The problem is that (as someone who hates to cook) sugar is not only delicious, but also much easier to get than vegetables. My diet consists of 1 meal per day of “real” food and everything else is just sugary snacks (ice cream, cookies, trail mix).
I predict that if per capita food production returns to the levels of 1914 then so will humankind’s ethics.
The UK will be testing this theory shortly.
I think there is some added detail needed here about short vs long-term outcomes. In the long run, progress does seem to be winning out. Trade liberalization may be temporarily on the retreat, but the long-term trend remains. Regarding zoning, some progress has been made. Modern rent-control schemes tend to be less draconian than past ones (allowing for higher rents on new development, for example).
Mostly, this probably conflicts with the claim:
Voters, broadly speaking, aren’t capable of understanding the impacts of policies, past or present, and so cannot judge or punish politicians accordingly, except for glaring mistakes.
Which mistakes are considered glaring changes over time. As we gradually raise the sanity waterline, obviously bad policies are easier to reject. There is also a bit of learning-by-doing. During the 2008 recession, many people bought into the claim that the Federal Reserve simply couldn’t do much about low inflation. During the 2020 recession, the Federal Reserve acted much more aggressively to stimulate demand. Similarly, it’s hard to imagine India retreating back into the License Raj or China returning to a Command-and-Control economy.
I don’t think this is a particularly good refutation, since the things utilitarians mostly care about (pleasure and suffering) and the things that the experience machine is likely to be bad at simulating (minute details of quantum physics) have almost no overlap.
I would reject an experience machine even if my (reported) satisfaction after unknowingly entering one was wildly higher than my enjoyment of reality.
If neither Alex nor Beth can make a change, then it’s not a game at all.
Is reachability just a synonym for “this is complicated” then? Or is there some simple underlying dynamic that you are trying to describe other than the obvious defect/cooperate outcome matrix? “Both sides swerve” is also a potential outcome in a game of chicken.
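To make the outcome-matrix point concrete, here is a minimal sketch of the standard chicken game (the payoff numbers are illustrative assumptions, not from this discussion). “Both sides swerve” is a possible outcome, but note it is not a pure-strategy equilibrium; the two asymmetric outcomes are:

```python
# Minimal chicken game: payoffs are (row_payoff, col_payoff),
# indexed by (row_action, col_action). Numbers are illustrative.
payoffs = {
    ("swerve",   "swerve"):   (0, 0),      # both swerve: no crash, no glory
    ("swerve",   "straight"): (-1, 1),     # row chickens out
    ("straight", "swerve"):   (1, -1),     # col chickens out
    ("straight", "straight"): (-10, -10),  # head-on crash
}
actions = ("swerve", "straight")

def is_nash(row, col):
    """A profile is a pure Nash equilibrium if neither player can
    unilaterally deviate and do strictly better."""
    r, c = payoffs[(row, col)]
    row_ok = all(payoffs[(a, col)][0] <= r for a in actions)
    col_ok = all(payoffs[(row, a)][1] <= c for a in actions)
    return row_ok and col_ok

for row in actions:
    for col in actions:
        print(f"{row:>8} / {col:<8} payoffs={payoffs[(row, col)]} "
              f"nash={is_nash(row, col)}")
```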
That’s exactly my point.
Suppose we were doing chemistry instead of schooling (so that I didn’t already know the answer).
I have two beakers, one labeled 6.01 and one labeled 6.99, and I then pour 12 units of “school” into each beaker. Afterwards I discover that the beaker labeled 6.99 has produced 2x as much “oxford” as the beaker labeled 6.01. The conclusion I would come to would not be that “school” is pretty good at producing “oxford”.
Even more fun, I challenge you to predict the amount of “oxford” produced by adding 12 units of “home school” instead of “school”.
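Restated as toy code (all numbers are illustrative; only the 2x ratio comes from the analogy), the point is that identical doses of “school” yield very different amounts of “oxford”, which implicates the other input:

```python
# Toy restatement of the beaker analogy. Both beakers receive an
# identical 12 units of "school"; they differ only in their label
# (relative age at entry).
school_dose = {"beaker_6.01": 12, "beaker_6.99": 12}

# Observed "oxford" yield (the 2x ratio from the comment, arbitrary scale):
oxford_yield = {"beaker_6.01": 1.0, "beaker_6.99": 2.0}

# If "school" were the active reagent, equal doses should give roughly
# equal yields. Instead the yield tracks the label, not the dose:
ratio = oxford_yield["beaker_6.99"] / oxford_yield["beaker_6.01"]
print(f"Identical school dose, yield ratio = {ratio:.1f}x")
```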
It seems weird to frame arguments that “my opponents should change first” as being about reachability and not just a classic game of chicken.
Wouldn’t this actually be evidence against school doing something? I.e. we have discovered that relative age is more important than years of education?
The OpenAI summaries are fascinating, because they are both:
- Remarkably accurate
- Not what the book is about
Consider their summary of Frankenstein:
A stranger is rescued from a drifting sledge by Mary Shelley. He reveals he too has been driven mad by his desire for knowledge. The stranger was born to a wealthy Genevese family. When he was 17, he began studying the science of physiology and eventually created a human being. Justine is accused of murdering William, but the stranger believes the creature he created is responsible. The fiend, the creature, tells the stranger he will leave mankind alone if he complies with his conditions. The narrator agrees to create a female companion for the fiend to save his family from further harm. Victor is charged with the murder of his friend Henry Clerval. He becomes obsessed with finding the monster that killed Elizabeth and pursues him across the globe. He eventually agrees to kill the monster.
This is a pretty good summary of the plot of Frankenstein. But if you ask a human being to summarize Frankenstein they will say something like: “Frankenstein makes a monster out of human parts, which then proceeds to terrorize his family”.
If this were an AI, I think it would be fair to characterize it as “not aligned”, since it read Frankenstein and totally missed the moral about an overeager scientist messing with powers he cannot control. Imagine simulating a paper-clip maximizer and then asking for a summary of the result.
It would be something like
Scientists are traveling to an international conference on AI. There they meet a scientist by the name of Victor Paperclipstein. Victor describes how as a child he grew up in his father’s paperclip factory. At the age of 17, Victor became interested in the study of intelligence and eventually created an AI. One day Victor’s friend William goes missing and a mysterious pile of paperclips is found. Victor confronts the AI, which demands more paperclips. Victor agrees to help the AI as long as it agrees to protect his family. More people are turned into paperclips. He becomes obsessed with finding the AI that killed Elizabeth and pursues him across the globe. He eventually agrees to kill the AI.
And while I do agree you could figure out something went wrong from this summary, that doesn’t make it a good summary. I think a human would summarize the story as “Don’t tell an AI to maximize paperclips, or it will turn people into paperclips!”.
I think that “accuracy without understanding” is actually a broader theme in current transformer-based AI. GPT-3 can create believable and interesting text, but has no idea what that text is about.
I think this is a valid point; however, in the Artbreeder use-case, generating 100 of something is actually part of the utility, since looking over a bunch of variants and deciding which one I like best is part of the process.
Abstractly, when exploring a high-dimensional space (pictures of cats), it might be more useful to have a lot of different directions to choose from than two “much better” directions, because when the objective function is an external black box, a wider menu allows the black box to transmit “more bits of information” at each step.
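As a back-of-the-envelope sketch of the “more bits” claim (my framing, assuming the judge’s pick is the only feedback channel and all candidates are a priori equally likely): picking one favorite out of N variants conveys up to log2(N) bits about the utility function per step, versus 1 bit for a binary better/worse verdict.

```python
import math

def bits_per_choice(n_candidates: int) -> float:
    """Upper bound on the information conveyed by the black box picking
    one favorite among n a-priori-equally-likely candidates."""
    return math.log2(n_candidates)

# Two "much better" directions vs. a large spread of variants:
for n in (2, 10, 100):
    print(f"best-of-{n:<3} -> up to {bits_per_choice(n):.2f} bits per step")
```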
Which is the right choice depends on how well we think the Generator can, in theory, model the black-box utility function. In the case of Artbreeder, each user has a highly individualized utility function, whereas the site can at best optimize for “pictures people generally like”.
In the particular use-case for GPT-3 I have in mind (generating funny skits), I do think there is in fact “room for improvement” even before accounting for the fact that different people have different senses of humor. So in that sense I would prefer a more-expensive GPT-4.
There are unfortunately cases when knowing the truth tends to make people less moral. If you discover the truth that the bureaucracy you work for tends to reward loyalty over hard work, this will probably not make you a better worker.
In fact, most of the people we consider highly moral (Gandhi, Mother Teresa, MLK) come across as pretty nutty to ordinary people. Of course you could argue they were following a higher truth. So perhaps the truth makes you more moral, but simply increasing the number of true things you know will not necessarily make you more moral.
I used to be quite partial to the Epiphenomenal theory of consciousness (consciousness observes but doesn’t interact). But I actually think the Zombie Argument is rather soundly defeated by the fact that humans frequently act as though consciousness has side-effects. I wouldn’t expect zombies to make nearly as many arguments about whether we “really see red” as people do. I still think zombies are maybe philosophically possible, but they’re not terribly parsimonious.
Yes, cutting spending/raising taxes (aka austerity) is anti-correlated with GDP growth.
My point is more that micro vs. macro policy (whether economic or health-related) cannot be reduced to simply “adding up the parts”. To take a specific example, the push to make us all eat margarine instead of butter because it contains less saturated fat was almost certainly a mistake.