cousin_it
Yeah, I was talking more about finding a real life group. Finding an online group is much less useful.
Wait, if we can be confused about whether a property is perfect or imperfect—then why do we assert (in axioms 4 and 6) that some specific properties are perfect? What if they’re also impossible, like the perfect tuning?
It’s nice that we got to the notion of logical possibility though. It’s familiar ground to me.
Let’s talk, for example, about mathematical properties of musical intervals. When the major scale C D E F G A B is played on a just-intonation instrument, most pairwise ratios of frequencies are nice and simple: 2⁄3, 15⁄16, all that. But not all of them: the interval from D to F, for instance, comes out as an uglier 27⁄32, unpleasant both numerically and to the ear. This raises the tantalizing possibility of a perfect tuning: adjusting the frequencies a little bit so that all pairwise ratios are nice, with no exceptions. The property of a tuning being perfect can be described mathematically.
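(A quick sketch in Python, in case anyone wants to see the numbers: it just lists every pairwise ratio of the scale, assuming the usual 5-limit just-intonation values for the degrees: 1, 9⁄8, 5⁄4, 4⁄3, 3⁄2, 5⁄3, 15⁄8 relative to C.)

```python
from fractions import Fraction
from itertools import combinations

# Assumed 5-limit just-intonation major scale, as frequency ratios relative to C.
scale = [
    ("C", Fraction(1, 1)),
    ("D", Fraction(9, 8)),
    ("E", Fraction(5, 4)),
    ("F", Fraction(4, 3)),
    ("G", Fraction(3, 2)),
    ("A", Fraction(5, 3)),
    ("B", Fraction(15, 8)),
]

# Print each pairwise ratio as lower frequency over higher frequency,
# matching the 2/3, 15/16, 27/32 notation above.
for (lo_name, lo), (hi_name, hi) in combinations(scale, 2):
    print(f"{lo_name}-{hi_name}: {lo / hi}")
```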
Unfortunately, it can also be shown mathematically that a perfect tuning can’t exist. What does that mean in light of your Axiom 3? Must there be a “possible world”, or “logically possible world”, where mathematics is different and a perfect tuning exists? Or is this property unworthy of being called perfect? But what if we weren’t as good at math, and hadn’t yet proved that a perfect tuning is unachievable: would we call the property perfect then? What does your framework say about this example?
I guess this time I spoke too soon! Indeed if we talk about logical possibility, then we “only” need to prove that the imagined world isn’t contradictory in itself. Which is also hard, but easier than what I said.
Yeah. Or rather, I guess modal logic can describe the world—but only if you meet its very strict demands. For example, to say something is “possible”, one must prove that no contradiction can be found between that thing and all the evidence known so far to either the speaker or the listener. If that requirement is met, then modal logic will give the right answers, at least until new evidence comes along :-)
If the axiom refers to a notion of “possibility” purely within the misty abstract world of modal logic, then sure, I agree. But then the “God” whose existence is thus proved also resides in that misty world, not in ours. For the proof to pertain to our world, the notion of “possibility” in the axiom must correspond to the notion of possibility that we humans have. And understood that way, the axiom can be wrong, and is wrong.
Axiom 3 is wrong. If there are facts about what’s possible or not, then these facts must be proved; pleading that “surely it must at least be possible” doesn’t cut it. Surely the chicken must at least be white, but he ain’t. This flaw has been known for centuries; I tried to write a short explanation some time ago too.
Do you think AI-empowered people / companies / governments also won’t become more like scary maximizers? Not even if they can choose how to use the AI and how to train it? This seems like a super strong statement, and I don’t know of any reason to believe it at all.
At 2:38 in this video. The whole series is worth spending a few days watching while neglecting food or sleep, if your sense of humor is anything like mine.
Good stuff! Loved the “upgrade your subscription” bit; it was like a funny version of SCP’s “data expunged”.
The first few lines were almost the Freeman’s Mind joke:
DWARKESH: Thanks for coming onto the podcast. It’s great to have you—
BAT: It’s great to have me. Yeah.
And the personality of the bat just interrupting Dwarkesh all the time was really nicely done, too.
The characteristic feature of the loser is to bemoan, in general terms, mankind’s flaws, biases, contradictions, and irrationality—without exploiting them for fun and profit
I don’t understand this quote. For example, let’s say I’m a loser bemoaning the fact that people are easy to scam. How should I exploit that for fun and profit? Scam people?
I think if you have a set of books and audiobooks that are a gentle ramp from your current level, you can basically spend an hour a day reading and listening with pretty low effort, and the other skills will grow automatically: human languages are a bit magical that way. But building a set of materials with a gentle enough ramp is the hard part.
I also disagree with the post (sorry!). There are two variables: what kind of practice you do, and how long you do it. I’ve long felt that the most efficient kind of practice for many skills is some kind of imitation or immersion, improving you along multiple dimensions at once: for example, for music it would be jamming rather than learning pieces, and for foreign language learning it would be listening to audiobooks rather than doing Duolingo. So in that respect I agree with you. But when it comes to the learning schedule, I’ve found that doing this kind of multidimensional practice for half an hour a day can lead to very fast improvement, because the brain uses the downtime to consolidate things. It’s almost magical how you come back to practice the next day and realize that you got better. There’s just no need to do full-time immersion; if the method of practice is chosen right, the return per hour of practice will be much higher if you allow plenty of downtime between sessions, and you’ll also have time to do other things.
I always thought that the best depiction of a heroic woman is when Ripley goes to save Newt. It’s not about being super strong and confident: she knows she might die, but she goes anyway.
It’s not quite parenthood though, as Newt wasn’t her daughter (same as Ellie wasn’t Joel’s daughter).
Some common distortions here include: … In many traditions, marriage isn’t primarily about two people’s happiness together
Marriage is just a mechanism, though; people use it for different purposes. It’s true that marrying for some purposes is a bad sign, e.g. marrying for money. But marrying for happiness vs. marrying for kids (for example) both seem fine to me; neither is a “distortion”.
Yeah! It’s great that someone noticed it and spelled it out.
The same is true for music, at least in my experience. I’m in a group of friends who like to write songs and play them together. Well, their quick throwaway tunes almost always sound more fun to me than their high-effort stuff. And they like my quick throwaway tunes more than my high-effort stuff. It’s gone on like this for a while.
Maybe that’s one reason why lots of practice is needed: to make my window of “fast, unedited work” longer.
Often, a talented musician vibing out a couple melodies + layers of accompaniment makes “better” video game music than that same musician spending ages to craft a rich and complex piece of classical music with developed themes.
I’d guess that some musicians are just naturally better at “folk” music (focused on repetition) while others are better at “serious” music (focused on development), and people often aspire to do something other than what they’re naturally talented at.
Heck, sometimes people don’t even like what they’re talented at! There have been people who were world-class at something but hated every minute of it, like Douglas Adams with writing.
I don’t believe that AI companies today are trying to build moral AIs. An actually moral AI, when asked to generate some slop to gunk up the internet, would say no. So it would not be profitable for the company. This refutes the “alignment basin” argument for me. Maybe the basin exists, but AI companies aren’t aiming there.
Ok, never mind alignment, how about “corrigibility basin”? What does a corrigible AI do if one person asks it to harm another, and the other person asks not to be harmed? Does the AI obey the person who has the corrigibility USB stick? I can see AI companies aiming for that, but that doesn’t help the rest of us.
Yeah, exactly this. When people get a lot of power, they very often start treating those below them worse. So AIs that are trained on imitating people might also turn out like that. On top of that, I expect companies to tweak their AIs in ways that optimize for money, and this can also go bad when AIs get powerful. So we probably need AIs that are more moral than most people, and trained by organizations that don’t have a money or power motive.
I think this was in the Sequences, the notion of “optimization process”. Eliezer describes here how he realized this notion is important, by drawing a line through three points: natural selection, human intelligence, and an imaginary genie / outcome-pump device.