I call these people “credit takers”, and at some point I realized I was one of them. I was teaching some people programming; some of them became successful and I felt good about it. But then I remembered that their talent was noticeable from the start, so my teaching might not have made much difference. I was just credit-taking.
cousin_it (Vladimir Slepnev)
Then people should be asked before the fact: “if you upload code to our website, we can use it to train ML models and use them for commercial purposes, are you ok with that?” If people get opted into this kind of thing silently by default, that’s nasty and might even be sue-worthy.
Mechanically, an opt-out would be very easy to implement in software. One could essentially just put a line saying
I’m not sure it’s so easy. Copilot is a neural network trained on a large dataset. Making it act as if a certain piece of data wasn’t in the training set requires retraining it, and that retraining would have to happen every time someone opts out.
At some point I hoped that CFAR would come up with “rationality trials”: toy challenges that are difficult to game and transfer well to some subset of real-world situations. Something like boxing, or solving math problems, but a new entry in that list.
Without nanotech or anything like that, maybe the easiest way is to manipulate humans into building lots of powerful and hackable weapons (or just wait since we’re doing it anyway). Then one day, strike.
Edit: and of course the AI’s first action will be to covertly take over the internet, because the biggest danger to the AI is another AI already existing or being about to appear. It’s worth taking a small risk of being detected by humans to prevent the bigger risk of being outraced by a competitor.
How can photonics work without matter? I thought the problem was that you couldn’t make a switch, because light waves just pass through each other (the equations are linear, so the sum of two valid waves is also a valid wave).
Sethares’ theory is very nice: we don’t hear “these two frequencies have a simple ratio”, we hear “their overtones align”. But I’m not sure it is the whole story.
If you play a bunch of sine waves in ratios 1:2:3:4:5, it will sound to you like a single note. That perceptual fusion cannot be based on aligning overtones, because sine waves have no overtones. Moreover, if you play 2:3:4:5, your mind will sometimes supply the missing 1; this is the well-known “missing fundamental” effect. And if you play some sine waves slightly shifted away from 1:2:3:4:5, you’ll notice the inharmonicity (at least, I do). So we must have some facility for noticing simple frequency ratios that is not based on overtone alignment, and our perception of chords probably uses this facility too, not only overtone alignment.
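The stimuli above are easy to construct. A minimal numpy sketch (the function name and the 441 Hz base frequency are my own choices, picked so the period is a whole number of samples):

```python
import numpy as np

def harmonic_chord(f0, ratios, sample_rate=44100, duration=1.0):
    """Sum of pure sine waves at the given ratios of a base frequency f0."""
    t = np.arange(int(sample_rate * duration)) / sample_rate
    return sum(np.sin(2 * np.pi * f0 * r * t) for r in ratios)

# Partials in ratios 1:2:3:4:5 fuse perceptually into one note at f0.
fused = harmonic_chord(441.0, [1, 2, 3, 4, 5])

# Dropping the fundamental (2:3:4:5) still implies a pitch at f0:
# the signal repeats with period 1/f0 even though no partial sits there.
missing = harmonic_chord(441.0, [2, 3, 4, 5])
```

Both signals are periodic with the fundamental’s period (44100/441 = 100 samples), which is one way to see why the ear can infer the missing 1.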
Hmm, this seems wrong but fixable. Namely, exp(A) is close to (I+A/n)^n for large n, so raising both sides of det(exp(A))=exp(tr(A)) to the power 1/n gives something like what we want: det(I+A/n) ≈ 1 + tr(A)/n. Still a bit too algebraic though, I wonder if we can do better.
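Both the identity and its nth-root version are easy to check numerically. A sketch, with a naive Taylor-series matrix exponential (fine for small matrices; for real work one would use a library routine):

```python
import numpy as np

def mat_exp(A, terms=40):
    """Matrix exponential via its Taylor series, E = sum A^k / k!."""
    E = np.eye(len(A))
    T = np.eye(len(A))
    for k in range(1, terms):
        T = T @ A / k
        E = E + T
    return E

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# The identity det(exp(A)) = exp(tr(A)):
lhs = np.linalg.det(mat_exp(A))
rhs = np.exp(np.trace(A))

# The nth-root version: det(I + A/n) ~ 1 + tr(A)/n for large n.
n = 1_000_000
approx = np.linalg.det(np.eye(4) + A / n)
```

Here `lhs` and `rhs` agree to machine precision, and `approx` matches 1 + tr(A)/n up to O(1/n^2) corrections.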
Interesting, can you give a simple geometric explanation?
Yup, the determinant is how much the volume stretches. And the trace measures how much vectors keep pointing in their original direction: the average dot product of v and Av over unit vectors v equals tr(A)/n. This explains why the trace of a 90-degree rotation in 2D is zero, why the trace of a projection onto a subspace equals the dimension of that subspace, and so on.
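The “average dot product” reading of the trace can be checked directly. A Monte Carlo sketch (function name and sample count are my own):

```python
import numpy as np

# 90-degree rotation in 2D, and projection onto a 2D subspace of R^3.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
P = np.diag([1.0, 1.0, 0.0])

def avg_v_dot_Av(A, samples=200_000, seed=0):
    """Monte Carlo estimate of the average of v.(Av) over random unit vectors."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal((samples, A.shape[0]))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    # v.(Av) for each sample, then the mean.
    return np.mean(np.einsum('ij,jk,ik->i', v, A, v))
```

For an n-by-n matrix the average equals tr(A)/n, so multiplying the estimate by n recovers the trace: roughly 0 for the rotation, roughly 2 for the projection.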
Very cool that you’re thinking about this. I’ve been in a bit of funk since the news about Cogent, Lumen and LINX. It’s good to hear that not everyone in the West subscribes to “bolt the door from outside”.
Right now there’s indeed an exodus of young qualified people from Russia. The easiest path goes to countries that are visa-free for Russians, like Armenia or Argentina.
Ukrainians have wanted to join the EU for years; it was one of the main points of the Euromaidan. Most in the EU were lukewarm about it, but now, because of the war, there are huge pro-Ukraine demonstrations in every European capital.
If everyone acts rationally, the result will be Ukraine growing closer to the EU, Russia becoming more isolated, and no WW3. But Russia isn’t acting rationally, I’m losing count of distinct stupid things it has done since Feb 21. Extrapolating that stupidity into the future makes me think that WW3 is quite possible.
Can you describe what changed / what made you start feeling that the problem is solvable / what your new attack is, in short?
I think the acoustic has a better sound, but the electric one has more groove.
“You’re scratching your own moral-seeming itches. You’re making yourself feel good. You’re paying down imagined debts that you think you owe, you’re being partial toward people around you. Ultimately, that is, your philanthropy is about you and how you feel and what you owe and what you symbolize. My philanthropy is about giving other people more of the lives they’d choose.”
“My giving is unintuitive, and it’s not always ‘feel-good,’ but it’s truly other-centered. Ultimately, I’ll take that trade.”
I think the Stirnerian counterargument would be that global utilitarianism wouldn’t spare me a red cent, because there are tons of people with higher priority than me, so basically you’re asking me to be altruist toward something that is overall egoist (or indistinguishable from egoist) toward me. Not saying I subscribe to this argument 100%, but what do you think of it?
I think this view is the opposite of true. My view is something more like “all men are created evil”. Animals are callous about how they kill or eat, and we start out as animals too. An animal doesn’t have to be hurt to hurt other animals. Neither does a human, there are tons of reports of rich kids who have everything and are callous anyway. It’s nature.
So where do we place the good? I think the good in us is the outer layer, the culture. Game-theoretic conventions like “don’t kill”, first coming from circumstantial necessity, and then we learn and internalize them because we have a capacity for learning and internalizing. If you wanna look for something innocent and nice, look in the cultures we acquire. Not in our inner nature or genes, hell no.
Can we have unbounded utilities, and lotteries with infinite support, but probabilities always go down so fast that the sum (absolutely) converges, no matter what evidence we’ve seen?
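One concrete shape this could take (a sketch, not a full answer): utilities that grow geometrically but a prior whose tails shrink faster, e.g.

```latex
u(n) = 2^n, \qquad p(n) = C \cdot 3^{-n},
\qquad \sum_{n} p(n)\,\lvert u(n)\rvert = C \sum_{n} (2/3)^n < \infty.
```

And conditioning can’t break this as long as the evidence isn’t too surprising: since $p(n \mid E) \le p(n)/P(E)$, the posterior expectation is bounded by the prior one divided by $P(E)$, so absolute convergence survives updating on any evidence with $P(E) > 0$.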
Stopped teaching. Now if someone says “my kid needs something to do, can you teach him programming” my answer is “no”. Those who want to program can come to me themselves and ask specific things.