Fair enough, thanks for explaining! Probably some of what I’m worried about can be mitigated by careful naming & descriptions. (e.g. I suspect you weren’t considering a literal “LLM slop” react, but if you were, I think something more gently and respectfully worded could be much less unpleasant to receive while conveying just as much useful information)
Are you sure it’s good to provide confrontational/insulting/dismissive reacts? I think they give us an easy way to snipe at someone we disagree with or dislike, without providing any support for our criticism and without putting ourselves on the line in any way. (Yes, reacts can be downvoted, but this isn’t the same as making a comment that can be voted on and replied to.)
In effect, a harsh react is an asymmetrical, no-effort tool for making another user look or feel bad, and I don’t see why it’s necessary. If we don’t want to engage, we can always just downvote; if we want to provide more information than a downvote can convey, we can put in the small amount of effort required to write a brief reply.
Upvoted, but I’m also curious about this:
If you tell them how to reason, they usually just throw these suggestions out and reason the way RL taught them to reason (and sometimes OpenAI also threatens to ban you over trying to do this).
Can you elaborate on the parenthetical part?
I think this depends on the assumptions that a) ordinary people have a considered belief that insect suffering doesn’t matter, and b) this belief depends on the belief that insects don’t suffer (much).
If most people just haven’t given any serious thought to insect suffering, and the main reason they tend to act like it doesn’t matter is because that’s the social default, then their numerical estimates (which are quite arbitrary, but plausibly based on more thought than they’ve ever previously given to the question) might be at least as good a guide to the ground truth as their prior actions are.
And if someone doesn’t care about insect suffering, not because they’re confident that insects don’t experience non-trivial suffering but because they simply don’t care about insects (perhaps because they don’t instinctively feel empathy for insects, they find insects annoying, they know insects spread disease, etc.), then the apparent conflict between their indifference and their estimates is extremely weak evidence against the accuracy of their estimates.
The first half of this seems true (the estimates are quite arbitrary), but I don’t get why you’re confident about the second half. What makes your estimate of the “appropriately sized numbers” less arbitrary and more plausible?
Forgive the nitpick, but I think the standard definition of “weakly solved” requires known-optimal strategies from the starting position, which don’t exist for chess. It’s still not known for sure that chess is a draw—it just looks very likely.
They’re free to quit in the sense that nobody will stop them. But they need money for food and shelter. And as far as moral compromises go, choosing to be a cog in an annoying, unfair, but not especially evil machine is a very mild one. You say you don’t expect the shouting to do any good, so what makes it appropriate? If we all go around yelling at everyone who represents something that upsets us, but who has a similar degree of culpability to the gate attendant, we’re going to cause a lot of unnecessary stress and unhappiness.
IMO it’s unclear what kind of person would be influenced by this. It requires the reader to a) be amenable to arguments based on quantitative probabilistic reasoning, but also b) overlook or be unbothered by the non sequitur at the beginning of the letter. (It’s obviously possible for the appropriate ratio of spending on causes A and B not to match the magnitude of the risks addressed by A and B.)
I also don’t understand where the numbers come from in this sentence:
In order to believe that AI risk is 8000 times less than military risk, you must believe that an AI catastrophe (killing 1 in 10 people) is less than 0.001% likely.
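For what it’s worth, here’s the only way I can make the arithmetic hang together (my own reconstruction, not anything the letter actually states): the two figures jointly pin the military risk at a specific value that the letter never argues for.

```python
# My back-of-the-envelope reading, not the letter's stated reasoning.
ratio = 8000                 # "AI risk is 8000 times less than military risk"
ai_risk_cap = 0.001 / 100    # "less than 0.001% likely", as a probability

# If both figures hold, the military-catastrophe risk must be:
implied_military_risk = ratio * ai_risk_cap
print(f"implied military-catastrophe risk: {implied_military_risk:.0%}")  # 8%
```

So unless the reader already accepts an ~8% chance of a military catastrophe on that scale, the numbers don’t obviously follow from anything.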
“If the accused is in power, increase the probability estimate” is not how good epistemics are achieved.
It is when our uncertainty is due to a lack of information, and those in power control the flow of information! If the accusations are false, the federal government has the power to convincingly prove them false; if the accusations are true, it has the power to suppress any definitive evidence. So the fact that we haven’t seen definitive evidence in favour of the allegations is only very weak evidence against their veracity, whereas the fact that we haven’t seen definitive evidence against the allegations is significant evidence in favour of their veracity.
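To put that asymmetry in rough Bayes-factor terms (toy numbers of my own, purely illustrative):

```python
# Toy numbers, mine, just to illustrate the asymmetry described above.

# E1 = "no definitive evidence FOR the allegations has surfaced".
# If the allegations are true, the government can suppress such evidence,
# so E1 stays likely either way -> only weak evidence against them.
p_e1_given_true, p_e1_given_false = 0.7, 0.95
print(f"E1 favours 'false' by {p_e1_given_false / p_e1_given_true:.1f}x")  # ~1.4x

# E2 = "no definitive evidence AGAINST the allegations has surfaced".
# If the allegations are false, the government could convincingly disprove
# them, so E2 would be surprising -> notable evidence for the allegations.
p_e2_given_true, p_e2_given_false = 0.95, 0.3
print(f"E2 favours 'true' by {p_e2_given_true / p_e2_given_false:.1f}x")  # ~3.2x
```

The exact numbers don’t matter; the point is that the two absences of evidence carry very different weights once you account for who controls the information.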
The Krome thing is all rumor
I don’t have evidence against
If the truth is hard to determine, I think that in itself is very worrying. When you have vulnerable people imprisoned and credible fears that they are being mistreated, any response from those in power other than transparency is a bad sign. Giving them the benefit of the doubt as long as they can prevent definitive evidence from coming out is bad epistemics and IMO even worse politics (not in a party-political sense; just in a ‘how to disincentivise human rights abuses’ sense).
Can you elaborate a bit? Personally, I have intuitions on the hard problem and I think conscious experience is the only type of thing that matters intrinsically. But I don’t think that’s part of the definition of ‘conscious experience’. That phrase would still refer to the same concept as it does now if I thought that, say, beauty was intrinsically valuable—or even if I thought conscious experience was the only thing that didn’t matter.
So it doesn’t make much sense to value emotions
I think this is a non sequitur. Everything you value can be described as just <dismissive reductionist description>, so the fact that emotions can too isn’t a good argument against valuing them. And in this case, the dismissive reductionist description misses a crucial property: emotions are accompanied by (or identical with, depending on definitions) valenced qualia.
In this case, everybody seems pretty sure that the price is where it is because of the actions of a single person who’s dumped in a very large amount of money relative to the float.
I think it’s clear that he’s the reason the price blew out so dramatically. But it’s not clear why the market didn’t ‘correct’ all the way back (or at least much closer) to 50/50. Thirty million dollars is a lot of money, but there are plenty of smart rich people who don’t mind taking risks. So, once the identity and (apparent) motives of the Trump whale were revealed, why didn’t a handful of them mop up the free EV?
That’s not a rhetorical question; I’m interested in your answer and might be convinced by it. But right now I don’t see sufficient reason to be confident that the market is still badly distorted, rather than having legitimately settled on ~60/40.
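Just to spell out the ‘free EV’ I have in mind (a toy calculation, under the assumption that the fair price really is 50/50, which is exactly the contested premise):

```python
# Illustrative only: assumes the fair probability is 50% while the market
# prices one side at 60c. Buying the other side at 40c per $1 share:
p_fair = 0.50                        # assumed true probability (contested!)
cost = 0.40                          # price of the supposedly underpriced side
ev_per_share = p_fair * 1.00 - cost  # each share pays $1 if it resolves your way
print(f"expected profit: ${ev_per_share:.2f} per share "
      f"({ev_per_share / cost:.0%} return)")  # $0.10, i.e. 25%
```

A 25% expected return is the kind of thing I’d expect sophisticated money to chase, which is why its apparent absence makes me doubt the distortion story.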
Can’t this only be judged in retrospect, and over a decent sample size? If all the markets did was reflect the public expert consensus, they wouldn’t be very useful; the possibility that they’re doing significantly better is still open.
(I’m assuming that by “every other prediction source” you mean everything other than prediction/betting markets, because it sounds like Polymarket is no longer out of line with the other markets. Betfair is the one I keep an eye on, and that’s at 60/40 too.)
Code by Charles Petzold. It gives a ground-up understanding of how computers actually work, starting slowly and without assuming any knowledge on the reader’s part. It’s basically a less textbooky alternative to The Elements of Computing Systems by Nisan and Schocken, which is great but probably a bit much for a young kid.
Meanwhile hedonic utilitarianism fully bites the bullet, and gets rid of every aspect of life that we value except for sensory pleasure.
I think the word ‘sensory’ should be removed; hedonic utilitarianism values all pleasures, and not all pleasures are sensory.
I’m not raising this out of pure pedantry, but because I think this phrasing (unintentionally) plays into a common misconception about ethical hedonism.
Can you elaborate on why that might be the case?
It’s based on a scenario described by Derek Parfit in Reasons and Persons.
I don’t have the book handy so I’m relying on a random pdf here, but I think this is an accurate quote from the original:
Suppose that I am driving at midnight through some desert. My car breaks down. You are a stranger, and the only other driver near. I manage to stop you, and I offer you a great reward if you rescue me. I cannot reward you now, but I promise to do so when we reach my home. Suppose next that I am transparent, unable to deceive others. I cannot lie convincingly. Either a blush, or my tone of voice, always gives me away. Suppose, finally, that I know myself to be never self-denying. If you drive me to my home, it would be worse for me if I gave you the promised reward. Since I know that I never do what will be worse for me, I know that I shall break my promise. Given my inability to lie convincingly, you know this too. You do not believe my promise, and therefore leave me stranded in the desert. This happens to me because I am never self-denying. It would have been better for me if I had been trustworthy, disposed to keep my promises even when doing so would be worse for me. You would then have rescued me.
(It may be objected that, even if I am never self-denying, I could decide to keep my promise, since making this decision would be better for me. If I decided to keep my promise, you would trust me, and would rescue me. This objection can be answered. I know that, after you have driven me home, it would be worse for me if I gave you the promised reward. If I know that I am never self-denying, I know that I shall not keep my promise. And, if I know this, I cannot decide to keep my promise. I cannot decide to do what I know that I shall not do. If I can decide to keep my promise, this must be because I believe that I shall not be never self-denying. We can add the assumption that I would not believe this unless it was true. It would then be true that it would be worse for me if I was, and would remain, never self-denying. It would be better for me if I was trustworthy.)
Got it, thanks! For what it’s worth, doing it your way would probably have improved my experience, but impatience always won. (I didn’t mind the coldness, but it was a bit annoying having to effortfully hack out chunks of hard ice cream rather than smoothly scooping it, and I imagine the texture would have been nicer after a little bit of thawing. On the other hand, softer ice cream is probably easier to unwittingly overeat, if only because you can serve up larger amounts more quickly.)
I think two-axis voting is a huge improvement over one-axis voting, but in this case it’s hard to know whether people are mostly disagreeing with you on the necessary prep time, or the conclusions you drew from it.
Isn’t this circular? What counts as “you” is precisely what’s at issue here. (If I’m missing the point, maybe you can make your position more concrete, e.g. by explaining how it resolves some controversial cases.)