# bideup

Karma: 348
• seems to me to be a crux between “man, we’re probably all going to die” and “we’re really really fucked”

Sorry, what’s the difference between these two positions? Is the second one meant to be a more extreme version of the first?

• What’s the difference between “Alice is falling victim to confusions/​reasoning mistakes about X” and “Alice disagrees with me about X”?

I feel like using the former puts undue social pressure on observers to conclude that you’re right, and makes it less likely they correctly adjudicate between the perspectives.

(Perhaps you can empathise with me here, since arguably certain people taking this sort of tone is one of the reasons AI x-risk arguments have not always been vetted as carefully as they should!)

• I learned maths mostly by teachers at school writing on a whiteboard, university lecturers writing on a blackboard or projector, and to a lesser extent friends writing on pieces of paper.

There was a tiny supplement of textbook-reading at school and a large supplement of printed-notes-reading at university.

I would guess only a tiny fraction learn exclusively via typed materials. If you have any kind of teacher, how could you? Nobody shows you how to rearrange an equation by live-typing LaTeX.

• In Texas Hold ’Em, the most popular form of poker, there is no drawing or discarding, just betting and folding.

This seems like strong evidence that those parts are where the skill lies — somebody came up with a version that removed the other parts, and everyone switched to it.

Not sure how that affects the metaphor. For me I think it weakened the punch, since I had to stop and remember that there exist forms of poker with drawing and discarding.

• Right, I understand it now, thanks. I missed the labels on the x axis.

• I found your bar chart more confusing than illuminating. Does it make sense to mark the bottom 20% of people, and those people’s 43% probability of staying in the bottom 20%, as two different fractions of the same bar? The 43% is 43% of the 20%, not of the original 100%.
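To illustrate the distinction numerically (a minimal sketch, with the 20% and 43% figures taken from the comment above):

```python
bottom_quintile = 0.20   # fraction of the population in the bottom 20%
stay_prob = 0.43         # probability a bottom-quintile person stays there

# 43% *of the quintile*, expressed as a fraction of the whole population:
share_of_whole = bottom_quintile * stay_prob
print(round(share_of_whole, 3))  # 0.086, i.e. 8.6% of everyone
```

So drawing the 43% as a chunk of the same full-height bar overstates it by a factor of five.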

• If many more people are extremely happy all the time than extremely depressed all the time, the bunch of people you describe would be managing their beliefs rationally. And indeed I think that’s probably the case.

• When you say you use a kanban-style system, does that just refer to the fact that there are columns that you drag items between, or does it specifically mean that you also make use of an ‘in progress’ column?

If so, do you have one for each ‘todo’ column, or what?

And do you have a column for the ‘capture’ aspect of GTD, or do you do something else for that?

• Are you interested in these debates in order to help form your own views, or convince others?

I feel like debates are inferior to reading people’s writings for the former purpose, and for the latter they deal collateral damage by making the public conversation more adversarial.

• I keep reading the title as Attention: SAEs Scale to GPT-2 Small.

• I think what I was thinking of is that words can have arbitrary consequences and be arbitrarily high cost.

In the apologising case, making the right social API call might be an action of genuine significance. E.g. it might mean taking the hit on lowering onlookers’ opinion of my judgement, where if I’d argued instead that the person I wronged was talking nonsense I might have got away with preserving it.

John’s post is about how you can gain respect for apologising, but it often has costs too, and I think the respect is partly for being willing to pay them.

• Words are a type of action, and I guess apologising and then immediately moving on to defending yourself is not the sort of action which signals sincerity.

• Explaining my downvote:

This comment contains ~5 negative statements about the post and the poster without explaining what it is that the commenter disagrees with.

As such it seems to disparage without moving the conversation forward, and is not the sort of comment I’d like to see on LessWrong.

• The second footnote seems to be accidentally duplicated as the intro. Kinda works though.

• “Not invoking the right social API call” feels like a clarifying way to think about a specific conversational pattern that I’ve noticed that often leads to a person (e.g. me) feeling like they’re virtuously giving up ground, but not getting any credit for it.

It goes something like:

Alice: You were wrong to do X and Y.

Bob: I admit that I was wrong to do X and I’m sorry about it, but I think Y is unfair.

*discussion continues about Y and Alice seems not to register Bob’s apology*

It seems like maybe bundling in your apology for X with a protest against Y just doesn’t invoke the right API call. I’m not entirely sure what the simplest fix is, but it might just be swapping the order of the protest and the apology.

• Is it true that scaling laws are independent of architecture? I don’t know much about scaling laws but that seems surely wrong to me.

E.g. how does RNN scaling compare to transformer scaling?

• Your example of a strong syllogism (‘if A, then B. A is true, therefore B is true’) isn’t one.

Instead it’s of the form ‘If A, then B. A is false, therefore B is false’, which is not logically valid (and is also not one of Jaynes’s weak syllogisms).

If Fisher lived to 100 he would have become a Bayesian

Fisher died at the age of 72

———————————————————————————————————

Fisher died a Frequentist

You could swap the conclusion with the second premise and weaken the new conclusion to ‘Fisher died before 100’, or change the premise to ‘Unless Fisher lived to 100 he would not have become a Bayesian’.
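The invalidity of the ‘A is false, therefore B is false’ form (denying the antecedent) can be checked mechanically with a truth table. A minimal sketch in Python (the helper names are my own, not from any logic library):

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff the conclusion holds in every
    truth assignment where all the premises hold."""
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

implies = lambda a, b: (not a) or b  # "if A, then B"

# Modus ponens: if A then B; A; therefore B — valid.
print(valid([implies, lambda a, b: a], lambda a, b: b))          # True

# Denying the antecedent: if A then B; not A; therefore not B —
# invalid (counterexample: A false, B true).
print(valid([implies, lambda a, b: not a], lambda a, b: not b))  # False
```

The Fisher example is exactly the second form, which is why the counterfactual premise licenses nothing about how he died.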