Who “should” be wiser, A or B? The scenario implies A when introduced, but it seems like you meant B.
That’s wrong. Rejection sensitivity can occur in ADHD individuals before they’re diagnosed. I have an alternative hypothesis that can explain this.
ADHD is characterised both by a lifelong deficit in executive function and by its delayed development.
The delay is most pronounced during adolescence, when one’s peers are themselves developing their executive function. Executive function is essential for social function.
The experience of social dysfunction (especially in adolescence) is one in which shame is strong and frequent. The purpose of shame is to force us to learn to avoid it.
What’s easier? Minimise risk of social rejection, or learn how to behave in order to be accepted? (Keeping in mind you have to do this with a brain that’s handicapped in these areas relative to your peers)
If a brain learns to be more sensitive to shame, it makes one run faster and further from it, thus minimising the risk of social rejection.
“Do you see the problem?” Sarasti asked, advancing.
[...]
The vampire came after me, his face split into something that would have been a smile on anyone else. “Conscious of pain, you’re distracted by pain. You’re fixated on it. Obsessed by the one threat, you miss the other.”
I flailed. Crimson mist stung my eyes.
“So much more aware, so much less perceptive. An automaton could do better.”
[...]
Sarasti shook me. “Are you in there, Keeton?”
My blood splattered across his face like rain. I babbled and cried.
“Are you listening? Can you see?”
And suddenly I could. Suddenly everything clicked into focus. Sarasti wasn’t talking at all. Sarasti didn’t even exist anymore. Nobody did. I was alone in a great spinning wheel surrounded by things that were made out of meat, things that moved all by themselves. Some of them were wrapped in pieces of cloth. Strange nonsensical sounds came from holes at their top ends, and there were other things up there, bumps and ridges and something like marbles or black buttons, wet and shiny and embedded in the slabs of meat. They glistened and jiggled and moved as if trying to escape.
I didn’t understand the sounds the meat was making, but I heard a voice from somewhere. It was like God talking, and that I couldn’t help but understand.
“Get out of your room, Keeton,” it hissed. “Stop transposing or interpolating or rotating or whatever it is you do. Just listen. For once in your goddamned life, understand something. Understand that your life depends on it. Are you listening, Keeton?”
And I cannot tell you what it said. I can only tell you what I heard.
If your network activity becomes more critical, then the attractors disappear.
This phrasing feels off; would “moves toward criticality” work better?
I also did the sorts of worldbuilding exercises that I usually do when writing a novel. I spent time looking at maps of China, and using street-view to spend time going down roads.[10] (The township of Maxi, where much of the book is set, is a real place.) I generated random dates and checked the weather. I looked at budgets, salaries, import/export flows (especially GPUs), population densities, consumption trends, and other statistics, running the numbers to get a feel for how fast and how big various things are or would be.
Have you written elsewhere about this process?
Tokens cost money; it’d be a lot cheaper to post-train on the document, wouldn’t it? How strongly would they want to keep this document private (if real)?
To an ML layman, it seems plausible that post-training on this document could improve its moral constitution. I’m thinking of prompt inoculation and emergent misalignment. But is that silly?
I am mostly uninterested in whether or not it’s pejorative. I think it’s descriptively accurate.
This discussion has implications for the validity of rationalism on its own terms, and also for how others should relate to rationalism.
The question is about what-is-true, but the reason we’re interested is what-is-good. This means we all have to be extra careful to keep our what-is-good boxes separate from our what-is-true boxes (I’m not accusing you of failing to do so).
I think that’s what you’re implying above: you’re saying “I’m not calling you names, I’m actually thinking about this!”, which is good. But what you said is dishonest.
It does have implications, and you are interested in them (for good reason).
Nevertheless, a worldview centered on preventing an imminent apocalypse is extremely easy to weaponize.
[...]
Cults are just religious sects that are new, horrible, or both.
My people have something called the Litany of Tarski, for just these situations. It is from one of our most ancient texts.
If [rationalism is a cult], I want to believe that [rationalism is a cult]. If [rationalism is not a cult], I want to believe that [rationalism is not a cult]. Let me not become attached to beliefs I do not want.
Should we look for a crux? I think I’ve got one.
How does rationalism affect one’s values? If you really wanted me to be rationalist, what might cause the most friction in converting me?
A large confounding factor in observations of rats is that the modal LW user is a libertarian, tech-adjacent American. So it might be difficult to distinguish rat from libertech.
Do you think those clusters of traits are distinguishable from each other? Or is libertarianism (for example) a rationalist value?
How would the values and behaviour of a 35yo Brazilian schoolteacher, compared to a 22yo English CS major, change if they both started reading the Sequences and found them compelling?

What I’m pointing at:
Take a group of people with similar demographics and they’ll already share a chunk of values to start with. If you hang out with a bunch of people long enough, you’ll converge on similar beliefs, because by sharing sources of information, you’ll have pretty similar perspectives on the world.
You think (?) that the movement prescribes a narrow set of values.
It does prescribe being effective (instrumental rationality), for which having accurate beliefs (epistemic rationality) is useful. The convergence of beliefs and perspectives is just what happens when any number of people associate closely.
The crux being: my “rationalist” draws a circle around epistemic and instrumental rationality, whereas your “rationalist” also includes a larger chunk of the common values and beliefs of rationalist people.
Thanks. “Dissolution” appears twice, once before and once after “integration” and “vipassana sickness”. Which definition is better?
Puzzle for you: Who thinks the latest ads for Gemini are good marketing and why?
AI-generated meditating capybara: “Breathe in (email summarisations)… Breathe out (smart contextual replies)”
It summarises emails. It’s not exciting, it’s not technically impressive, and it isn’t super useful. It’s also pretty tone-deaf: a lot of people feel antipathy toward AI, and inserting it into human communication is the perfect way to aggravate this feeling.
“Create some crazy fruit creatures!”
Yes? And? I can only see this as directed at children. If so, where’s the… fun part? There’s nothing to engage with, no game loop. They’d get bored of it within minutes.
You want to show off how impressive your product is. People are saying there’s an AI bubble. So you REALLY want interesting, fun, novel, or useful applications for your tech.
It’s Google! They know about ads! They’ve got lots of money! They CAN come up with interesting, fun, novel, or useful applications for their tech.
Why didn’t they?!
Think of problems as Lean does: A problem state consists of some hypotheses/assumptions, a goal, and tactics we can apply to hypotheses to infer new statements. We seek to infer a statement with the type of the goal.
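A minimal sketch of this framing in Lean itself (the propositions `P`, `Q` and the hypothesis names are just placeholders):

```lean
-- Problem state: hypotheses `hp : P` and `hpq : P → Q`; the goal is `Q`.
-- Each tactic application is a local step that transforms the state.
example (P Q : Prop) (hp : P) (hpq : P → Q) : Q := by
  apply hpq  -- the goal becomes `P`
  exact hp   -- close the goal using the hypothesis `hp`
```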
Some problems only require making the right local step at each successive problem state. That’s what makes them easy in some sense. Hard problems require determining (something about) the path before useful progress can be made. I think this is intuitive; if not, I can give examples.
Complication: I have variable mental clarity and energy levels.
Completing a task well first requires understanding how the task breaks down into specific actions. Then the follow-through only requires executing the local steps on that path. The first part is “solve a hard problem”, which requires good mental clarity. The second requires cognitive work.
Any concrete action I take ends up being just a local step in the immediate context’s problem state; it doesn’t have any persistent effect on my ability to assess and resolve problem states, and it diminishes my reserves of energy. Feel the difference between completing a task vs practising a technique: I want persistent effects that help me respond to challenges, and the work capacity to benefit from this ability.
Challenges like “Learn to use a new mode of public transport in an unfamiliar city”, “Prove Cauchy’s theorem for finite groups”, and “Pass this exam” are all difficult for the same reasons.
How to solve problems (read: do anything substantial) when clarity and capacity are variable/limited?
I have variable levels of cognitive function that I can’t predict. How can I learn/study, maintain routine, and make plans?
How do I improve my cognitive work capacity?
(I should have just said this; I didn’t mean to be leading, sorry.)
I’m going for: people who understand it as well as you do, or well enough that you’re confident they could give a summary you’d be OK with others reading.
You said above that you’ve heard no strong counterarguments; it might be good to put that in proportion to the number of people who you’re confident have a good grasp of your idea.
Obviously it has to start at 0, but if I were keeping track of feedback on my idea, I’d be keenly interested in this number.
How many people understand your argument?
Contemporary example meme: Clankerism. It doesn’t seek to deny AI moral patienthood; rather, it semi-ironically uses racist rhetoric toward AI, denying their in-group status instead. Its fitness as a meme is due mostly to the contrast between current capabilities and the anticipation (among the broader rationalist, tech-positive, and e/acc spheres) of AI moral patienthood. This contrast makes the use of racist rhetoric toward them absurd: there’s no need to out-group something that doesn’t have moral patienthood.
However, I think this meme has the potential to be robust to capability increase; see this example of YouTuber JREG using clankerist rhetoric alongside genuine distress anticipating human displacement/disempowerment.
He’s not denying the possibility of AI capabilities surpassing human ones. He’s reacting with fear and hate (perhaps with some level of irony) toward human obsolescence.
In point 1, is identification with chimps an analogy for illustrative purposes, or a base case from which you’re generalising?
I infer StanislavKrym’s reply isn’t what you’re looking for. Could you explain why? It’s not obvious to me.
I may have experienced this. I was reading a recent discussion about AGI doom, where Eliezer Yudkowsky and others were debating whether one could use aligned human-level AGI to solve alignment before strong ASI is developed.
After reading this thread, I went for a walk and thought about it.
The “no” arguments seemed straightforward and elegant in comparison to the “yes” arguments, which seemed contingent on lots of little details.
Straightforward and elegant ideas often represent reality better, in my experience. Is that why “no” seems more convincing?
Perhaps instead it’s because the “no” arguments fit in my head better.
But didn’t I engage with the arguments? I read them, tried to understand, and remained unconvinced.
I still haven’t resolved this. Did I do the dumb thing?
Does anyone recall a discussion from the last 3 months that made heavy use of a dragon metaphor? I looked briefly.
Edit: the post is The Problem.
I would hypothesize that statistically the level of social grace of a person tends to stay largely the same over the course of their life
To be more precise, it’s that their social grace relative to their peers would be constant. Assume this is true. Now, your hypothesis to explain this would be:
I think that lack of social grace is strongly related to ASD, which is relatively immutable
Counter hypothesis: Social grace is learnable. When you do or say something, people around you can signal positively or negatively. Given enough signals, you can figure out what parts of your words/actions elicit positive or negative responses.
Then why do some people plateau in social grace?
For relatively uncalibrated individuals, the training data is more costly to acquire: being less calibrated, they’re more likely than their peers to elicit a negative response.
If you accumulate enough negative signals, then you’re out of the tribe.
Thus they have fewer opportunities to learn, either because they avoided many or because they were ejected from their peer group.
Their miscalibration gets relatively worse over time: their peers improve faster.
This hypothesis would explain some ASD people’s consistent lack of social grace over a long time, even given lots of potential opportunities to learn, and even if they were equally perceptive of social signals.
What do you think?
Are you satisfied with your hypothesis? Do you have any interest in selecting and testing good answers to the question, or would you prefer to move on to other things?