“Nono, you have been misled. I *do* have a hero license.”
Emrik
The “you-can-just” alarm
Two Prosocial Rejection Norms
The underappreciated value of original thinking below the frontier
I feel like the terms for public/private beliefs are gonna clash with the fairly established terminology of independent impressions and all-things-considered beliefs (I’ve seen these referred to as “public” and “private” beliefs before, but I can’t remember the source). The idea is that sometimes you want to report your independent impressions rather than your Aumann-updated model of the world, because if everyone does the latter, it can lead to double-counting of evidence and information cascades.
Information cascades develop consistently in a laboratory situation in which other incentives to go along with the crowd are minimized. Some decision sequences result in reverse cascades, where initial misrepresentative signals start a chain of incorrect [but individually rational] decisions that is not broken by more representative signals received later. - (Anderson & Holt, 1998)
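(To make the cascade mechanism concrete, here’s a minimal sketch of the urn setup from Anderson & Holt, with invented parameters: each agent privately draws a signal that’s correct with probability q, sees all earlier public guesses, and guesses by counting the still-informative ones.)

```python
import random

def run_sequence(n_agents=20, q=2/3, true_state=1, seed=0):
    rng = random.Random(seed)
    public = 0    # net count of signals inferable from public guesses
    guesses = []
    for _ in range(n_agents):
        # each agent privately draws a signal matching the truth w.p. q
        signal = true_state if rng.random() < q else 1 - true_state
        total = public + (1 if signal == 1 else -1)
        # guess whichever state the combined evidence favours;
        # follow your own signal on a tie
        guess = 1 if total > 0 else (0 if total < 0 else signal)
        guesses.append(guess)
        # while |public| <= 1, a guess still reveals the guesser's signal;
        # once |public| >= 2, everyone guesses the same way regardless of
        # their signal, so guesses stop carrying information: a cascade
        if abs(public) <= 1:
            public += 1 if guess == 1 else -1
    return guesses

print(run_sequence())  # typically locks in early; with unlucky seeds the
                       # first few signals mislead and a reverse cascade forms
```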
I don’t want people to conflate the above socioepistemological ideas with the importantly different concepts in this post, so I prefer flagging my beliefs as “legible” or “illegible” to give a sense of how productive/educational I expect talking to me about them will be.
Bonus point: The failure mode of not admitting your own illegible/private beliefs can lead to myopic empiricism, whereby you stunt your epistemic growth by refusing to update on a large class of evidence. Severe cases often exhibit an unnatural tendency to consume academic papers over blog posts.
Yes! The way I’d like it is if LW had a “research group” feature that anyone could start, and you could post privately to your research group.
The Paradox of Expert Opinion
(Update: I’m less optimistic about this than I was when I wrote this comment, but I still think it seems promising.)
Multiplier effects: Delaying timelines by 1 year gives the entire alignment community an extra year to solve the problem.
This is the most and fastest I’ve updated on a single sentence as far back as I can remember. I am deeply gratefwl for learning this, and it’s definitely worth Taking Seriously. Hoping to look into it in January unless stuff gets in the way.
Have other people written about this anywhere?
I have one objection to claim 3a, however: Buying-time interventions are plausibly more heavy-tailed than alignment research in some cases because 1) the bottleneck for buying time is social influence and 2) social influence follows a power law due to preferential attachment. Luckily, the traits that make for top alignment researchers have limited (but not insignificant) overlap with the traits that make for top social influencers. So I think top alignment researchers should still not switch in most cases on the margin.
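A quick toy demonstration of premise 2, in case it’s unfamiliar (all numbers invented): under preferential attachment, where each newcomer links to an existing node with probability proportional to its current degree, influence reliably concentrates in a few hubs.

```python
import random
from collections import Counter

def preferential_attachment(n_nodes=10_000, seed=0):
    rng = random.Random(seed)
    # the endpoint list doubles as a degree-weighted sampler: a node with
    # degree k appears k times, so uniform choice = proportional-to-degree
    endpoints = [0, 1]
    for newcomer in range(2, n_nodes):
        target = rng.choice(endpoints)
        endpoints += [newcomer, target]
    return Counter(endpoints)  # node -> degree

degrees = preferential_attachment()
total = sum(degrees.values())
top_1pct = degrees.most_common(len(degrees) // 100)
print("top-5 hubs:", degrees.most_common(5))
print("share of links held by top 1%:", sum(d for _, d in top_1pct) / total)
```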
Good points, but I feel like you’re a bit biased against foxes. First of all, they’re cute (see diagram). You didn’t even mention that they’re cute, yet you claim to present a fair and balanced case? Hedgehog hogwash, I say.
Anyway, I think the skills required for forecasting vs model-building are quite different. I’m not a forecaster, but if I were, I would try to read much more and more widely so I’m not blindsided by stuff I didn’t even know that I didn’t know. Forecasting is caring more about the numbers; model-building is caring more about how the vertices link up, whatever their weights. Model-building is for generating new hypotheses that didn’t exist before; forecasting is for discriminating between the ones that already do.
I try to build conceptual models, and afaict I get much more than 80% of the benefit from 20% of the content that’s already in my brain. There are some very general patterns I’ve thought about so deeply that they provide usefwl perspectives on new stuff I learn weekly. I’d rather learn 5 things deeply, and remember sub-patterns so well that they fire whenever I see something slightly similar, than 50 things so shallowly that the only time I think about them is when I see the flashcards. Knowledge not pondered upon in the shower is no knowledge at all.
[Question] What’s the actual evidence that AI marketing tools are changing preferences in a way that makes them easier to predict?
I’m confused. (As in, actually confused. The following should hopefwly point at what pieces I’m missing in order to understand what you mean by a “problem” for the notion.)
Vingean agency “disappears when we look at it too closely”
I don’t really get why this would be a problem. I mean, “agency” is an abstraction, and every abstraction becomes predictably useless once you can compute the lower layer perfectly, at least if you assume compute is cheap. Balloons!
Imagine you’ve never seen a helium balloon before, and you see it slowly soaring to the sky. You could have predicted this by using a few abstractions like the density of gases and Archimedes’ principle. Alternatively, if you had the resources, you could make the identical prediction (with inconsequentially higher precision) by extrapolating from the velocities and weights of all the individual molecules, and computing that the sum of forces acting on the bottom of the balloon exceeds the sum acting on the top. I don’t see how the latter being theoretically possible implies a “problem” for abstractions like “density” and “Archimedes’ principle”.
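For concreteness, the abstraction-level prediction is a single standard inequality (just textbook buoyancy, nothing specific to the argument):

$$F_{\text{net}} = (\rho_{\text{air}} - \rho_{\text{He}})\,Vg > 0$$

where $V$ is the balloon’s volume. The molecule-level computation has to recover this same sign from on the order of $10^{23}$ individual trajectories.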
To be honest, the fact that Eliezer is being his blunt unfiltered self is why I’d like to go to him first if he offered to evaluate my impact plan re AI. Because he’s so obviously not optimising for professionalism, impressiveness, status, etc., he’s deconfounding his signal, and I’m much better able to evaluate what he’s optimising for.[1] Hence I’m much more confident that he’s actually just optimising for roughly the thing I’m also optimising for. I don’t trust anyone who isn’t optimising purely to be able to look at my plan and think “oh ok, despite being a nobody this guy has some good ideas” if that were true.
And then there’s the Graham’s Design Paradox thing. I think I’m unusually good at optimising purely, and I don’t think people who aren’t around my level or above would be able to recognise that. Obviously, he’s not the only one, but I’ve read his output the most, so I’m more confident that he’s at least one of them.
[1] Yes, perhaps a consequentialist would be instrumentally motivated to try to optimise more for these things, but the fact that Eliezer doesn’t do that (as much) just makes it easier to understand and evaluate him.
I’m curious exactly what you meant by “first order”.
Just that the trade-off is only present if you think of “individual rationality” as “let’s forget that I’m part of a community for a moment”. All things considered, there’s just rationality, and you should do what’s optimal.
First-order: Everyone thinks that maximizing insight production means doing IDA* over the idea tree. Second-order: Everyone notices that everyone will think that, so it’s no longer optimal for maximizing the insights produced overall. Everyone wants to coordinate with everyone else in order to parallelize their search (assuming they care about the total sum of insights produced). You can still do something like IDA* over your sub-branches.
This may have answered some of your other questions. Assuming you care about the alignment problem being solved, maximizing your expected counterfactual thinking-contribution means you should coordinate with your research community.
And, as you note, maximizing personal credit is unaligned as a separate matter. But if we’re all motivated by credit, our coordination can break down as people defect to grab credit.
How much should you focus on reading what other people do, vs doing your own things?
This is not yet at a practical level, but: let’s say we want to approach something like a community-wide optimal trade-off between exploring and exploiting, and we can’t trivially check what everyone else is up to. If we think the optimum is some toy split like “75% of researchers should Exploit, and the rest should Explore,” and I predict that 50% of researchers will follow the rule I follow while all the uncoordinated researchers will Exploit, then it is rational for me to randomize my decision with a coinflip.
It gets newcomblike when I can’t check, but I can still follow a mix that’s optimal given an expected number of cooperating researchers and what I predict they will predict in turn. If predictions are similar, the optimum given those predictions is a Schelling point. Of course, in the real world, if you actually had important practical strategies for optimizing community-level research, you would just write them up and get everyone to coordinate that way.
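Here’s the arithmetic as a sketch (the function and all numbers are mine, purely illustrative):

```python
def explore_probability(optimal_explore, coordinated, uncoordinated_explore=0.0):
    """With what probability should each rule-follower Explore so the
    community hits the optimal Explore fraction in expectation?"""
    # optimal = coordinated * p + (1 - coordinated) * uncoordinated_explore
    p = (optimal_explore - (1 - coordinated) * uncoordinated_explore) / coordinated
    return min(1.0, max(0.0, p))  # clip: beyond this, randomizing can't help

# 25% of everyone should Explore, half the community follows the rule,
# and the uncoordinated half all Exploit -> flip a fair coin
print(explore_probability(0.25, 0.5))  # 0.5
```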
I worry for people who are only reading other people’s work, like they have to “catch up” to everyone else before they have any original thoughts of their own.
You touch on many things I care about. Part (not the main part) of why I want people to prioritize searching neglected nodes more is because Einstellung is real. Once you’ve got a tool in your brain, you’re not going to know how to not use it, and it’ll be harder to think of alternatives. You want to increase your chance of attaining neglected tools and perspectives to attack long-standing open problems with. After all, if the usual tools were sufficient, why are they long-standing open problems? If you diverge from the most common learning paths early, you’re more likely to end up with a productively different perspective.
It’s too easy to misunderstand the original purpose of the question, and do work that technically satisfies it but really doesn’t do what was wanted in a broader context.
I’ve taken to calling this “bandwidth”, cf. Owen Cotton-Barratt.
Re the “Depth-first vs Breadth-first” distinction for idea development: IDA* is ok as far as a loose analogy to personally searching the idea tree goes, but I think this is another instance where there’s a (first-order) trade-off between individual epistemic rationality and social epistemology.
What matters is that someone discovers good ideas on AI alignment, not whether any given person does. As such, we can coordinate with other researchers in order to search different branches of the idea tree, and this is more like multithreaded/parallel/distributed tree search.
We want to search branches that are neglected and within our comparative advantage, and we shouldn’t be trying to maximise the chance that we personally discover the best idea. Instead, we should collectively act according to the rule that maximises the chance that someone in the community discovers the best idea. Individually, we are parallel threads of the same search algorithm.
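If it helps, here’s the analogy made runnable (everything is illustrative: `children` and `score` are hypothetical stand-ins for idea-generation and idea-evaluation):

```python
from heapq import heappush, heappop
from itertools import count

def search_subtree(root, children, score, budget=100):
    """Best-first search within one branch; returns the best node found."""
    tie = count()  # tie-breaker so the heap never compares nodes directly
    frontier = [(-score(root), next(tie), root)]
    best = root
    while frontier and budget > 0:
        _, _, node = heappop(frontier)
        budget -= 1
        if score(node) > score(best):
            best = node
        for child in children(node):
            heappush(frontier, (-score(child), next(tie), child))
    return best

def community_search(branches, children, score, n_researchers):
    # split the top-level branches among researchers: parallel threads
    # of one search algorithm rather than n copies of the same search
    per_thread = [branches[i::n_researchers] for i in range(n_researchers)]
    results = [search_subtree(b, children, score)
               for mine in per_thread for b in mine]
    return max(results, key=score)  # only the community-level best matters
```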
This is one of the most important reasons why hubris is so undervalued. People mistakenly think the goal is to generate precise probability estimates for frequently-discussed hypotheses (a goal in which deference can make sense). In a common-payoff-game research community, what matters is making new leaps in model space, not converging on probabilities. We (the research community) are bottlenecked by insight-production, not marginally better forecasts or decisions. Feign hubris if you need to, but strive to install it as a defense against model-dissolving deference.
Coming back to this a few showers later.
A “cheat” is a solution to a problem that is invariant to a wide range of specifics about how the sub-problems (e.g. “hard parts”) could be solved individually. Compared to an “honest solution”, a cheat can solve a problem with less information about the problem itself.
A b-cheat (blind) is a solution that can’t react to its environment and thus doesn’t change or adapt throughout solving each of the individual sub-problems (e.g. plot armour). An a-cheat (adaptive/perceptive) can react to information it perceives about each sub-problem, and respond accordingly.
ML is an a-cheat because even if we don’t understand the particulars of the information-processing task, we can just bonk it with an ML algorithm and it spits out a solution for us.
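For instance (a throwaway scikit-learn illustration, nothing to do with alignment specifically):

```python
# Nothing here encodes any understanding of what makes an "8" an 8; we
# bonk the task with a generic learner and read off the solution.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```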
In order to have a hope of finding an adequate cheat code, you need to have a good grasp of at least where the hard parts are even if you’re unsure of how they can be tackled individually. And constraining your expectation over what the possible sub-problems or sub-solutions should look like will expand the range of cheats you can apply, because now they need to be invariant to a smaller space of possible scenarios.
If effort spent on constraining expectation expands the search space, then it makes sense to at least confirm that there are no fully invariant solutions at the shallow layer before you iteratively deepen and search a larger range.
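In search-algorithm terms, this is just iterative deepening (a minimal sketch; `children` and `is_solution` are hypothetical stand-ins for “ways of making a cheat less invariant” and “found an adequate cheat”):

```python
def depth_limited(node, children, is_solution, limit):
    """Search for a solution within `limit` steps of `node`."""
    if is_solution(node):
        return node
    if limit == 0:
        return None
    for child in children(node):
        found = depth_limited(child, children, is_solution, limit - 1)
        if found is not None:
            return found
    return None

def iterative_deepening(root, children, is_solution, max_depth=10):
    # exhaust each shallow layer before paying for a deeper search
    for limit in range(max_depth + 1):
        found = depth_limited(root, children, is_solution, limit)
        if found is not None:
            return found
    return None
```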
This relates to Wason’s 2-4-6 problem, where if the true rule is very simple, like “increasing numbers,” subjects consistently test much more complex models before they think to check the simplest ones.
This is of course because they have the reasonable expectation that the human is more likely to make up such rules, but that’s kinda the point: we’re biased to think of solutions in the human range.
Limiting case analysis is when you set one or more variables of the object you’re analysing to their extreme values. This may give rise to limiting cases that are easier to analyse and could give you greater insights about the more general thing. It assumes away an entire dimension of variability, and may therefore be easier to reason about. For example, thinking about low-bandwidth oracles (e.g. ZFP oracle) with cleverly restrained outputs may lead to general insights that could help in a wider range of cases. They’re like toy problems.
“The art of doing mathematics consists in finding that special case which contains all the germs of generality.” — David Hilbert
Multiplex case analysis is sorta the opposite: you make as few assumptions as possible about one or more variables/dimensions of the problem while reasoning about it. While it leaves open more possibilities, it could also make the object itself more featureless, with fewer patterns to track, and therefore easier to play with in your working memory.
One thing to realise is that it constrains the search space for cheats, because your cheat now has to be invariant to a greater space of scenarios. This might make the search easier (smaller search space), but it also requires a more powerfwl or a more perceptive/adaptive cheat. It may make it easier to explore nodes at the base of the search tree, where discoveries or eliminations could be of higher value.
This can be very usefwl for extricating yourself from a stuck perspective. When you have a specific problem, a problem with a given level of entropy, your brain tends to get stuck searching for solutions in a domain that matches the entropy of the problem. (Speculative claim.) It relates to one of Tversky’s experiments (I have not vetted this), where subjects were told to iteratively bet on a binary outcome (A or B), where P(A)=0.7. They got 2 money for a correct bet and 0 for an incorrect one. Subjects tended to bet on A with a frequency that matched the frequency of the outcome, whereas the highest-EV strategy is to always bet on A.
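To spell out the gap (assuming the payoffs as stated, per round):

$$\mathbb{E}[\text{probability matching}] = 2\,(0.7 \cdot 0.7 + 0.3 \cdot 0.3) = 1.16 \;<\; \mathbb{E}[\text{always A}] = 2 \cdot 0.7 = 1.4$$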
This also relates to the Inventor’s Paradox.
“The more ambitious plan may have more chances of success […] provided it is not based on a mere pretension but on some vision of the things beyond those immediately present.” ‒ Pólya
Consider the problem of adding up all the numbers from 1 to 99. You could attack this by grinding through 98 steps of addition like so:

$$1 + 2 + 3 + \dots + 98 + 99$$

Or you could take a step back and find a more general problem-solving technique (an a-cheat). Ask yourself, how do you solve all 1-iterative addition problems? You could rearrange it as:

$$(1 + 99) + (2 + 98) + \dots + (49 + 51) + 50 = 49 \cdot 100 + 50 = 4950$$

To land on this, you likely went through the realisation that you could solve any such series with $\lfloor n/2 \rfloor \cdot (a_1 + a_n)$ and add the middle term $a_{(n+1)/2}$ if $n$ is odd.
The point being that sometimes it’s easier to solve “harder” problems. This could be seen as, among other things, an argument for worst-case alignment.
How do you account for the fact that the impact of a particular contribution to object-level alignment research can compound over time?
Let’s say I have a technical alignment idea now that is both hard to learn and very usefwl, such that every recipient of it does alignment research a little more efficiently. But it takes time before that idea disseminates across the community.
At first, only a few people bother to learn it sufficiently to understand that it’s valuable. But every person that does so adds to the total strength of the signal that tells the rest of the community that they should prioritise learning this.
Not sure if this is the right framework, but let’s say that researchers will only bother learning it if the strength of the signal hits their person-specific threshold for prioritising it.
Researchers are normally distributed (or something like it) over threshold height, and the strength of the signal starts out below the peak of the distribution.
Then (under some assumptions about the strength of individual signals and the distribution of threshold height), every learner that adds to the signal will, at first, attract more than one learner that adds to the signal, until the signal passes the peak of the distribution and the idea reaches satiation/fixation in the community.
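Here’s that model as a toy simulation (all parameters invented, purely to show the shape of the dynamic):

```python
import random

def adoption_curve(n=1000, mean_threshold=50, sd=30, seed=0):
    rng = random.Random(seed)
    # researcher i learns the idea once the number of current learners
    # exceeds their personal threshold; thresholds ~ Normal, clipped at 0
    thresholds = [max(0.0, rng.gauss(mean_threshold, sd)) for _ in range(n)]
    learners, history = 1, [1]  # one originator seeds the signal
    while True:
        new_total = sum(1 for t in thresholds if t <= learners)
        if new_total <= learners:  # the signal stopped recruiting: satiation
            return history
        learners = new_total
        history.append(learners)

print(adoption_curve())  # slow start, then each learner recruits more than
                         # one successor, then saturation
```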
If something like the above model is correct, then the impact of alignment research plausibly goes down over time.
But the same is true of a lot of time-buying work (like outreach). I don’t know how to balance this, but I am now a little more skeptical of the relative value of buying time.
Importantly, this is not the same as “outreach”. Strong technical alignment ideas are most likely illegible to almost everyone outside the community, so the idea doesn’t increase the number of people working on alignment.
Would be cool if LessWrong hosted subforums/bubbles/research-groups for anyone who wanted to start one and invite their friends. You would have the ability to write a post only to your bubble (visible on your bubble’s frontpage or a private filter to the main frontpage) or choose to crosspost it to main as well. Having the bubbles be on LW provides them a little prestige boost and could stimulate some folk to initiate new research covens for alignment or whatever (or *cough* social epistemology research bubble maybe).
You could also have the option to filter karma so you only see the karma assigned by people in your bubble. Or, just like you can subscribe to get notified when people post, you could “subscribe” to prioritise their karma too. You could make a custom karma-filter individual to you by subscribing to people or groups whose opinions you trust. And the individual-filtered karma could be transitive as well, according to some parameters you set yourself—similar to plex&co’s EigenTrust project, except it’d be EigenKarma. There’s more cool stuff here, but I’m probably never going to actually finish a post about it, so better to suggest it briefly to someone than not suggest it at all.
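For what it’s worth, here’s roughly what I mean by transitive, individually-filtered karma, as a sketch (it amounts to personalized PageRank over the upvote graph; names and numbers invented):

```python
def eigenkarma(upvotes, me, damping=0.85, iters=50):
    """Personalized PageRank over the upvote graph, seeded at `me`."""
    users = sorted(set(upvotes) | {v for row in upvotes.values() for v in row})
    scores = {u: float(u == me) for u in users}
    for _ in range(iters):
        new = {u: (1 - damping) * float(u == me) for u in users}
        for u, row in upvotes.items():
            total = sum(row.values())
            for v, weight in row.items():
                # trust flows along upvotes, proportional to their weight
                new[v] += damping * scores[u] * weight / total
        scores = new
    return scores

votes = {"me": {"alice": 3, "bob": 1}, "alice": {"carol": 2}}
print(eigenkarma(votes, "me"))  # carol gets some karma-from-my-POV via alice
```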
OK, done daydreaming. Back to work.
“I can move my mind so it is as though I’ve never seen a water bottle before”
I liken this to one of my favourite concepts, shoshin—”a beginner’s mind”. Entering a state of shoshin requires perceptual dexterity.
One of the problems it tries to overcome, and which you describe in different words, is the Einstellung effect—when your perception of a problem is stuck in some way. And that’s one of the reasons perceptual dexterity is so important in original research (and especially math & philosophy).
Kinda surprised you didn’t mention purpose-tracking for while you’re trying to do a thing (any thing). Arguably the most important skill I acquired from the Sequences, and that’s a high bar.