Yes And is an improv technique where you keep the energy in a scene alive by going with the other person's suggestion and adding more to it. "A: Wow, is that your pet monkey? B: Yes, and he's also my doctor!"
Yes And is generative (it creates a lot of output), as opposed to Hmm No, which is critical (it distills output).
A lot of the Sequences is Hmm No.
It’s not that Hmm No is wrong, it’s that it cuts off future paths down the Yes And thought-stream.
If there's a critical error at the beginning of a thought that will undermine everything else, then it makes sense to Hmm No (we don't want to spend a bunch of energy on something that will be fundamentally unsound). But if the later parts of the thought-stream are not closely dependent on the beginning, or if only part of the stream gets cut off, then you've lost a lot of potential value that could've been generated by the Yes And.
In conversation, Yes And is much more fun, which might be why the Sequences are important as a corrective (yeah, look, it's not fun to remember about biases, but they exist and you should model/include them).
Write drunk, edit sober. Yes And drunk, Hmm No in the morning.
bgold
Solving Math Problems by Relay
Announcing FAR Labs, an AI safety coworking space
2020′s Prediction Thread
[Question] What price would you pay for the RadVac Vaccine and why?
Protectionism will Slow the Deployment of AI
[Question] What’s the best approach to curating a newsfeed to maximize useful contrasting POV?
Running Effective Structured Forecasting Sessions
[Question] Do bond yield curve inversions really indicate there is likely to be a recession?
+1 for noting the mistake, and for noting the importance of being bold, asking questions, and sharing models even when you're uncertain.
Your use of the Epistemic Status tag—which I think /u/gwern pioneered?—seems good for balancing the value of sharing models against the risk of polluting the "idea space" with potentially misleading/untrue things.
[Link] John Carmack working on AGI
Jackpot! An AI Vignette
I have a cold, which reminded me that I want fashionable face masks to catch on so that I can wear them all the time in cold-and-flu season without accruing weirdness points.
so shiny. It’s like, it’s begging to be pressed.
It seems true that there are a lot of ways to utilize forecasts. In general, forecasting tends to have an implicit and unstated connection to the decision-making process—I think that has to do with the nature of operationalization ("a forecast needs to be on a very specific thing") and with the fact that much of the popular literature on forecasting has come from business literature (e.g. How to Measure Anything).
That being said, I think action-guidingness is still the correct bar to meet for evaluating the effect it has on the EA community. I would bite the bullet and say blogs should also be held to this standard, as should research literature. An important question for an EA blog—say, LW :)—is what positive decisions it's creating (yes, there are many other good things about having a central hub, but if the quality of intellectual content is part of it, that should be trackable).
If in aggregate many forecasts can produce the same type of guidance as many good blog posts, or better, that would be really positive.
The larger scientific question was related to Factored Cognition, and getting a sense of the difficulty of solving problems through this type of "collaborative crowdsourcing". The hope was that running this experiment would lead to insights that could then inform the direction of future experiments, in the way that you might fingertip-feel your way around an unknown space to get a handle on where to go next. For example, if it turned out to be easy for groups to execute this type of problem solving, we might push ahead with competitions between teams to develop the best strategies for context-free problem solving.
In that regard it didn't turn out to be particularly informative: it wasn't easy for the groups to solve the math problems, and it's unclear whether that's because of the problems selected, the team compositions, the software, etc. So re: the larger scientific question, I don't think there's much to conclude.
But personally I felt that by watching relay participants I gained a lot of UX intuitions around what type of software design and strategy design is necessary for factored strategies—what I broadly think of as problem-solving strategies that rely upon decomposition—to work. Two that immediately come to mind:
Create software design patterns that allow the user to hide/reveal information in intuitive ways. It was difficult, when thrown into a huge problem doc with little context, to know where to focus. I wanted a way for the previous user to show me only the info I needed. For example, the way Workflowy / Roam Research bullet points allow you to hide unneeded details, and how clicking on a bullet point brings you into an entirely new context.
When designing strategies, try focusing on the return signature. When coming up with new strategies for solving relay problems, at first it was entirely free-form: I as a user would jump in, try pushing the problem as far as I could, and leave haphazard notes in the doc. Over time we developed more complex shorthand and shared strategies for solving a problem. One heuristic I now use when developing decomposition-based problem-solving strategies is to prioritize thinking about what each sub-part of the strategy will return to the top caller. That clarifies the interface, simplifies what the person working on the sub-strategy needs to do, and promotes composability.
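The return-signature heuristic can be sketched in code. This is a minimal, hypothetical illustration (the names `SimplifyResult`, `simplify`, and `solve` are mine, not from the Relay experiments): each sub-strategy declares up front what it will hand back to the top caller, so the caller depends only on that declared interface, not on how the sub-work was done.

```python
from dataclasses import dataclass

# Hypothetical sketch: declare the "return signature" of a sub-strategy
# before anyone works on how it gets computed.

@dataclass
class SimplifyResult:
    """What the 'simplify the expression' sub-strategy hands back."""
    simplified: str   # the rewritten expression
    steps: list[str]  # notes the next relay participant can audit

def simplify(expression: str) -> SimplifyResult:
    # Toy implementation: drop redundant '+0' terms.
    steps = []
    out = expression.replace("+0", "")
    if out != expression:
        steps.append(f"dropped '+0': {expression} -> {out}")
    return SimplifyResult(simplified=out, steps=steps)

def solve(expression: str) -> str:
    # The top-level caller relies only on the declared return type,
    # not on the internals of the sub-strategy.
    return simplify(expression).simplified

print(solve("x+0+y"))  # -> x+y
```

The point is the interface, not the toy math: a later participant could swap in an entirely different `simplify` and the top caller would be unaffected, which is what makes decomposed strategies composable.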
These ideas are helpful because—I posit—we’re faced with Relay Game like problems all the time. When I work on a project, leave it for a week, and come back, I think I’m engaging in a relay between past Ben, present Ben, and future Ben. Some of these ideas informed my design of templates for collaborative group forecasting.
Off the cuff:
Temperance movement in the United States
Much of the radical left movement from the 60s to the 70s (ex. Students for a Democratic Society → Weatherman)
Georgism
The Shakers
Another useful line of inquiry might be to factor out what success for a social movement looks like, find social movements that "succeeded", and see what happened to the social movements they were competing against.
Why do I not always have conscious access to my inner parts? Why, when speaking with authority figures, might I have a sudden sense of blankness?
Recently I've been thinking about this reaction in the frame of 'legibility', à la Seeing Like a State. States would impose organizational structures on societies that were easy to see and control—they made the society more legible—to the actors who ran the state, but these organizational structures were bad for the people in the society.
For example, census data, standardized weights and measures, and uniform languages make it easier to tax and control the population. [Wikipedia]
I’m toying with applying this concept across the stack.
If you have an existing model of people being made up of parts [Kaj's articles], I think there's a similar thing happening. I notice I'm angry but can't quite tell why or get a conceptual handle on it—if it were fully legible and accessible to the conscious mind, then it would be much easier to apply pressure and control that 'part', regardless of whether the control I am exerting is good. So instead, it remains illegible.
A level up, in a small group conversation, I notice I feel missed, like I’m not being heard in fullness, but someone else directly asks me about my model and I draw a blank, like I can’t access this model or share it. If my model were legible, someone else would get more access to it and be able to control it/point out its flaws. That might be good or it might be bad, but if it’s illegible it can’t be “coerced”/”mistaken” by others.
One more level up: I initially went down this track of thinking for a few reasons, one of which was wondering why prediction/forecasting systems are so hard to adopt within organizations. Operationalization of terms is difficult, and it's hard to get a question precise enough that everyone can agree on it; it's also very 'unfun' to have uncertain terms (people are much more likely to not predict than to predict with huge uncertainty). I think the legibility concept comes into play—I am reluctant to put out a term that is part of my model of the world and attach real points/weight to it, because now there's a "legible leverage point" on me.
I hold this pretty loosely, but there’s something here that rings true and is similar to an observation Robin Hanson made around why people seem to trust human decision makers more than hard standards.
This concept of personal legibility seems associated with the concept of bucket errors, in that theoretically sharing a model and acting on the model are distinct actions—except I expect legibility concerns are often highly warranted (things might be out to get you).