I’ve done similarly. It’s actually remarkable how little time it takes to overview the history of breakthroughs in a sub-field, or all the political and military leaders of an obscure country during a particular era, or the history of laws and regulations of a particular field.
Question to muse over —
Given how inexpensive and useful it is to do this, why do so few people do it?
Apprenticeship seems promising to me. It’s died out in most of the world, but there are still formal apprenticeship programs in Germany that seem to work pretty well.
Also, it’s a surprisingly common position among very successful people I know that young people would benefit from 2 years of national service after high school. It wouldn’t have to be military service — it could be environmental conservation, poverty relief, Peace Corps type activities, etc.
We actually have reasonable control groups for this, both in countries with mandatory national service and in the Mormon Church, where the majority of members go on a 2-year mission. I haven’t looked at hard numbers or anything, but my sense is that both countries with national service and Mormons tend to be more successful than similar cohorts that don’t undergo such experiences.
To one small point:
>After all there’s a surprising lack of studies (aka 0 that I could find, and I dug for them a lot) with titles around the lines of “Economic value of university degree when controlling for IQ, time lost and student debt”.
I’m reminded of Upton Sinclair’s quote,
“It is difficult to get a man to understand something when his salary depends upon his not understanding it.”
Just tracing the edges of hard problems is huge progress to solving them. Respect.
First, small technical feedback — do you think there’s some classification of these factors, however narrow or broad, that could be sub-headlines?
For instance, #24 and #29 seem to be similar things:
#24 As the overall maze level rises, mazes gain a competitive advantage over non-mazes.
#29 As maze levels rise, mazes take control of more and more of an economy and people’s lives.
As do #27 and #28:
#27: Mazes have reason to and do obscure that they are mazes, and to obscure the nature of mazes and maze behaviors. This allows them to avoid being attacked or shunned by those who retain enough conventional not-reversed values that they would recoil in horror from such behaviors if they understood them, and potentially fight back against mazes or to lower maze levels. The maze embracing individuals also take advantage of those who do not know of the maze nature. It is easy to see why the organizations described in Moral Mazes would prefer people not read the book Moral Mazes.
#28: Simultaneously with pretending to the outside not to be mazes, those within them will claim if challenged that everybody knows they are mazes and how mazes work.
While it’s hard to pin down exactly what the categories would be, it seems that the first cluster is about something like feedback loops and the second cluster is about something like deceit, self-deceit, etc.
The categories could even be very broad like “Inherent Biases”, “Incentives and Rewards”, “Feedback Loops”, etc. Or could be narrower. But it’s difficult to follow a list of 37 propositions, some of which are relatively simple and self-contained and others of which are syntheses, conclusions, and extrapolations of previous points.
Ok, second thought —
This is all largely written from the point of view of how bad these things are as a participant. I bet it’d be interesting to flip the viewpoint and analysis and explore it from the view of a leader/executive/etc who was trying to forestall these effects.
For instance, your #4 seems important:
#4: Middle management performance is inherently difficult to assess. Maze behaviors systematically compound this problem. They strip away points of differentiation beyond loyalty to the maze and willingness to sacrifice one’s self on its behalf, plus politics. Information and records are destroyed. Belief in the possibility of differentiation in skill level, or of object-level value creation, is destroyed.
Ok, granted middle management performance is inherently difficult to assess.
So uhh, how do we solve that? Thoughts? Pointing out that this is a crummy equilibrium can certainly help inspire people to notice and avoid participating in it, but y’know, we’ve got institutions and we’ll probably have institutions for forever-ish, coordination is hard, etc etc, so do you have thoughts on surmounting the technical problems here? Not the runaway feedback loops — or those, too, sure — but the inherent hard problem of assessing middle management performance?
So if an arms race is good or not basically depends on if the “good guys” are going to win (and remain good guys).
Quick thought — it’s not apples and apples, but it might be worth investigating which fields hegemony works well in, and which fields checks and balances work well in.
There’s also the question with AGI of what we’re more scared of — one country or organization dominating the world, or an early pioneer in AGI doing a lot of damage by accident?
#2 scares me more than #1. You need to create exactly one resource-commandeering positive feedback loop without an off switch to destroy the world, among other things.
Lots of great comments already so not sure if this will get seen, but a couple possibly useful points —
Metaphors We Live By by George Lakoff is worth a skim — https://en.wikipedia.org/wiki/Metaphors_We_Live_By
Then I think Wittgenstein’s Tractatus is good, but his war diaries are even better — http://www.wittgensteinchronology.com/7.html
“[Wittgenstein] sketches two people, A and B, swordfighting, and explains how this sketch might assert ‘A is fencing with B’ by virtue of one stick-figure representing A and the other representing B. In this picture-writing form, the proposition can be true or false, and its sense is independent of its truth or falsehood. LW declares that ‘It must be possible to demonstrate everything essential by considering this case’.”
Lakoff illuminates some common metaphors — for example, a positive-valence mood in American English is often “up” and a negative-valence mood in American English is often “down.”
If you combine Lakoff and Wittgenstein: using an accepted metaphor from your culture (“How are you?” “I’m flying today”) paints a picture for the other person that corresponds to your mood (they hear the emphasized “flying” and don’t imagine you literally flying, but rather in a high-positive-valence mood). At that point, you’re in the realm of true.
There’s independently some value in investigating your metaphors, but if someone asks me “Hey, how’d that custom building project your neighbor was doing go?” and I answer “Man, it was a fuckin’ trainwreck” — you know what I’m saying: not only did the project fail, but it failed in a way that caused damage and hassle and was unaesthetic, even over and beyond what a normal “mere project failure” would be.
The value in metaphors, I think, is that you can get high information density with them. “Fuckin’ trainwreck” conveys a lot of information. The only denser formulation might be “Disaster” — but that’s also a metaphor if it wasn’t literally a disaster. Metaphors are sneaky in that way; we often don’t notice them — but they seem like a valid high-accuracy usage of language if deployed carefully.
(Tangentially: Is “deployed” there a metaphor? Thinking… thinking… yup. Lakoff’s book is worth skimming, we use a lot more metaphors than we realize...)
Lots of useful ideas here, thanks.
Did you play AI Dungeon yet, by chance?
Playing it was a bit of a revelation for me. It doesn’t have to get much better at all to obsolete the whole lower end of formulaic and derivative entertainment...
Multiple fascinating ideas here. Two thoughts:
1. Solo formulation → open to market mechanism?
Jumping to your point on Recursion — I imagine you could ask participants to (1) specify their premises, (2) specify their evidence for each premise, (3) put confidence numbers on given facts, and (4) put something like a “strength of causality” or “strength of inference” on causal mechanisms, which collectively would output their certainty.
In this case, you wouldn’t need to have two people who want to wager against each other, but rather anyone with a difference in confidence of a given fact or the (admittedly vague) “strength of causality” for how much a true-but-not-the-only-variable input affects a system.
Something along these lines might let you use the mechanism more as a market than an arbiter.
2. Discount rate?
After that, I imagine most people would want some discount rate to participate in this — I’m trying to figure out what odds I’d accept if I were 99% sure of a proposition and wagering against someone… I don’t think I’d lay 80:1 odds, even though it’s in theory a good bet, just because the sole fact that someone was willing to bet against me at such odds would be evidence I might well be wrong!
If anyone is participating in a thoughtful process along these lines and laying real money (or another valuable commodity, like computing power) against me, there’s probably a greater than 1-in-50 chance I made an error somewhere.
Of course, if the time for Alice and Bob to prepare arguments was sufficiently low, the Kelly-Criterion-style resource pool was sufficiently large, and there was sufficient liquidity to get regression to the mean on reasonable timeframes to reduce variance, then you’d be happy to play with small discounts if you were more right than not and reasonably well-calibrated.
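To make the 99%-sure, 80:1 arithmetic concrete, here’s a toy calculation. The function names are mine, it treats the wager as a simple binary bet, and it deliberately ignores the adverse-selection worry described above:

```python
def expected_value(p_win, win_amount, lose_amount):
    """Expected profit per bet: win win_amount with probability p_win,
    otherwise lose lose_amount."""
    return p_win * win_amount - (1 - p_win) * lose_amount

def kelly_fraction(p_win, net_odds):
    """Kelly-optimal fraction of bankroll to risk, where net_odds is
    profit per unit risked (laying 80 to win 1 -> net_odds = 1/80)."""
    return p_win - (1 - p_win) / net_odds

# Laying 80:1 while 99% confident: risk 80 units to win 1.
ev = expected_value(0.99, 1, 80)   # +0.19 units per 80 risked
f = kelly_fraction(0.99, 1 / 80)   # ~0.19 of bankroll
```

So the bet is positive-expectation on paper; the “discount rate” in the comment is precisely the adjustment for the chance that a counterparty’s eagerness to bet is itself evidence you’re wrong.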
Anyway — this is fascinating, lots of ideas here. Salut.
I just wanted to say this was a really fun read. I hadn’t considered the multiple ways people could get to the right or wrong answer.
I think this starts to make more sense if you realize that there’s a lot of organizations where a manager can’t make an outsized improvement in results but can do a lot of damage; in those places, selection effects are going to give you risk-averse conforming people.
But in places with very objective performance numbers — finance and sales in particular — there’s plenty of eccentric managers and leaders.
Same with tech and inventing, though eventually a lot of companies that were risk-seeking and innovative do drift to risk-averse and conforming. It’s admirable when organizations fight that off. I don’t have very many data points, but the managers I’ve met from Apple have all seemed noticeably brilliant and preserved their personal eccentricities, though there is a certain “Apple polish” in their way of speaking and grooming that seems to be almost de rigueur.
That’s probably not a bad standard to be expected to conform to, though, since it’s like, pretty cool.
Okay, one more — Grimes’s “We Appreciate Power” is an electro-pop song about artificial intelligence, simulation, and brain uploading among other things:
A lot of the kids that like it no doubt enjoy it for the rebellious countersignaling aspect of it, combined with a catchy beat.
But I like it on, I think, a different level than a 15 year old that’d like it. When I was 15, I listened to Rage Against the Machine — I had no idea what the heck RATM was talking about with Ireland and burning crosses or whatever, it was just, like, loud and rebellious and cool.
It’s not groundbreaking to say people can appreciate things on different levels, but I wonder how much my intellectual enjoyment of We Appreciate Power backpropagates into liking the beat, vocal range, tempo, etc more.
[Bridge: Grimes & HANA]
And if you long to never die
Baby, plug in, upload your mind
Come on, you’re not even alive
If you’re not backed up on a drive
And if you long to never die
Baby, plug in, upload your mind
Come on, you’re not even alive
If you’re not backed up, backed up on a drive
Relatedly — I used to find motorcycles swerving through traffic dangerous/ugly.
After I learned to ride a motorcycle, it (1) now is more predictable and seems less dangerous and (2) now seems beautiful/reasonable/cool rather than ugly/random/annoying.
Martial valor is another interesting one that people tend to find beautiful or ugly, and rarely if ever neutral.
I wonder if there’s some component of simulating yourself either participating in an environment or activity and imagining how you’d feel.
Deserts — though there’s counterintuitive things like them being cold at night — probably seem more tractable on how to navigate them than swamps.
I wonder if people see a patriotic rally and implicitly attempt to simulate “what the hell would I be doing if I was there, like, waving a flag around???” — and mentally encode it ugly. Vice-versa being at a spiritual retreat for people who’d enjoy a rally.
There’s quite likely some “implicitly mentally trying it on” going on, no?
You know what, I think LessWrong has collectively been worth more than $1,672 to me — especially after the re-launch. Heck, maybe even Petrov Day alone was. Incredibly insightful and potentially important.
I’d do this privately, but Eliezer wrote that story about how the pro-social people are too quiet and don’t announce it. So yeah, I’m in for $1,672. Obviously, I wouldn’t have done this if some knucklehead had nuked the site.
Now for the key question —
What kind of numbers do we need to put together to get another Ben Pace quality dev on the team? (And don’t tell us it’s priceless, people were willing to sell out your faith in humanity for less than the price of a MacBook Air! ;)
And yeah, mechanics for donating to LW specifically? Can follow up on email but I imagine it’d be good to have in this thread.
Edit: Before anyone suggests I donate to some highly-ranked charity: after I’d had some success in business, I was in the nonprofit world for years, always 100% volunteer, spent an immense number of hours both understanding the space and getting things done, and was reasonably effective, though not legendarily so or anything.

By my quick back-of-the-envelope math, I imagine any given large country’s State Department would have paid $50,000 to $100,000 to have Petrov Day happen successfully in such a public way. Large corporations — I’ve worked with a few — maybe double that range.

It was a really important thing, and while “budget for hiring developers on a site that facilitates discussion of rationality” has far more nebulous and hard-to-pin-down value than some very worthy projects, it’s first a threshold-break thing where a little more might produce much more results, and I think this site can be really important.

If I might suggest something, though, perhaps an 80/20 eng-driven growth plan for the site that prioritizes preserving quality and norms would also make sense? We should have 10x the people here. It’s very doable. I’m really busy but happy to help if I can. I think a lot of us would be happy to help make it happen if y’all would make it a little easier to know how. Something special is happening here.
Edit2: Okay, my donation is now conditional on banning whoever downvoted this ;) - just kidding. But man, what a strange mix of really great people and total idiots here huh? “I liked this a lot and I’d like to give money.” WTF who does this guy think he is. Oh, me? Just someone trying to support the really fucking cool thing that’s happening and asking for the logistics of doing so to be posted in case anyone else thinks it’s been really cool and great for their life.
What an incredible experience.
Felt like I got to understand myself a bit better, got exposed to a variety of arguments I never would have anticipated, forced to clarify my own thoughts and implications, did some math, did some sanity-check math on “what’s the value of destroying some of Ben Pace’s faith in humanity” (higher than any reasonable dollar amount alone, incidentally — and that’s just one variable)… and yeah, this was really cool and legit innovative.
We should make sure the word about this gets out more.
We need more people on LessWrong, and more stuff like this.
People thinking this is just a chat board should think a little bigger. There’s some real visionary thinking going on here, and an exceptionally smart and thoughtful community. I’m really grateful I got to see and participate in this. Thanks for all the great work — and for trusting me. Seriously. Y’all are aces.
(1) I want this too and would use it and participate more.
(2) Following logically from that, some sort of “Lists” feature like Twitter might be good, EX:
(“Friending” typically requires double confirmation; lists would seem much easier and less complex to implement. Perhaps lists, likewise, could be public or private.)
I’m actually not sure what you mean by “running down the stack.” Do you mean “when I get distracted I mentally review my whole stack, from most recent item added to most ancient item”?
Well, of course, it’s whatever works for you.
For a simple example, let’s say I’m (1) putting new sheets on my bed, and then (2) I get an incoming phone call, which results in me simultaneously needing to (3 and 4) send a calendar invite and email while still on the phone.
I’ll pick which of the cal invite or email I’m doing first. Let’s say I decide I’m sending the cal invite first.
(4) Send cal invite—done—off stack.
(3) Send email—done—off stack.
(2) Check whether anything else needs to be done before ending call, confirming, etc. If need to do another activity → add it to stack as new (3). If not, end call.
And here’s where the magic happens. I then,
(1) Go finish making the bed.
I’m not fanatic about it, but I won’t get a snack first or do anything significant until that’s done.
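The discipline in the example above is just a last-in-first-out stack. A minimal illustrative sketch (the task names come from the example; the list-as-stack code is mine, not anything prescribed in the comment):

```python
# Interruptions push onto the stack; finishing a task pops it,
# so you always resume the most recently interrupted item.
stack = []
stack.append("make the bed")          # (1) interrupted by a call...
stack.append("phone call")            # (2) ...which spawns two tasks
stack.append("send email")            # (3)
stack.append("send calendar invite")  # (4) chosen to do first

done = []
while stack:
    done.append(stack.pop())  # always finish the newest item first

# The bed gets made last, only after every interruption is cleared.
```

The point of the structure is exactly the “magic” described: nothing ever falls off the bottom, and escaping into a new task just guarantees you’ll return to the old one.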
Or do you mean “when I get distracted, I ‘pop’ the next item/intention in the stack (the one that was added most recently), and execute that one next (as opposed to some random one)”?
This, yes. Emphasis added.
Less payoff to getting distracted? To being distractible?
Why is that? Because if you get distracted you have to complete the distraction?
Well, I can speculate on theory but I’ll just say empirically — it works for me.
But let’s speculate with an example.
You’re midway through cleaning your kitchen and you remember you needed to send some email.
If you don’t really wanna clean your kitchen deep down, you’re likely to wind up on email or Twitter or LessWrong instead.
Now that’s fine, if I see a second email I want to reply to, I’ll snipe that.
But at the end, I have to go finish the kitchen unless things have materially changed.
Knowing there’s no payoff in “escaping” is probably part of it. It probably shapes real-time cost/benefit tradeoffs somewhat. It means less cognitive processing time needed to pick next task. It makes one pick tasks slightly more carefully knowing you’ll finish them. It leads to single-tasking and focus.
Umm, probably a lot more. I’m not fanatic about it, I’ll shift gears if it’s relevant but I don’t like to do so.
Do we have any lawyers here at LessWrong?
Would it be possible to legitimately write some sort of standardized financial instrument that functions as a loan with no repayment date, with options for conversion into charitable donation?
Speculations (non-lawyer here) —
(1) Maybe there’s something equivalent to a SAFE Note (invented by YCombinator to simplify and standardize startup financing in a way friendly to both parties). It seems like a decent jumping-off point:
(2) On the other hand, there’s a variety of mechanisms where you can’t just do clever stuff. And there’s a variety of arcane rules. You can, I think, donate property that’s appreciated in value without paying capital gains first for instance, but maybe there’s specific definitions around the timing of cash flows, donations, and deductions?
(3) On the other-other hand, seems like American tax policy in general is very amenable to people supporting worthy charitable causes.
(4) On the other-other-other-hand, you’d have to make sure it’s not game-able and doesn’t result in strange second-order consequences.
(5) And finally, if it’s ambiguous, it seems like the type of thing where it’d be possible to get some sort of preliminary ruling from the relevant authorities. (Presumably the Treasury/IRS, but maybe someone else.)
Seems like a good idea though? If someone donates $10k a year for 5 years, it seems reasonable that they’d be able to write off that $50k at the end of the 5 years.
You guys are total heroes. Full stop. In the 1841 “On Heroes” sense of the word, which is actually pretty well-defined. (Good book, btw.)