niplav
Range and Forecasting Accuracy
There are further shards of a track record strewn across the internet:
The bets registry shows two bets lost by Eliezer Yudkowsky, none won
The public figure profile on Metaculus has no resolved predictions yet
True. I just think there’s so little activity here on the tags portal that marginally less caution is better than marginally more caution.
Also strong-upvoted your tags contribution :-)
It probably took me less time to create the tag than it took you to write that comment ;-)
It’s relevant to note that Legg is also doing a bunch of safety research, much of it listed here; I don’t see why it should be obvious that he’s making a less respectable attempt to solve the problem than other alignment researchers. (He’s working on the causal incentives framework, and on stuff related to avoiding wireheading.)
Also, wasn’t DeepMind an early attempt at gathering researchers to be able to coordinate against arms races?
I’ll take this as my chance to ask whether the Alignment Newsletter Podcast is on hold or finished? I don’t think there was a public announcement of hibernation or termination.
The documentation says it’s using the Levenberg-Marquardt algorithm, which, as far as I understand, doesn’t make any assumptions about the data, but only converges towards local minima of the least-squares distance between the dataset and the output of the function.
(I don’t think this will matter much for me in practice, though).
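For concreteness, here’s a minimal sketch of the kind of call I have in mind; the model and data are made up purely for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up data: an exponential decay plus a bit of noise.
rng = np.random.default_rng(0)
xdata = np.linspace(0, 4, 50)
ydata = 2.5 * np.exp(-1.3 * xdata) + 0.1 * rng.normal(size=xdata.size)

def model(x, a, b):
    # The parametric model; curve_fit minimizes the sum of squared
    # residuals between model(xdata, a, b) and ydata.
    return a * np.exp(-b * x)

# Without bounds, curve_fit defaults to Levenberg-Marquardt ('lm'),
# which only finds a local least-squares minimum near the initial guess p0.
params, pcov = curve_fit(model, xdata, ydata, p0=(1.0, 1.0))
print(params)                  # fitted (a, b)
print(np.sqrt(np.diag(pcov)))  # rough 1-sigma parameter uncertainties
```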
scipy.optimize.curve_fit Is Awesome
Thanks a lot!
In other words, is there a meta-review on meditation research? (Then we should ask Scott Alexander to review it.)
Not sure whether that’s what you’re looking for, but I like the Wikipedia pages on effects of meditation, brain activity and meditation, meditation and pain, and mechanisms of mindfulness meditation (clearly just a starting point, but asking for a literature review of research on meditation sounds like asking for a literature review of the research on foreign aid: nearly impossible to be both exhaustive and in-depth, just because of the sheer scope of the field).
Hi,
that is super fascinating. Would you be willing to share your data on this? I have some existing interest in exactly that question.
At the risk of being unhelpful: it’s being proactive about global risks now.
There was also H. G. Wells, who wrote about GCRs from nuclear weapons. See more in this comment.
It looks like longtermism and concern over AI risk are going to become topics of the culture war like everything else (cf. Timnit Gebru on longtermism), with the “left” (especially the “social/woke left”) developing an antipathy against longtermism & concern about x-risk from AGI.
That’s a shame, because longtermism & concerns about AGI x-risk per se have ~0 conflict with “social/woke left” values, and potentially a lot of overlap (from moral trade & compute governance (regulating big companies! Preventing the CO₂ emissions of large models!) to more abstract “caring about future generations”). But the coalitional affiliations are too strong: something Elon Musk & techbros care about can’t be good.
Interested in 3.
Be wary of status. Be wary of pursuing status (directly or indirectly via a thing you say you want), and be wary of assuming things about others because of a certain status or lack thereof. Status is one class of finite games, which are played to win and induce a scarcity mindset. Avoid playing these types of games.
Counterpoint: Status can be very useful: it’s often insanely motivating, feels very fun, and is impossible to root out of humans, so it’s better to redirect status towards good things and away from bad things. I don’t think you can do without status, so use it like Hercules used the two rivers to clean out the Augean stables.
For me, the implication of standing at the fulcrum of human history is to…read a lot of textbooks and think about hairy computer science problems.
That seems an odd enough conclusion to make me quite distinct from most other people in human history.
If the conclusion were “go over to those people, hit them on the head with a big rock, and take their women & children as slaves” or “acquire a lot of power”, I’d be way more careful.
Looking over some of my notes from the book, I should perhaps have written “much more than I previously thought”: just learning that there is a pretty hefty debate in the theory of molecular evolution about whether selection plays a role at all made me update towards placing less importance on selection (the existence of an academic debate on the topic makes it at least not an open-and-shut case).
But I’m still very much working through the book, so this belief might be overturned quickly or refined once I re-read the chapter.
Datapoint: I didn’t have such an experience before deciding to sign up.
These are probably not my most significant updates, but tracking all changes to beliefs creates significant overhead, so I don’t remember the most important ones. Often I’m unsure whether something counts as a proper update versus just learning something new or refining a view, but whatever, here are two examples:
Reading the excellent blog Traditions of Conflict, I have become more confused about how egalitarian hunter-gatherer societies really are. The blog describes instances of male cults controlling resources in tribes, a high prevalence of arranged polygynous marriage, and the absence of matriarchies, which doesn’t fit well with the degree of egalitarianism I previously believed those societies had. Confusing, perhaps due to the sampling bias of the writer (who is mostly interested in this phenomenon of male dominance, neglecting more egalitarian societies). However, checking Wikipedia confirms the suspicious absence of matriarchies (and if hunter-gatherers were basically egalitarian, we should, by chance alone, see roughly as many matriarchal as patriarchal societies).
Odd.
Another decent update is on the importance of selection in evolution: reading Gillespie on population genetics has updated me towards believing that random mutation and drift are much more important than selection.
Apparently some major changes to Range and Forecasting Accuracy caused it to be re-submitted.
Quick summary:
Code was rewritten in Python
Moved the Results section to the top and rewrote it; it should be easier to understand now
Expanded the illustrative example
Added logistic/exponential curve fits to all sections, which make it possible to extrapolate Brier scores to longer-range questions (under certain assumptions; see the sketch after this list)
This allows estimating how far into the future we can see before our forecasts become uniformly random
Unfortunately, there is no single nice number I can give for our predictive horizon (...yet)
P-values in some places!
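Not the actual code from the post, just a minimal sketch of the idea with made-up numbers; the 0.25 ceiling is the Brier score of always predicting 50% on binary questions, and the “horizon” threshold of 0.24 is an arbitrary choice for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit, brentq
from scipy.special import expit

# Hypothetical (range in days, mean Brier score) pairs, for illustration only;
# the real analysis uses the resolved-question data from the post.
ranges = np.array([10, 50, 100, 300, 600, 1000, 2000], dtype=float)
briers = np.array([0.08, 0.10, 0.12, 0.15, 0.18, 0.20, 0.22])

def logistic(r, k, r0):
    # 0.25 * sigmoid(k * (r - r0)): saturates at 0.25, the Brier score
    # of a uniformly random (always-50%) forecast on binary questions.
    return 0.25 * expit(k * (r - r0))

params, _ = curve_fit(logistic, ranges, briers, p0=(0.001, 0.0), maxfev=10000)

# "Predictive horizon": the range at which the fitted curve gets close to
# 0.25, here taken (arbitrarily) as the point where it crosses 0.24.
horizon = brentq(lambda r: logistic(r, *params) - 0.24, 0.0, 1e6)
print(f"Estimated predictive horizon: ~{horizon:.0f} days (under these assumptions)")
```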