Maybe the V1 dopamine receptors are simply useless evolutionary leftovers (perhaps it’s easier from a developmental perspective)
A taxonomy of objections to AI Risk from the paper:
What sort of epistemic infrastructure do you think is importantly missing for the alignment research community?
What’s your take on Elicit?
What are the best examples of progress in AI Safety research that we think have actually reduced x-risk?
(Instead of operationalizing this explicitly, I’ll note that the motivation is to understand whether doing more work toward technical AI Safety research is directly beneficial, as opposed to mostly irrelevant or only having second-order effects.)
The (meta-)field of Digital Humanities is fairly new. TODO: Estimating its success and its challenges would help me form a stronger opinion on this matter.
One project which implements something like this is ‘Circles’. I remember it was on hold several years ago but seems to be running now—link
I think that generally, skills (including metacognitive skills) don’t transfer that well between different domains and it’s best to practice directly. However, games also give one better feedback loops and easier access to mentoring, so the room for improvement might be larger.
A meta-analysis on transfer from video games to cognitive abilities saw small or null gains:
The lack of skill generalization from one domain to different ones—that is, far transfer—has been documented in various fields of research such as working memory training, music, brain training, and chess. Video game training is another activity that has been claimed by many researchers to foster a broad range of cognitive abilities such as visual processing, attention, spatial ability, and cognitive control. We tested these claims with three random-effects meta-analytic models. The first meta-analysis (k = 310) examined the correlation between video game skill and cognitive ability. The second meta-analysis (k = 315) dealt with the differences between video game players and nonplayers in cognitive ability. The third meta-analysis (k = 359) investigated the effects of video game training on participants’ cognitive ability. Small or null overall effect sizes were found in all three models. These outcomes show that overall cognitive ability and video game skill are only weakly related. Importantly, we found no evidence of a causal relationship between playing video games and enhanced cognitive ability. Video game training thus represents no exception to the general difficulty of obtaining far transfer.
However, a review of studies on chess instruction does find some gains, and gains that seem to grow with more instruction time, but it’s a smaller survey of inadequately designed studies.
Thanks for the concrete examples! Do you have relevant references for these at hand? I could imagine that there might be better ways to solve these issues, or that they somehow mostly cancel out or are relatively minor problems, so I’m interested to see relevant arguments and case studies.
I don’t think that operationalizing exactly what I mean by a consensus would help a lot. My goal here is really to understand how certain I should be that rent control is a bad policy (and what the important cases are where it might actually be a good policy, such as the examples ChristianKl gave below).
That’s right, and a poor framing on my part.
I am interested in a consensus among academic economists, or in economic arguments for rent control. Specifically because I’m mostly interested in utilitarian reasoning, but I’d also be curious about what other disciplines have to say.
This sounds like an amazing project and I find it very motivating. Especially the questions around how we’d like future epistemics to be and prioritizing different tools/training.
As I’m sure you are aware, there is a wide academic literature on many related aspects, including the formalization of rationality, descriptive analysis of individual and group epistemics, and the design of training programs. If I understand you correctly, a GPI analog here would be something like an interdisciplinary research center that attempts to find general frameworks for comparing interventions that aim to improve epistemics, and to standardize a goal of “epistemic progress”, starting with the most initially promising subdomains?
I think that M only prints something after converging with Adv, and that Adv does not print anything directly to H
Abram, did you reply to that crux somewhere?
I agree that hierarchy can be used only sparingly and still be very helpful. Perhaps just nesting under the core tags, or something similar.
On particular posts where the hierarchy does not seem to hold, people can still downvote the parent tag. That is annoying, but may reduce work overall.
Also, navigating up/down with the arrow keys and pressing Enter should allow tags to be chosen with the keyboard only.
1. More people would probably rank tags if it could be done directly through the tag icon instead of using the pop-up window.
2. When searching for new tags, I’d probably like them sorted by relevance (say, some preference for: being a prefix, being a popular tag, alphabetical ordering).
3. When browsing through all posts tagged with some tag, I’d maybe prefer to see higher-karma posts first, or to have karma factored into the ordering.
4. It might be easier to have a hierarchy of tags, so that voting for Value Learning, say, also votes for AI Alignment.
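A minimal sketch of what the ordering in point 2 and the vote propagation in point 4 might look like; the tag names, popularity counts, and function names here are all invented for illustration, not taken from any actual tagging implementation:

```python
def rank_tags(query, tags):
    """Sort (name, popularity) pairs for a tag search:
    prefix matches first, then by popularity (descending),
    then alphabetically (point 2)."""
    q = query.lower()
    return sorted(
        tags,
        key=lambda t: (not t[0].lower().startswith(q), -t[1], t[0].lower()),
    )

def propagate_vote(tag, parents):
    """Yield the tag and all its ancestors, so that a vote on a
    child tag also counts for its parent tags (point 4)."""
    while tag is not None:
        yield tag
        tag = parents.get(tag)

# Hypothetical data for illustration only.
tags = [("AI Alignment", 120), ("AI Risk", 80), ("Aging", 15), ("Value Learning", 40)]
parents = {"Value Learning": "AI Alignment", "AI Alignment": None}

rank_tags("ai", tags)
# → prefix matches ("AI Alignment", "AI Risk") first, then the rest by popularity

list(propagate_vote("Value Learning", parents))
# → ["Value Learning", "AI Alignment"]
```

The tuple sort key works because Python compares tuples element by element, so each criterion only breaks ties left by the previous one.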