Multiple large monitors, for programming.
Waterproof paper in the shower, for collecting thoughts and making a morning todo list.
Email filters and Priority Inbox, to prevent spurious interruptions while keeping enough trust that urgent things will generate notifications, so that I don’t feel compelled to check too often.
USB batteries for recharging phones: one to carry around, one at each charging spot for quick-swapping.
Yep, one of us edited it to fix the link. Added a GitHub issue for dealing with relative links in RSS in general: https://github.com/LessWrong2/Lesswrong2/issues/2434.
Note that this would be a very non-idiomatic way to use jQuery. More typical architectures don’t do client-side templating; they do server-side rendering and client-side incremental mutation.
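To make the contrast concrete, here’s a minimal sketch (TypeScript with jQuery; the `/comments/fragment` endpoint and `onCommentPosted` handler are hypothetical, just for illustration). The server renders the page and any new fragments as HTML; the client only splices fragments into the existing DOM, rather than expanding a template client-side:

```typescript
import $ from "jquery";

// Hypothetical endpoint: returns a server-rendered <li> of HTML for one comment.
// The page itself was also server-rendered and already contains <ul id="comments">.
function onCommentPosted(commentId: string): void {
  $.get(`/comments/fragment/${commentId}`, (html: string) => {
    // Incremental client-side mutation: splice the server-rendered
    // fragment in, instead of re-rendering from a client-side template.
    $("#comments").append(html);
  });
}
```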
I’m kinda confused about the relation between cryptography people and security mindset. Looking at the major cryptographic algorithm classes (hashing, symmetric-key, asymmetric-key), it seems pretty obvious that the correct standard algorithm in each class is probably a compound algorithm: hash by xor’ing the results of several highly-dissimilar hash functions, etc., so that a mathematical advance which breaks one algorithm doesn’t break the overall security of the system. But I don’t see anyone doing this in practice, and also don’t see signs of a debate on the topic. That makes me think that, to the extent cryptographers have security mindset, it’s either being defeated by political processes in the translation to practice, or it’s weirdly compartmentalized and not engaged with any practical reality or outside views.
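For concreteness, a minimal sketch of the kind of compound hash described above (TypeScript, using Node’s built-in crypto module), combining SHA-256 with the structurally dissimilar SHA3-256. This just mirrors the xor construction from the comment and isn’t a vetted design; concatenating digests is the more conservatively analyzed combiner.

```typescript
import { createHash } from "crypto";

// Compound hash: XOR the digests of two structurally dissimilar hash
// functions, so that a mathematical break of one construction does not
// by itself give the attacker control over the combined output.
function compoundHash(input: Buffer): Buffer {
  const a = createHash("sha256").update(input).digest();
  const b = createHash("sha3-256").update(input).digest();
  const out = Buffer.alloc(a.length); // both digests are 32 bytes
  for (let i = 0; i < a.length; i++) {
    out[i] = a[i] ^ b[i];
  }
  return out;
}

console.log(compoundHash(Buffer.from("hello")).toString("hex"));
```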
In my experience, the motion that seems to prevent mental crowding-out is intervening on the timing of my thinking: if I force myself to spend longer on a narrow question/topic/idea than is comfortable, eg with a timer, I’ll eventually run out of cached thoughts and spot things I would have otherwise missed.
By generativity do you mean “within-domain” generativity?
Not exactly, because Carmack has worked in more than one domain (albeit not as successfully; Armadillo Aerospace never made orbit).
On those dimensions, it seems entirely fair to compare across topics and assert that Pearl was solving more significant and more difficult problem(s) than Carmack.
Agree on significance, disagree on difficulty.
Eliezer has written about the notion of security mindset, and there’s an important idea that attaches to that phrase, which some people have an intuitive sense of and ability to recognize, but I don’t think Eliezer’s post quite captured the essence of the idea, or presented anything like a usable roadmap of how to acquire it.
An1lam’s recent shortform post talked about the distinction between engineering mindset and scientist mindset, and I realized that, with the exception of Eliezer and perhaps a few people he works closely with, all of the people I know of with security mindset are engineer-types rather than scientist-types. That seemed like a clue; my first theory was that this is because engineer-types get to actually write software that might have security holes, and have the feedback cycle of trying to write secure software. But I also know plenty of otherwise-decent software engineers who don’t have security mindset, at least of the type Eliezer described.
My hypothesis is that to acquire security mindset, you have to:
Practice optimizing from a red team/attacker perspective;
Practice optimizing from a defender perspective; and
Practice modeling the interplay between those two perspectives.
So a software engineer can acquire security mindset because they practice writing software which they don’t want to have vulnerabilities, they practice searching for vulnerabilities (usually as an auditor simulating an attacker rather than as an actual attacker, but the cognitive algorithm is the same), and they practice going meta when they’re designing the architecture of new projects. This explains why security mindset is very common among experienced senior engineers (who have done each of the three many times), and rare among junior engineers (who haven’t yet). It explains how Eliezer can have security mindset: he alternates between roleplaying a future AI-architect trying to design AI control/alignment mechanisms, roleplaying a future misaligned AI trying to optimize around them, and going meta on everything-in-general. It also predicts that junior AI scientists won’t have this security mindset, and probably won’t acquire it except by following a similar cognitive trajectory.
Which raises an interesting question: how much does security mindset generalize between domains? Ie, if you put Theo de Raadt onto a hypothetical future AI team, would he successfully apply the same security mindset there as he does to general computer security?
Outside observer takeaway: There’s a bunch of sniping and fighting here, but if I ignore all the fighting and look at only the ideas, what we have is that Gordon presented an idea, Duncan presented counterarguments, and Gordon declined to address the counterarguments. Posting on shortform doesn’t come with an obligation to follow up and defend things; it’s meant to be a place where tentative and early stage ideas can be thrown around, so that part is fine. But I did come away believing the originally presented idea is probably wrong.
(Some of the meta-level fighting seemed not-fine, but that’s for another comment.)
Yes, it implies that. The exact level of fidelity required is less straightforward; it’s clear that a perfect simulation must have qualia/consciousness, but small imperfections make the argument not hold, so to determine whether an imperfect simulation is conscious we’d have to grapple with the even-harder problem of neuroscience.
In There’s No Fire Alarm for Artificial General Intelligence, Eliezer argues:
A fire alarm creates common knowledge, in the you-know-I-know sense, that there is a fire; after which it is socially safe to react. When the fire alarm goes off, you know that everyone else knows there is a fire, you know you won’t lose face if you proceed to exit the building.
If I have a predetermined set of tests, this could serve as a fire alarm, but only if you’ve successfully built a consensus that it is one. This is hard, and the consensus would need to be quite strong. To avoid ambiguity, the test itself would need to be demonstrably resistant to being clever Hans’ed. Otherwise it would be just another milestone.
I think the engineer mindset is more strongly represented here than you think, but that the nature of nonspecialist online discussion warps things away from the engineer mindset and towards the scientist mindset. Both types of people are present, but the engineer-mindset people tend not to put that part of themselves forward here.
The problem with getting down into the details is that there are many areas with messy details to get into, and it’s hard to appreciate the messy details of an area you haven’t spent enough time in. So deep dives in narrow topics wind up looking more like engineer-mindset, while shallow passes over wide areas wind up looking more like scientist-mindset. LessWrong posts can’t assume much background, which limits their depth.
I would be happy to see more deep-dives; a lightly edited transcript of John Carmack wouldn’t be a prototypical LessWrong post, but it would be a good one. But such posts are necessarily going to exclude a lot of readers, and LessWrong isn’t necessarily going to be competitive with posting in more topic-specialized places.
Yet I also feel like John Carmack probably isn’t remotely near the level of Pearl (I’m not that familiar with Carmack’s work): pushing forward video game development doesn’t compare to neatly figuring out what exactly causality itself is.
You’re looking at the wrong thing. Don’t look at the topic of their work; look at their cognitive style and overall generativity. Carmack is many levels above Pearl. Just as importantly, there’s enough recorded video of him speaking unscripted that it’s feasible to absorb some of his style.
I’m not sure relationship-strength on a single axis is quite the right factor. At the end of a workshop, the participants don’t have that much familiarity, if you measure it by hours spent talking; but those hours will tend to have been focused on the sort of information that makes a Doom circle work, ie, people’s life strategies and the things they’re struggling with. If I naively tried to gather a group with strong relationship-strength, I expect many of the people I invited would find out that they didn’t know each other as well as they thought they did.
A slightly different spin on this model: it’s not about the types of strategies people generate, but the number. If you think about something and only come up with one strategy, you’ll do it without hesitation; if you generate three strategies, you’ll pause to think about which is the right one. So people who can’t come up with as many strategies are impulsive.
Somewhat more meta level: Heuristically speaking, it seems wrong and dangerous for the answer to “which expressed human preferences are valid?” to be anything other than “all of them”. There’s a common pattern in metaethics which looks like:
1. People seem to have preference X
2. X is instrumentally valuable as a source of Y and Z. The instrumental-value relation explains how the preference for X was originally acquired.
3. [Fallacious] Therefore preference X can be ignored without losing value, so long as Y and Z are optimized.
In the human brain algorithm, if you optimize something instrumentally for a while, you start to value it terminally. I think this is the source of a surprisingly large fraction of our values.
+1 for book-distillation, probably the most underappreciated and important type of post.
In theory you might, but in practice you can’t. Distraction-avoidant behavior favors things that you can get into quickly, on the order of seconds—things like checking for Facebook notifications, or starting a game which has a very fast load time. Most intellectual work has a spinup, while you recreate mental context, before it provides rewards, so distraction-avoidant behavior doesn’t choose it.
One way to look at this is, where is the variance coming from? Any particular forecasting question has implied sub-questions, which the predictor needs to divide their attention between. For example, given the question “How much value has this organization created?”, a predictor might spend their time comparing the organization to others in its reference class, or they might spend time modeling the judges and whether they tend to give numbers that are higher or lower.
Evaluation consistency is a way of reducing the amount of resources that you need to spend modeling the judges, by providing a standard that you can calibrate against. But there are other ways of achieving the same effect. For example, if you have people predict the ratio of value produced between two organizations, then even if the judges consistently score high or score low, it no longer matters, since the bias affects both organizations equally.
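As a toy illustration of that cancellation (TypeScript; all the numbers and the constant-multiplicative-bias assumption are made up for the example):

```typescript
// Toy model: a judge who consistently scores every organization 50% high.
const bias = 1.5;
const trueValueA = 200;
const trueValueB = 100;

const judgedA = trueValueA * bias; // 300
const judgedB = trueValueB * bias; // 150

// An absolute prediction has to model the judge's bias;
// a ratio prediction cancels it out entirely.
console.log(judgedA / judgedB);       // 2
console.log(trueValueA / trueValueB); // 2, identical
```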
Yep, I notice this sometimes when other people are doing it. I don’t notice myself doing it, but that’s probably because it’s easier to notice from the receiving end.
In writing, it makes me bounce off. (There are many posts competing for my attention, so if the first few sentences fail to say anything interesting, my brain assumes that your post is not competitive and moves on.) In speech, it makes me get frustrated with the speaker. If it’s in speech and it’s an interruption, that’s especially bad, because it’s displacing working memory from whatever I was doing before.
It’s not promoted as a first-class feature, since most people don’t have enough time to read quite so many comments and need more filtering; but some people requested it and use it, and the implementation is simple, so it won’t be going away.
The reason negatively-voted comments don’t appear is that this page once shared code with the All Posts page, which has a checkbox for controlling that; this page just doesn’t have the checkbox wired up. GitHub issue: https://github.com/LessWrong2/Lesswrong2/issues/2415. Hiding negative-karma content used to be important because the most-recent content was often spam, and displaying it between when it was posted and when the mods deleted it made for a bad experience; but we now have enough other anti-spam measures in place that this isn’t really a concern.
The way pagination is currently handled is something we inherited from our framework, and it’s pretty suboptimal. At some point we’re going to redo how pagination is handled, not for allComments in particular but at a lower level that will affect multiple places, allComments included. This is likely to be a while, though, since it’s a somewhat involved piece of development and there are more important things in the queue in front of it.