Senior Research Scientist at NTT Research, Physics & Informatics Lab. jessriedel.com , jessriedel[at]gmail[dot]com
+ risk of being locked in if your preferred provider changes,
The contract is transferable, so if Nectome becomes successful (many patients in the future) you presumably should be able to recover a large fraction of the contract value at that time.
(Obviously if the procedure becomes cheap then you won’t recover as much, but that’s inherent to the “pay far in advance for a discount on current prices” bet, regardless of your provider preferences changing.)
Thanks, this is specific and useful. I think it’s less that we’re attempting to target LW and more that it’s just how we tend to talk. We’ll work on keeping the word choice more conventional and professional.
I’m interested to hear more. Would it mostly be for practical reasons (financial sustainability), or to reduce the submission of bad work that wastes editor/reviewer time?
We’ve been discussing scope a lot, and this is indeed a big question. Some considerations:
We can only do a good job reviewing papers in a given field if we’ve got a good editor in that field. So scope will be constrained by the practical question of who we are able to get.
Justified or not, it’s a bit perilous for a new journal in mathematical/technical fields to also publish papers in fields with less objective criteria, especially in a new field like alignment with contentious boundaries. Even a slight perception of softness could hurt us in the beginning.
Of course, even in pure math importance is a subjective criterion, but perception doesn’t necessarily track this. And some sorts of philosophy (formal logic) can have pretty objective standards, but I think this is not the sort of philosophy you’re interested in.
It’s possible this can be mitigated with journal sectioning (e.g., Alignment: Mathematical, Alignment: Empirical, Alignment: Philosophical, etc.), but it’s dicey and hard to do right, especially when the journal is new and not yet established.
Regarding scope, we always need to ask: Would this sort of work benefit from review? Could reviewers meaningfully improve the work? Could we establish a reputation where publication (or the contents of the reviewer abstract) was a useful, credible signal to other researchers?
There’s no point in starting a journal if we exclude the sort of work that actually matters.
Incidentally, if someone wanted to help make the case for philosophy in the journal, a very useful thing would be to compile a list of papers (which could be a mix of published in traditional journals and not, and need not be strictly on alignment) to serve as exemplars of what should be included.
Yea, I can definitely see the selective reporting problem, which goes beyond the problem of negative results being unfairly denied publication. But to combat selective reporting, you’d really need to require preregistered experiments, which is more of a collective-action problem between journals, since if any of them allow un-preregistered experiments, the authors can just publish there. (Of course, you can try and convince the broad community to ignore all experiments that aren’t preregistered, but if you can do this then you’ve already won; the journals will be strongly incentivized to follow suit.)
Mandatory preregistration is just very cumbersome and difficult to do for exploratory science; it really seems feasible only for the later stages of things like medical trials, or for big contentious questions requiring a decisive experiment.
Sounds like a lot more risk of bias (and the appearance thereof) than it’s worth. At the least, I figure you’d need a disclosure on every paper authored by an employee of the company, as well as conflict-of-interest rules ensuring the action editor and reviewers were unbiased. It would be a pain, and still suspect. (Here’s GPT’s summary of how existing journals handle this, most commonly in medical research: https://chatgpt.com/share/69a992c3-75d8-8002-a592-a8053ee1cdbe )
An intermediate and more plausible case would be personal donations from a former or current employee of a frontier company; we expect many to be philanthropically motivated in the coming years. Imo, this is something we’d consider, but I haven’t thought about it much yet. We’re set for funding for the first year.
If we are successful in standing up a good and well-respected journal, I expect there will be many funders interested in supporting us. (And if we’re not successful, the issue is moot.) So I’m not too worried about getting backed into a corner where our only option to keep running is money from a potentially biasing source. We’d ideally like a broad diverse base of funders, like the arXiv.
An example, for what it’s worth: Quantum is a relatively new arXiv-overlay physics journal (about 10 years old) that runs on volunteer effort and modest publication fees (~$700). They didn’t want the fees to be a barrier to submitting, so they have a very easy process for getting them waived; you basically just have to ask. My understanding is that they still have not been overrun with slop, and whenever I am asked to review, the papers are of reasonable quality. So it does not seem they are foisting the slop handling onto reviewers; desk rejection by the editors appears to be enough.
Hmm. Ultimately it would be up to the editorial board, but here’s why I personally think these features are probably low priority given their nontrivial cost: (1) I presume we are talking about numerical experiments, and I expect the foundational/conceptual topics we want to publish on are less vulnerable to publication bias than, say, experimental psychology or economics. It would be more like pre-registering numerical math papers. That said, if you think the alignment literature has big problems with publication bias, I’d be interested to hear more. (2) Our primary audience is other researchers. Often, journals are motivated to provide press abstracts to induce popular coverage (by making a time-pressed journalist’s life easier, as a press release does), and increasing popular coverage is not one of our goals. It can also be a corrupting influence (although there are steps we could take to reduce this). High-quality popular-science journalists will generally take the time to talk to the authors and outside researchers to get the story right.
It depends on a few factors, but April at the earliest for initial submissions. Publication will almost certainly be on a rolling basis (no discrete issues). Our ambitious goal is to drive the submission-to-publication time down to something like a month, but it will require combining several new tricks, so it won’t be that fast at the beginning.
I’d also be surprised.
No, vacuum decay generally expands at sub-light speed.
Vacuum decay is fast but not instant, and there will almost certainly be branches where it maims you and then reverses. Likewise, you can make suicide machines very reliable and fast. It’s unreasonable to think any of these mechanical details matter.
This work was co-authored by Jordan Stone, Darryl Wright, and Youssef Saleh, whose names appear on the EA Forum post but not on this cross post to LW.
(Self-promotion warning.) Alexander Gietelink Oldenziel pointed me toward this post after hearing me describe my physics research and noticing some potential similarities, especially with the Redundant Information Hypothesis. If you’ll forgive me, I’d like to point to a few ideas in my field (many not associated with me!) that might be useful. Sorry in advance if these connections end up being too tenuous.
In short, I work on mathematically formalizing the intuitive idea of wavefunction branches, and a big part of my approach is based on finding variables that are special because they are redundantly recorded in many spatially disjoint systems. The redundancy aspects are inspired by some of the work done by Wojciech Zurek (my advisor) and collaborators on quantum Darwinism. (Don’t read too much into the name; it’s all about redundancy, not mutation.) Although I personally have concentrated on using redundancy to identify quantum variables that behave classically without necessarily being of interest to cognitive systems, the importance of redundancy for intuitively establishing “objectivity” among intelligent beings is a big motivation for Zurek.

Building on work by Brandao et al., Xiao-Liang Qi & Dan Ranard made use of the idea of “quantum Markov blankets” in formalizing certain aspects of quantum Darwinism. I think these are playing a very similar role to the (classical) Markov blankets discussed above.
In the section “Definitions depend on choice of variables” of the current post, the authors argue that Wentworth’s construction depends on a choice of variables, and that without a preferred choice it’s not clear that the ideas are robust. So it’s maybe worth noting that a similar issue arises in the definition of wavefunction branches. The approach several researchers (including me) have been taking is to ground the preferred variables in spatial locality, which is about as fundamental a constraint as you can get in physics. More specifically, the idea is that the wavefunction branch decomposition should be invariant under arbitrary local operations (“unitaries”) on each patch of space, but not invariant under operations that mix up different spatial regions.
Another basic physics idea that might be relevant is hydrodynamic variables and the relevant transport phenomena. Indeed, Wentworth brings up several special cases (e.g., temperature, center-of-mass momentum, pressure), and he correctly notes that their important role can be traced back to their local conservation (in time, not just under re-sampling). However, while very-non-exhaustively browsing through his other posts on LW it seemed as if he didn’t bring up what is often considered their most important practical feature: predictability. Basically, the idea is this: Out of the set of all possible variables one might use to describe a system, most of them cannot be used on their own to reliably predict forward time evolution because they depend on the many other variables in a non-Markovian way. But hydro variables have closed equations of motion, which can be deterministic or stochastic but at the least are Markovian. Furthermore, the rest of the variables in the system (i.e., all the chaotic microscopic degrees of freedom) are usually “as random as possible”—and therefore unnecessary to simulate—in the sense that it’s infeasible to distinguish them from being in equilibrium (subject, of course, to the constraints implied by the values of the conserved quantities). This formalism is very broad, extending well beyond fluid dynamics despite the name “hydro”.
Further, assume that mediates between and (third diagram below).
I can’t tell if X is supposed to be another variable, distinct from X_1 and X_2, or if it’s supposed to be X=(X_1,X_2), or what. EDIT: From reading further it looks like X=(X_1,X_2). This should be clarified where the variables are first introduced. Just to make it clear that this is not obvious even just within the field of Bayes nets, I open up Pearl’s “Causality” to page 17 and see “In Figure 1.2, X={X_2} and Y={X_3} are d-separated by Z={X_1}”, i.e., X is not assumed to be a vector (X_1, X_2, …). And obviously there is more variability in other fields.
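For what it’s worth, d-separation claims like Pearl’s can be checked mechanically via the standard moralized-ancestral-graph method. Here’s a minimal pure-Python sketch (the child→parents dict encoding and the function names are my own illustration, not from Pearl’s book), applied to the DAG of Pearl’s Figure 1.2 (X1→X2, X1→X3, X2→X4, X3→X4, X4→X5):

```python
from collections import defaultdict

def ancestors(parents, nodes):
    """All ancestors of the given nodes (inclusive), in a child -> parents dict."""
    seen, stack = set(), list(nodes)
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(parents.get(n, []))
    return seen

def d_separated(parents, xs, ys, zs):
    """Test whether xs and ys are d-separated by zs in the DAG.

    Method: restrict to ancestors of xs|ys|zs, moralize (marry co-parents,
    drop edge directions), delete zs, then check for any remaining path.
    """
    keep = ancestors(parents, set(xs) | set(ys) | set(zs))
    adj = defaultdict(set)
    for child in keep:
        ps = [p for p in parents.get(child, []) if p in keep]
        for p in ps:                      # undirected parent-child edges
            adj[child].add(p); adj[p].add(child)
        for i in range(len(ps)):          # marry co-parents
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j]); adj[ps[j]].add(ps[i])
    blocked = set(zs)                     # delete conditioning nodes
    frontier = [x for x in xs if x not in blocked]
    seen = set(frontier)
    while frontier:                       # search for a path into ys
        n = frontier.pop()
        if n in ys:
            return False
        for m in adj[n]:
            if m not in seen and m not in blocked:
                seen.add(m); frontier.append(m)
    return True

# Pearl's Figure 1.2 DAG, encoded as child -> parents:
parents = {"X2": ["X1"], "X3": ["X1"], "X4": ["X2", "X3"], "X5": ["X4"]}
print(d_separated(parents, {"X2"}, {"X3"}, {"X1"}))         # True: common cause blocked
print(d_separated(parents, {"X2"}, {"X3"}, {"X1", "X4"}))   # False: collider X4 opens the path
```

The second call illustrates the collider subtlety: additionally conditioning on X4 re-introduces dependence between X2 and X3, because moralization marries X4’s parents.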
Other examples:
- “Career politician” is something of a slur. It seems widely accepted (though maybe you dispute this?) that folks who specialize in politics certainly become better at winning at politics (“more effective”), but also that this selects for politicians who are less honest or otherwise not well aligned with their constituents.
- Tech startups still led by their technical founder-CEO are somehow better than those where the founder has been replaced with a “career CEO”. Obviously there are selection effects, but career CEOs are generally believed to be more short-term- and power-focused.
People have tried to fix these problems by putting constraints on managers (either through norms/stigmas about “non-technical” managers or explicit requirements that managers must, e.g., have a PhD). And probably these have helped some (although they tend to get Goodharted, e.g., people who get MDs in order to run medical companies without any desire to practice medicine). And certainly there are times when technical people are bad managers and do more damage than their knowledge can possibly make up for.
But like, this tension between technical knowledge and specializing in management (or grant evaluation) seems like the crux of the issue that must be addressed head-on in any theorizing about the problem.
-
Note that I’m specifically not referring to the elements of as “actions” or “outputs”; rather, the elements of are possible ways the agent can choose to be.
I don’t know what distinction is being drawn here. You probably need an example to illustrate.
Once you eliminate the requirement that the manager be a practicing scientist, the roles will become filled with people who like managing, and are good at politics, rather than at doing science. I’m surprised this is controversial. There is a reason the chair of an academic department is almost always a rotating prof in the department, rather than a permanent administrator. (Note: “was once a professor” is not considered sufficient to prevent this. Rather, profs understand that serving as chair for a couple years before rotating back into research is an unpleasant but necessary duty.)
We see this with doctors too. As the US medical system consolidates, and private practices are squeezed to a tinier and tinier fraction of docs, slowly but surely all the docs become employees of hospitals, and the people in charge are MBA-types. Some of them have MDs and once practiced medicine, but they now specialize in management and they don’t come back.
You can of course argue that the downside is worth the benefits. But the existence and size of the downside are pretty clear from history, and need to be addressed in such a system.
Letting people specialize as “science managers” sounds in practice like transferring the reins from scientists to MBAs, as was much maligned at Boeing. Similarly, having grants distributed by people who aren’t practicing scientists sounds like a great way to avoid professional financial retaliation and replace it with politicians setting the direction of funding.
To be clear, we mean that in the short term we expect to be able to desk-reject low-quality submissions by hand, whether AI-generated or otherwise. We never want to publish it, and we expect to mostly spare reviewers having to read it. The open question is how quickly we will need to develop automated tools to maintain these standards without putting undue burden on our editors.