Here are two self-contained algorithmic questions that have come up in our research. We’re offering a bounty of $5k for a solution to either of them—either an algorithm, or a lower bound under any hardness assumption that has appeared in the literature.

**Question 1 (existence of PSD completions):** given m entries of an n×n matrix, including the diagonal, can we tell in time Õ(mn) whether it has any (real, symmetric) positive semidefinite completion? Proving that this task is at least as hard as dense matrix multiplication or PSD testing would count as a resolution.

**Question 2 (fast “approximate squaring”):** given an n×n matrix A and a set of m entries of AA^T, can I find *some* PSD matrix that agrees with AA^T in those m entries in time Õ(mn)?

We’ll pay $5k for a solution to either problem. The offer is open for each problem for 3 months or until the problem gets solved (whichever happens first). Winners are welcome to publish solutions independently. Otherwise, if the result ends up being a significant part of a paper, we’ll invite them to be a coauthor.

We’ll also consider smaller prizes for partial progress, or anything that we find helpful for either solving the problem or realizing we should give up on it.

To understand the motivation for these questions, you can read our paper on Formalizing the presumption of independence and in particular Appendix D.7.2. ARC is trying to find efficient heuristic estimators as a formalization of defeasible reasoning about quantities like the variance of a neural network’s output. These two questions are very closely related to one of the simplest cases where we haven’t yet found any reasonable linear time heuristic estimator.

We don’t expect to receive many incorrect proposals, but if we receive more than 5 we may start applying a higher standard in order to save our time. If we can’t understand a solution quickly, we may ask you to provide more details, and if we still can’t understand it we may reject it. We expect a correct solution to be about as clear and easy to verify as a paper published at STOC.

For both problems, it’s OK if we incorrectly treat a matrix as PSD as long as all of its eigenvalues are at least −ε for a small constant ε. Õ hides polylogarithmic factors in n, 1/ε, and the magnitude of the largest matrix entry. Feel free to ask for other clarifications on our question on Math Overflow, on Facebook, or by email.

To submit a solution, send an email to prize@alignment.org.

I made a video going through the solution in the easiest case of 3×3 matrices. This might help people trying to understand the problem and/or get people interested: https://youtu.be/0d66hPBC38Y

I think logarithmic dependence on epsilon will be difficult, especially for problem one. To demonstrate this, consider the problem of approximating the maximum eigenvalue of a sparse (let’s say m = O(n)) PSD matrix with trace at most n. The obvious algorithms to try aren’t fast enough:

- **power iteration / Lanczos**: these compute M^k v in time O(mk) using sparse matrix–vector multiplies, with precision polynomial in k. Thus, such methods exhibit the undesirable polynomial dependence on epsilon.

- **repeated squaring**: this computes M^{2^k} in time O(n^ω k) using dense matrix multiplication, with precision exponential in k. Though such methods need only logarithmically many squarings, the dense matrix multiplication steps render the algorithm too slow.

Intuitively, power iteration-style methods exploit sparsity by taking matrix-vector products exclusively, which yields slow (polynomial in epsilon) convergence.
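As a concrete illustration of the first bullet, here is a minimal power-iteration sketch in NumPy (the function name and iteration count are my own; each of the k iterations costs one matrix–vector product, i.e. O(m) for a matrix with m nonzeros, but the error shrinks only polynomially in k):

```python
import numpy as np

def max_eig_power(M, iters):
    """Estimate the largest eigenvalue of a PSD matrix M by power iteration.

    Each iteration is one matrix-vector product; convergence is governed by
    the ratio of the top two eigenvalues, hence the poly(1/eps) behavior
    discussed above when the spectral gap is small.
    """
    rng = np.random.default_rng(0)
    v = rng.standard_normal(M.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = M @ v
        v = w / np.linalg.norm(w)
    # Rayleigh quotient as the eigenvalue estimate
    return v @ (M @ v)
```

(With a sparse matrix type the `M @ v` products would cost O(m) rather than O(n^2); dense NumPy is used here only to keep the sketch self-contained.)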

To argue that problem one is difficult, I claim that one natural candidate completion is the all-zero completion, resulting in a sparse matrix. This suggests (but does not prove!) that the problem of determining existence of PSD completions requires determining whether the resulting sparse completion is PSD, equivalently, whether its minimum eigenvalue is nonnegative. The reduction to the max eigenvalue problem then follows by taking spectral shifts (M → aI + bM for a, b ∈ ℝ).

It’s unclear to me whether problem two has a similar dependence, but given the similarity of the problems I’m skeptical that logarithmic dependence in epsilon is possible. If anyone has ideas for a similar reduction I’d love to discuss!

(comment based on work co-authored with @Somsubhro Bagchi)

note: Som and I have discussed a similar objection with @paulfchristiano, who seemed OK with solutions with convergence polynomial in epsilon but still O(mn) in the “typical case” (e.g. for problem 1, with enough precision to determine (non)existence of PSD completions). After constraining all diagonal elements of A to be 1, we’re somewhat optimistic such weaker convergence is attainable.

Question 1 is very likely NP-hard. https://arxiv.org/abs/1203.6602 shows:

I’m still stretching my graph-theory muscles to see if this technically holds for k=∞ (and so precisely implies the NP-hardness of Question 1). But even without that, for applied use cases we can just fix k to a very large number, which suggests that practical algorithms are unlikely to exist.

Nope, I’m withdrawing this answer. I looked closer into the proof and I think it’s only meaningful asymptotically, in a low rank regime. The technique doesn’t work for analyzing full-rank completions.

It can be decided in O(n^ω √m) time by semidefinite programming, and I’d guess that most researchers would expect it can be done in time O(n^ω) (which is the runtime of PSD testing, i.e. the case when m = n^2).

More generally, there’s a large literature on positive semidefinite completions which is mostly concerned with finding completions with particular properties. Sorry for not better flagging the state of this problem. Those techniques don’t seem super promising for getting to O(mn).

I think it can be done in O(n^ω), where I recall for the non-expert’s convenience that ω is the exponent of matrix multiplication / inversion / PSD testing / etc. (all are identical). Let M_n be the space of n×n matrices and let V be the (2m−n)-dimensional vector space of matrices with zeros in all non-specified entries of the problem. The maximum-determinant completion is the (only?) one whose inverse is in V. Consider the map V → M_n, B ↦ B^{-1}, and its projection f : V → V where we zero out all of the other entries. The function f can be evaluated in time O(n^ω). We wish to solve f(B) = A. This should be doable using a Picard or Newton iteration, with a number of steps that depends on the desired precision.

Would it be useful if I try to spell this out more precisely? Of course, this would not be enough to reach O(mn) in the small m case. Side-note: The drawback of having posted this question in multiple places at the same time is that the discussion is fragmented. I could move the comment to mathoverflow if you think it is better.

Agree about the downside of fragmented discussion, I left some time for MO discussion before posting the bounty but I don’t know how much it helped.

I agree that some approach like that seems like it could work for O(n^ω). I’m not familiar with those techniques and so don’t know a method that needs O(log(1/ε)) iterations rather than O(√m) or O(ε^{-1}). For Newton iteration it seems like you need some kind of bound on the condition number.

But I haven’t thought about it much, since it seems like something fundamentally different is needed to break O(n^ω). I believe that one of those might just work out of the box.

Wait, which algorithm for semidefinite programming are you using? The ones I’ve seen look like they should translate to a runtime even slower than O(n^ω √m). For example the one here:

https://arxiv.org/pdf/2009.10217.pdf

Also, do you have a source for the runtime of PSD testing being n^ω? I assume no lower bound is known, i.e. I doubt PSD testing is hard for matrix multiplication. Am I wrong about that?

That’s a good question. From what I’ve seen, PSD testing can be done by attempting a Cholesky decomposition (writing the matrix as LL^* with L lower-triangular) and seeing whether it fails. The Cholesky decomposition is an LU decomposition in which the lower-triangular L and upper-triangular U are simply taken to have the same diagonal entries, so PSD testing should have the same complexity as LU decomposition. Wikipedia cites Bunch and Hopcroft 1974, who show that LU decomposition can be done in O(n^{log_2 7}) via Strassen, and presumably the more modern matrix multiplication algorithms also yield an improvement for LU.

I also doubt that PSD testing is hard for matrix multiplication, even though you can get farther than you’d think. Given a positive-definite matrix A whose inverse we are interested in, consider the 2n×2n block matrix ((A, I), (I, C)). It is positive definite if and only if all principal minors are positive. The minors that are minors of A are positive by assumption, and the bigger minors are equal to det(A) times minors of C − A^{-1}, so altogether the big matrix is positive definite iff C − A^{-1} is. Continuing in this direction, we can get any specific entry of A^{-1} in O(PSD) time (times log(1/ε)). This is not enough at all to get the full inverse.

There is a simple intuition for why PSD testing cannot be hard for matrix multiplication or inversion: regardless of how you do it and what matrix you apply it to, it only gives you one bit of information. Getting even just one bit of information about each matrix element of the result requires n^2 applications of PSD testing. The only way out would be if one only needed to apply PSD testing to tiny matrices.

Thanks for the clarification! I assumed the question was using the bit computation model (biased by my own experience), in which case the complexity of SDP in the general case still seems to be unknown (https://math.stackexchange.com/questions/2619096/can-every-semidefinite-program-be-solved-in-polynomial-time)

Yeah, I can believe the problem is NP-hard if you need to distinguish ±ε eigenvalues in time O(1) rather than O(log(1/ε)). I don’t understand exact arithmetic very well, but it seems wild (e.g. my understanding is that no polynomial-time algorithm is known for testing n_1^{1/3} + n_2^{1/3} + … + n_k^{1/3} > 0).

We intend to leave this prize open until the end of September. At that point we will distribute prizes (probably just small prizes for useful arguments and algorithms, but no full solution).

I now pretty strongly suspect that the version of problem 1 with logarithmic dependence on ε is not solvable. We would award a prize for an algorithm running in time O(mn ε^{-1}) which can distinguish matrices with no PSD completion from those with a completion where the ratio of min to max eigenvalue is at least ε. And of course a lower bound is still fair game.

That said, I don’t expect any new submissions to win prizes and so wouldn’t recommend that anyone start working on it.

Are the math formulas (= presumably LaTeX) in this post broken for everyone, or just for me? (That is, I see strings full of backslashes, like “ε”).

Yeah, looks like it broke this morning, and I just fixed it again. Having RSS imports with LaTeX and non-admin editors is a bit fickle at the moment (since we have to sanitize the HTML to avoid security vulnerabilities).

Sorry to create headache for you guys, thanks for handling imports!

No worries! Sorry for the fickleness. I just made a PR to handle non-admin editors more robustly.

Oh, yeah, this happens when switching back and forth between the LessWrong editor and Markdown. I’d love it if someone fixed it, also because in the LessWrong editor I don’t know how (or whether) you can make block equations, just inline ones.

Command + M for block equations (it says so in the help text that shows up in an empty LW editor text box).

And huh, I am surprised that LaTeX doesn’t translate correctly. I feel like I’ve used that recently. Can you ping us on Intercom with a context where it didn’t work?

Gah, I assumed this was some notation I’d never seen before. (Then again, even if this was written out in clear LaTeX I wouldn’t understand it.)

Asking for some clarifications:

1. For both problems, should the solution work for an adversarially chosen set of m entries?

2. For both problems, can we read more entries of the matrix if it helps our solution? In particular, can we WLOG assume we know the diagonal entries, in case that helps in some way?

Yes, the algorithm should work for adversarial inputs.

For the second problem you can effectively read more entries, since you can compute any given entry of AA^T in time O(n). For the first problem you can’t read more entries. But you can assume WLOG that you have the diagonal: if you don’t have a given diagonal entry, you might as well just set it to infinity in the completion, effectively removing that whole row and column from the problem. So we can just remove all rows and columns for which you don’t have the diagonal, and be left with a matrix where all diagonal entries are known.
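For concreteness, the O(n)-per-entry claim is just a row inner product (hypothetical helper name):

```python
import numpy as np

def gram_entry(A, i, j):
    """One entry of A A^T: the inner product of rows i and j of A.

    Costs O(n) time, so any m entries of A A^T can be read in O(mn)
    without ever forming the full product.
    """
    return A[i, :] @ A[j, :]
```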

I’m not sure this is true. Consider the 2x2 matrix ((?, 1),(1,0)). Removing the first row and the first column leaves you with ((0)), which is a PSD 1x1 matrix. That being said, there is no value of ? for which the 2x2 matrix is PSD.

I agree with your example. I think you can remove the row/column if there are no known 0s on the diagonal, and I also think you can first WLOG remove any rows and columns with 0s on the diagonal (since if there are any other known non-zero entries in their row or column then the matrix has no PSD completions).

I’m also happy to just assume the diagonal entries are known. I think these are pretty weird corner cases.

It’s not quite this simple, the same issue arises if every PSD completion of the known-diagonal minor has zero determinant (e.g. ((?, 1, 2), (1, 1, 1), (2, 1, 1))). But I think in that case making the remaining diagonal entries large enough still makes the eigenvalues at least −ε, which is good enough.

What matters is not whether or not there is another 0 on the diagonal, but whether or not there is a PSD-but-not-definite principal submatrix along the diagonal. For example, in the comment from Jacob_Hilton, they introduce the 2×2 submatrix ((1,1),(1,1)), with eigenvalues [0,2], which is PSD but not definite.

I agree with assuming that all diagonal entries are known. You can even assume WLOG that all diagonal entries are 1.

I changed the statement to assume the diagonal is known, I agree this way is cleaner and prevents us from having to think about weird corner cases.

Just to spell it out, am I right in thinking that you can scale all the matrix elements so that each entry on the diagonal is the sign of its original value, i.e. −1 or 1, but if the resulting diagonal contains any −1s then it can’t be PSD?

Yes indeed. One definition of a PSD matrix is a matrix M such that, for any vector X, X^T M X ≥ 0 (so it defines some kind of scalar product).

If M_{i,i} = y (with y > 0), then you can always divide row i and column i by √y, which is equivalent to applying some scaling; this won’t change whether or not M is PSD.

If M_{i,i} = −1, then trying the vector X with X_j = 0 for all j ≠ i and X_i = 1, you can check that X^T M X = −1, thus the matrix isn’t PSD.

Love to see a well-defined, open mathematical problem whose solution could help make some progress on AI alignment! It’s like a little taste of not being a pre-paradigmatic field. Maybe someday we’ll have lots of problems like this that can engage the broader math/CS community, that don’t involve so much vague speculation and philosophy :)

I understand this bounty is roughly closed, but the page is still listed as bounty active and my attempt took me only a couple hours.

Problem 2 is solvable when there are simple restrictions on the entries chosen.

Restriction 1: No diagonal entries need to be correct. Solution: let C = AA^T. For each requested entry c_ij, we construct the matrix with only c_ii, c_ij, c_ji, c_jj nonzero and all else zero. Clearly this matrix is constructible with O(n) work, so all entries can be constructed in this way. (If both c_ij and c_ji are requested, only construct one matrix for the pair.) Repeating this for all m entries does O(nm) work. We then sum all such matrices to get our estimate E; only the diagonal can possibly need summing, so O(nm) work is done. Each constructed matrix is positive semidefinite, as it is a (padded) principal submatrix of C, and the sum is positive semidefinite because the individual matrices are.

Restriction 2: Only 1 entry per row-column number is needed other than the main diagonal. The above solution still works.
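A sketch of this construction in NumPy (my own function name; assumes the requested entries are off-diagonal pairs with i ≠ j — the diagonal of E gets over-counted, which is exactly why Restriction 1 drops it):

```python
import numpy as np

def psd_estimate(C, entries):
    """Sum of padded 2x2 principal submatrices of C = A A^T.

    Each term ((c_ii, c_ij), (c_ji, c_jj)), padded with zeros, is PSD
    because it is a principal submatrix of the PSD matrix C; the sum E is
    therefore PSD and matches C on every requested off-diagonal entry.
    """
    n = C.shape[0]
    E = np.zeros((n, n))
    for (i, j) in entries:  # assumes i != j, one pair per requested entry
        E[i, i] += C[i, i]
        E[j, j] += C[j, j]
        E[i, j] += C[i, j]
        E[j, i] += C[j, i]
    return E
```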

This comment proposes a strategy for (2) I’ll call Active Matrix Completion (AMC), which allows querying elements of AA^T given some cost/budget. In our case, each query takes O(n), so we can afford O(m) total queries. Some notes:

O(n) per query seems really fast: for instance, even evaluating the bilinear form u^T (AA^T) v = (Au)^T (Av) takes O(n^2). This makes me suspect any solution to (2) will rely on some form of AMC.

AMC exists in the literature: this paper gives a nice introduction and (I think) proves that O(n·rank(A)) queries suffice.

One strategy that seems intuitively appealing is querying enough elements to guarantee that the resulting adjacency graph is chordal, which (I think) allows for efficient PSD completion (see Vandenberghe; algorithm 10.2). Thus, the following proposition is stronger than (2):

PROPOSITION: given a graph G = (V, E) with n nodes and m edges, can we efficiently (in O(mn) time) determine a small (with at most Cm edges for universal constant C) chordal completion of G?

The proposition seems “too good to be true”, especially since chordal completion seems like a hard computational problem (NP-complete, no great approximations are known), so I’m most interested in easy counterexamples. For instance, a “small” graph with only “large” chordal completions would confirm the falsehood of the proposition.

So, uh, any graph theory experts around?

I’m currently working on this problem and feel like I’m making headway. Wondering if the bounty is still active?

The bounty is still active. (I work at ARC)

I don’t see any recent publications from Paul Christiano related to this, so I guess the problem(s) is still open.

(possibly trivial) observation: the generalization replacing “entries” with “linear constraints” seems almost as easy? I’d strongly suspect impossibility of the former implies impossibility of the latter (thus showing equivalence)

In Question 1, can I consider that O(n^3) = O(nm), since m = Ω(n) and m can’t be greater than n^2?

(I’m not the OP but) absolutely not. The problem is about incomplete matrices, and the idea is to get an upper bound linked to m, i.e. an upper bound linked to how incomplete the matrix is. If you rephrase the question 1 as O(n^3), this is a completely different question, because now you only care about how big the matrix is, not how complete it is.

Also, since question 1 can be solved with some Gaussian pivoting in O(n^3), and it also implies being able to tell whether a matrix is PSD or not, I think the best known complexity with respect to n alone is indeed O(n^3).

re: (1), you might be able to phrase the problem as convex optimization in the unknown parameters, which is probably Õ(n^2)

better: the problem can be phrased as a feasibility problem in semidefinite programming given m linear constraints; no idea if there are (randomized?) algorithms which run fast enough

As a side note, GPT-4 correctly identifies both problems as open:

I don’t think that anyone has ever mentioned problem #2 in the literature and so I would at least consider that answer somewhat misleading.

I strongly suspect that if we put in a similar-sounding but very easy problem GPT-4 would be just as likely to say that it’s an open question.

Initially, it provided a solution for the first problem. I asked it to “please check your answer for any errors”, and it then apologized for the erroneous solution and made the aforementioned assessment that the problem is still open.

With the second problem, it was a somewhat similar story: it answered with a partial solution involving a Cholesky decomposition, and then clarified that the problem is open.

It seems worth talking with GPT-4 about this; perhaps it could give some useful hints.

When I tried asking about this problem, and referenced the above comment about being able to assume that all entries on the diagonal are positive, GPT-4 said the following:

Is GPT-4 correct? If so, am I interpreting it correctly that this problem could be rephrased as “given an incomplete set of m desired pairwise distances between n points, determine whether there exists some configuration of those points in an n-dimensional space such that the pairwise distances between those points are exactly as desired”?

Since we are only given m distances, maybe a numerical algorithm for MDS might work. For example, we might start with a random n×n matrix and minimize the strain by gradient descent. For the desired runtime this would just have to converge in O(m/n) steps. I’m not sure if this is realistic, though. Also, there might be better numerical algorithms for this specific problem. Has anyone looked into this yet?

I think running a single strain minimization iteration on `m` points in `n` dimensions takes O(m*n) steps. So there would need to be some reason to expect that it would converge in some constant (though possibly large) number of steps.

Unless you’re saying “for each node, run the strain minimization step until it converges, then do the same for each subsequent node”. I don’t know if the greedy algorithm works there, but if it does then maybe?

Also I kinda expect that if there’s something that works in `O(n*m*log(m))`, that’s probably fine.

(And yeah, “try the greedy exact solution for each node” was my “dumb solution” attempt.)

I think your interpretation is correct, though I’m not entirely sure. There might be n+1 points in total, because the diagonal coefficients give you the distance to 0, I think?

The Gram matrix bit is certainly correct, though I’m not sure how the distances come into it. The m constraints are constraints on dot products, so more angles (well, cosines) than distances between vectors.

If you are given the dot products exactly, that’s equivalent to being given the distances exactly.

(I think this reduction goes the wrong way though, if I wanted to solve the problem about distances I’d translate it back into dot products.)
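The equivalence is just the polarization identity ‖x_i − x_j‖² = ⟨x_i, x_i⟩ + ⟨x_j, x_j⟩ − 2⟨x_i, x_j⟩, so converting all m constraints takes O(m); a sketch with hypothetical names (`diag` holds the squared norms ⟨x_i, x_i⟩, i.e. the diagonal of the Gram matrix):

```python
def dots_to_sqdists(diag, dots):
    """Convert dot-product constraints to squared-distance constraints.

    Each conversion is O(1) via the polarization identity, so the whole
    pass over the m given entries is O(m), assuming the diagonal (squared
    norms) is known.
    """
    return {(i, j): diag[i] + diag[j] - 2 * g for (i, j), g in dots.items()}
```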

I think you can convert between the two representations in `O(m)` time, which would mean that any algorithm that solves either version in `O(n*m)` solves both in `O(n*m)`.

Do you have some large positive and negative examples of the kinds of sparse matrix you’re trying to check for the existence of a PSD completion on, or alternatively a method for generating such examples with a known ground truth? I have a really dumb idea for a possible algorithm here (that shamelessly exploits the exact shape of this problem in a way that probably doesn’t generalize to being useful for broader problems like MDS) that I think would complete in approximately the time constraints you’re looking for. It almost certainly won’t work, but I think it’s at least worth an hour of my time to check and figure out why (especially since I’m trying to improve my linear algebra skills anyway).

Edit: there’s the obvious approach, which I’m trying, of “start with only 1s on the diagonal and then keep adding random entries until it no longer has a PSD completion, then removing random entries until it does, and repeat to build a test set”, but I doubt that covers the interesting corners of the problem space.

Edit 2: the really dumb thing does not work. I think I haven’t ruled out that a slightly less dumb approach could work, though?

Edit 3: never mind, my really dumb “solution” requires inverting a matrix that is, in the worst case, n×n; e.g. for an input that looks like the one pictured, you’ll have to invert 6 2×2 matrices and one each of 3×3 to 7×7 matrices.

You mean as in, if I have two points x_i and x_j, then ‖x_i − x_j‖² = ‖x_i‖² + ‖x_j‖² − 2 x_i·x_j?

Guess so, but yeah, I just don’t tend to think of it immediately as part of the distances rather than just vectors. It works though, since you mentioned the diagonal elements are known.