“Sam Harris and the Is–Ought Gap” might be one example.

# riceissa

Some of the math seems to be garbled, e.g. this page (compare with the version on obormot.net) and this page (compare with the version on obormot.net).

The scrape seems to be missing the Solomonoff induction dialogue, which was available at https://arbital.com/p/solomonoff_induction/?l=1hh (at the moment I just get an error).

ETA (2019-06-28): The new version of the scrape has this page, which can be viewed on the GreaterWrong version.

Going by public documents, it seems like the intention was to have a separate organization pretty early on (although I can’t say about the very beginning since I wasn’t involved).

The Articles of Incorporation for CFAR (called Feynman Foundation at the time) are dated July 18, 2011, and 501(c)(3) status was approved on July 26, 2011.

MIRI’s 2011 strategic plan (updated August 2011; unclear when it was first written) says “Encourage a new organization to begin rationality instruction similar to what Singularity Institute did in 2011 with Rationality Minicamp and Rationality Boot Camp.”

The Minicamp took place in May–June 2011, and it seems like the Boot Camp took place in June–August 2011, so it looks like by the time the first workshop finished and the second was in progress, the plan was already to start the new organization. However, it’s still possible that there was no plan for a separate organization before or during the first workshop.

MIRI’s December 2011 progress report also talks about plans to create the “Rationality Org”.

I also wrote a timeline of CFAR a while back, which has more links.

The examples in the buckets error post have “modus delens” as the correct response. To take the diet example from the post, $A$ = “diet worth being on”, $B$ = “zero studies suggesting health risks”. Adam has $\neg B \to \neg A$ stored in his brain, and Betty presents $\neg B$, so Adam’s brain computes $\neg A$. The “protecting epistemology” move is to instead adamantly believe $A$ (“I need to stay motivated!”), which ends up rejecting what Betty said (concluding $B$ after all). But the desired response is to instead deny $B$ but also accept $A$, and hence to deny the implication $\neg B \to \neg A$.

So in these buckets error examples, modus ponens corresponds to the “automatic” reasoning, modus tollens corresponds to the “flinching away from the truth” move, and modus delens corresponds to the “rational” move that avoids the buckets error.
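As a toy illustration (my own sketch, not from the post), the three moves can be written as operations on Adam’s belief state, where the stored implication is $\neg B \to \neg A$:

```python
# Toy model (hypothetical, for illustration): Adam's beliefs about
#   A = "diet worth being on", B = "zero studies suggesting health risks",
# plus the stored implication (not B) -> (not A).
START = {"A": True, "B": True, "not_B_implies_not_A": True}

def respond(move):
    """Adam's belief state after Betty presents evidence for not-B."""
    if move == "modus_ponens":
        # Automatic move: accept not-B, keep the implication, infer not-A.
        return {"A": False, "B": False, "not_B_implies_not_A": True}
    if move == "modus_tollens":
        # Flinch: adamantly keep A, keep the implication, and so reject
        # Betty's evidence (conclude B after all).
        return {"A": True, "B": True, "not_B_implies_not_A": True}
    if move == "modus_delens":
        # Rational move: accept not-B, keep A, delete the implication.
        return {"A": True, "B": False, "not_B_implies_not_A": False}
    raise ValueError(move)
```

Only the modus delens response leaves Adam both accepting Betty’s evidence and staying motivated, at the price of unlearning the stored implication.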

I explained this more in a comment on the post.

I can’t comment as to the relative frequency of this response and how often it’s correct (this sort of question seems difficult to answer).

Was this post ever published? (Looking at Zvi’s posts since January, I don’t see anything that looks like it.)

I’ve made around 250 Anki cards about AI safety. I haven’t prioritized sharing my cards because I think finding a specific card useful requires someone to have read the source material generating the card (e.g. if I made the card based on a blog post, one would need to read that exact blog post to get value out of reviewing the card; see learn before you memorize). Since there are many AI safety blog posts and I don’t have the sense that lots of Anki users read any particular blog post, it seems to me that the value generated from sharing a set of cards about a blog post isn’t high enough to overcome the annoyance cost of polishing, packaging, and uploading the cards.

More generally, from a consumer perspective, I think people tend to be pretty bad at making good Anki cards (I’m often embarrassed at the cards I created several months ago!), which makes it unexciting for me to spend a lot of effort trying to collaborate with others on making cards (because I expect to receive poorly-made cards in return for the cards I provide). I think collaborative card-making *can* be done though, e.g. Michael Nielsen and Andy Matuschak’s quantum computing guide comes with pre-made cards that I think are pretty good.

Different people also have different goals/interests, so even given a single source material, the specifics one wants to Ankify can differ. For example, someone who wants to understand the technical details of logical induction will want to Ankify the common objects used (market, pricing, trader, valuation feature, etc.), the theorems and proof techniques, and so forth, whereas someone who just wants a high-level overview and the “so what” of logical induction can get away with Ankifying much less detail.

Something I’ve noticed is that many AI safety posts aren’t very good at explaining things (not enough concrete examples, not enough emphasis on common misconceptions and edge cases, not enough effort to answer what I think of as “obvious” questions); this fact is often revealed by the comments people make in response to a post. This makes it hard to make Anki cards because one doesn’t really understand the content of the post, at least not well enough to confidently generate Anki cards (one of the benefits of being an Anki user is having a greater sensitivity to when one does not understand something; see “illusion of explanatory depth” and related terms). There are other problems like conflicting usage of terminology (e.g. multiple definitions of “benign”, “aligned”, “corrigible”) and the fact that some of the debates are ongoing/some of the knowledge is still being worked out.

For “What would be a good strategy for generating useful flashcards?”: I try to read a post or a series of posts and once I feel that I understand the basic idea, I will usually reread it to add cards about the basic terms and ask myself simple questions. Some example cards for iterated amplification:

- what kind of training does the Distill step use?
- in the pseudocode, what step gets repeated/iterated?
- how do we get A[0]?
- write A[1] in terms of H and A[0]
- when Paul says IDA is going to be competitive with traditional RL agents in terms of time and resource costs, what exactly does he mean?
- advantages of A[0] over H
- symbolic expression for the overseer
- why should the amplified system (of human + multiple copies of the AI) be expected to perform better than the human alone?

You might have in mind a Facebook post that Vipul Naik wrote in 2017.

We can model success as a combination of doing useful things and avoiding making mistakes. As a particular example, we can model intellectual success as a combination of coming up with good ideas and avoiding bad ideas. I claim that rationality helps us avoid mistakes and bad ideas, but doesn’t help much in generating good ideas and useful work.

Eliezer Yudkowsky has made similar points in e.g. “Unteachable Excellence” (“much of the most important information we can learn from history is about how to not lose, rather than how to win”, “It’s easier to *avoid* duplicating spectacular failures than to duplicate spectacular successes. And it’s often easier to generalize failure between domains.”) and “Teaching the Unteachable”.

Thoughts on #10:

I am confused about this exercise. The standard/modern proof of Gödel’s second incompleteness theorem uses the Hilbert–Bernays–Löb derivability conditions, which are stated as (a), (b), (c) in exercise #11. If the exercises are meant to be solved in sequence, this seems to imply that #10 is solvable without using the derivability conditions. I tried doing this for a while without getting anywhere.
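For reference, the derivability conditions in question, for a provability predicate $\Box$ of a theory $T$, are standardly stated as:

```latex
% Hilbert–Bernays–Löb derivability conditions for a provability
% predicate \Box of a theory T:
\begin{align*}
&\text{(a) If } T \vdash \varphi \text{, then } T \vdash \Box\varphi. \\
&\text{(b) } T \vdash \Box(\varphi \to \psi) \to (\Box\varphi \to \Box\psi). \\
&\text{(c) } T \vdash \Box\varphi \to \Box\Box\varphi.
\end{align*}
```

(Condition (a) is sometimes given as an inference rule rather than as an axiom schema.)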

Maybe another way to state my confusion is that I’m pretty sure that up to exercise #10, nothing that distinguishes Peano arithmetic from Robinson arithmetic has been introduced (it is only with the introduction of the derivability conditions in #11 that this difference becomes apparent). It looks like there is a version of the second incompleteness theorem for Robinson arithmetic, but the paper says “The proof is by the construction of a nonstandard model in which this formula [i.e. formula expressing consistency] is false”, so I’m guessing this proof won’t work for Peano arithmetic.

My solution for #12:

Suppose for the sake of contradiction that such a formula $\mathrm{True}(x)$ exists. By the diagonal lemma applied to $\neg\mathrm{True}(x)$, there is some sentence $\psi$ such that, provably, $\psi \leftrightarrow \neg\mathrm{True}(\ulcorner \psi \urcorner)$. By the soundness of our theory, in fact $\psi \leftrightarrow \neg\mathrm{True}(\ulcorner \psi \urcorner)$. But by the property for $\mathrm{True}$ we also have $\psi \leftrightarrow \mathrm{True}(\ulcorner \psi \urcorner)$, which means $\mathrm{True}(\ulcorner \psi \urcorner) \leftrightarrow \neg\mathrm{True}(\ulcorner \psi \urcorner)$, a contradiction.

This seems to be the “semantic” version of the theorem, where the property for $\mathrm{True}$ is stated outside the system. There is also a “syntactic” version where the property for $\mathrm{True}$ is stated within the system.

Attempted solution and some thoughts on #9:

Define a formula $A$ taking one free variable to be $A(x) := \Phi(\mathrm{diag}(x))$.

Now define $\psi$ to be $A(\ulcorner A \urcorner)$. By the definition of $\mathrm{diag}$ we have $\mathrm{diag}(\ulcorner A \urcorner) = \ulcorner A(\ulcorner A \urcorner) \urcorner$.

We have

$$\psi \;\leftrightarrow\; A(\ulcorner A \urcorner) \;\leftrightarrow\; \Phi(\mathrm{diag}(\ulcorner A \urcorner)) \;\leftrightarrow\; \Phi(\ulcorner A(\ulcorner A \urcorner) \urcorner) \;\leftrightarrow\; \Phi(\ulcorner \psi \urcorner).$$

The first step follows by the definition of $\psi$, the second by the definition of $A$, the third by the property of $\mathrm{diag}$ mentioned above, and the fourth by the definition of $\psi$ again. Since $\psi$ has the same quantifier complexity as $\Phi$ by the type signature of $\Phi$, this completes the proof.
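The arithmetization here can be mimicked with strings (my own sketch; `quote` is a stand-in for Gödel quotation, and the formula syntax is made up for illustration):

```python
def subst(formula: str, arg: str) -> str:
    # Substitute arg for the free variable x in the formula's text.
    return formula.replace("x", arg)

def quote(s: str) -> str:
    # String stand-in for the Gödel-quotation brackets.
    return '"' + s + '"'

def diag(quoted_formula: str) -> str:
    # String analogue of diag(⌜F⌝) = ⌜F(⌜F⌝)⌝: strip the quotes to
    # recover F, then substitute F's own quotation into F.
    f = quoted_formula[1:-1]
    return quote(subst(f, quoted_formula))

# A(x) := Phi(diag(x)); psi := A(⌜A⌝)
A = "Phi(diag(x))"
psi = subst(A, quote(A))

# Evaluating the inner diag shows that psi "talks about" its own quotation:
assert diag(quote(A)) == quote(psi)
```

Running `diag` on the quotation of `A` reproduces the quotation of `psi`, which is the string version of the fixed-point property $\psi \leftrightarrow \Phi(\ulcorner \psi \urcorner)$.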

Things I’m not sure about:

It’s a little unclear to me what the notation $\Phi(\psi)$ means. In particular, I’ve assumed that $\Phi$ takes as inputs Gödel numbers of formulas rather than the formulas themselves. If $\Phi$ takes as inputs the formulas themselves, then I don’t think we can assume that the formula $\mathrm{diag}$ exists without doing more arithmetization work (i.e. the equivalent of $\mathrm{diag}$ would need to know how to convert from the Gödel number of a formula to the formula itself).

If the biconditional “$\leftrightarrow$” is a connective in the logic itself, then I think the same proof works, but we would need to assume more about the theory than is given in the problem statement, namely that the theory we have can prove the substitution property of $\mathrm{diag}$.

The assumption about the quantifier complexity of $\Phi$ and $\psi$ was barely used. It was just given to us in the type signature for $\Phi$, and the same proof would have worked without this assumption, so I am confused about why the problem includes this assumption.

That link works, thanks!

The link no longer works (I get “This project has not yet been moved into the new version of Overleaf. You will need to log in and move it in order to continue working on it.”) Would you be willing to re-post it or move it so that it is visible?

Some other sources of exercises you might want to check out (that have solutions and that I have used at least partly):

- Multiple choice quizzes (the ones related to linear algebra are determinants, elementary matrices, inner product spaces, linear algebra, linear systems, linear transformations, matrices, and vector spaces)
- Vipul Naik’s quizzes (disclosure: I am friends with Vipul and also do contract work for him)

Regarding Axler’s book (since it has been mentioned in this thread): there are several “levels” of linear algebra, and Axler’s book is at a higher level (emphasis on abstract vector spaces and coordinate-free ways of doing things) than the 3Blue1Brown videos (more concrete, working in $\mathbb{R}^n$). Axler’s book also assumes that the reader has had exposure to the lower level material (e.g. he does not talk about row reduction and elementary matrices). So I’m not sure I would recommend it to someone starting out trying to learn the basics of linear algebra.

Gratuitous remarks:

- I think it is in some sense a feature, not a bug, that different resources cover the material in a different order and use different terminology, because it allows one to look at the subject from different perspectives. For instance, the “done right” in Axler’s book comes from one such change in perspective.
- I find that learning mathematics well takes an unintuitively long time; it might be unrealistic to expect to learn the material well unless one puts in a lot of effort.
- I think there is a case to be made for the importance of struggling in learning (disclosure: I am the author of the page).

Is there (or will there be) a way to see a list of the latest posts, restricted to posts that are questions? (I am wondering about this both in the GraphQL API and in the site UI.)

I think we are working off different editions. According to the errata, the condition for strict contraction was changed to $d(f(x), f(y)) < d(x, y)$ for all distinct $x, y \in X$.

Can you say more about why exercise 17.6.3 is wrong?

If we define $f\colon [0, \infty) \to [0, \infty)$ by $f(x) = x + \frac{1}{x+1}$, then for distinct $x, y \in [0, \infty)$ we have

$$|f(x) - f(y)| = |x - y| \left(1 - \frac{1}{(x+1)(y+1)}\right) < |x - y|.$$

We also have $f(x) \ne x$ for all $x \in [0, \infty)$ since $f(x) - x = \frac{1}{x+1} > 0$.

In general, the derivative is $f'(x) = 1 - \frac{1}{(x+1)^2}$, which is continuous on $[0, \infty)$.
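One function with these properties is $f(x) = x + 1/(x+1)$ on $[0, \infty)$ (my assumption of the intended example). A quick numerical sanity check of the strict-contraction and no-fixed-point claims:

```python
def f(x):
    # Candidate strict contraction on [0, infinity) with no fixed point
    # (assumed example): f(x) = x + 1/(x+1).
    return x + 1.0 / (x + 1.0)

# Strict contraction: |f(x) - f(y)| < |x - y| for distinct x, y >= 0.
for x, y in [(0.0, 1.0), (0.5, 2.0), (3.0, 100.0)]:
    assert abs(f(x) - f(y)) < abs(x - y)

# No fixed point: f(x) - x = 1/(x+1) > 0 for every x >= 0.
for x in [0.0, 1.0, 10.0, 1e6]:
    assert f(x) > x
```

This only spot-checks a few points, of course; the actual argument goes through the factorization $|f(x) - f(y)| = |x - y|\,(1 - \tfrac{1}{(x+1)(y+1)})$.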

There are some more data (post count, comment count, vote count, etc., but not pageviews) at “History of LessWrong: Some Data Graphics”.

I was surprised to see, both on your website and the white paper, that you are part of Mercatoria/ICTP (although your level of involvement isn’t clear based on public information). My surprise is mainly because you have a couple of comments on LessWrong that discuss why you have declined to join MIRI as a research associate. You have also (to my knowledge) never joined any other rationality-community or effective altruism-related organization in any capacity.

My questions are:

What are the reasons you decided to join or sign on as a co-author for Mercatoria/ICTP?

More generally, how do you decide which organizations to associate with? Have you considered joining other organizations, starting your own organization, or recruiting contract workers/volunteers to work on things you consider important?