# riceissa

Karma: 203 (LW), 0 (AF)


Is there (or will there be) a way to see a list of the latest posts, restricted to posts that are questions? (I am wondering about this both in the GraphQL API and in the site UI.)
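For the GraphQL side, a sketch of what such a request might look like. The `posts(input: {terms: ...})` shape follows the API's usual pattern, but the `"questions"` view name is a guess — whether a question filter exists is exactly what I am asking.

```python
import json

# Sketch of a request to LessWrong's GraphQL endpoint for recent
# question posts. The posts(input: {terms: ...}) shape is the API's
# usual pattern, but the "questions" view name is an assumption --
# whether such a filter exists is exactly what the comment asks.
query = """
{
  posts(input: {terms: {view: "questions", limit: 10}}) {
    results {
      title
      postedAt
    }
  }
}
"""

payload = json.dumps({"query": query})
# This payload would be POSTed to https://www.lesswrong.com/graphql
# with Content-Type: application/json.
```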

I think we are working off different editions. According to the errata, the condition for strict contraction was changed to d(f(x), f(y)) < d(x, y) for all distinct x, y ∈ X.

Can you say more about why exercise 17.6.3 is wrong?

If we define by then for distinct we have

We also have since

In general, the derivative is , which is continuous on .
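One standard example of such a map (possibly not the same function as above) is f(x) = x + 1/(x + 1) on [0, ∞): it strictly shrinks distances between distinct points, since f′(x) = 1 − 1/(x + 1)² lies in [0, 1), yet f(x) − x = 1/(x + 1) > 0 everywhere, so there is no fixed point. A quick numerical spot check:

```python
# Spot check for f(x) = x + 1/(x + 1) on [0, infinity): a map that
# strictly decreases distances between distinct points but has no
# fixed point. (A standard example; the function in the comment
# above may have been a different one.)
def f(x):
    return x + 1.0 / (x + 1.0)

points = [0.0, 0.5, 1.0, 2.0, 10.0, 100.0]

# Strictly distance-decreasing on every distinct pair sampled.
for x in points:
    for y in points:
        if x != y:
            assert abs(f(x) - f(y)) < abs(x - y)

# No fixed point: f(x) - x = 1/(x + 1) is always positive.
for x in points:
    assert f(x) > x
```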

There are some more data (post count, comment count, vote count, etc., but not pageviews) at “History of LessWrong: Some Data Graphics”.

My solution for #3:

Define g : [0, 1] → ℝ by g(x) := f(x) − x. We know that g is continuous because f and the identity map both are, and by the limit laws. Since f takes values in [0, 1], we have g(0) = f(0) ≥ 0 and g(1) = f(1) − 1 ≤ 0. Applying the intermediate value theorem (problem #2) we see that there exists c ∈ [0, 1] such that g(c) = 0. But this means f(c) = c, so we are done.
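The argument can be illustrated numerically: once g(x) = f(x) − x satisfies g(0) ≥ 0 ≥ g(1), bisection on g locates a fixed point of f. Here f = cos is an arbitrary example of a continuous map of [0, 1] into itself:

```python
# Numerical illustration of the fixed-point argument: for continuous
# f: [0, 1] -> [0, 1], define g(x) = f(x) - x. Then g(0) >= 0 and
# g(1) <= 0, so the intermediate value theorem gives a zero of g,
# i.e. a fixed point of f. The choice f = cos is just an example
# (it maps [0, 1] into [cos(1), 1], a subset of [0, 1]).
import math

def f(x):
    return math.cos(x)

def g(x):
    return f(x) - x

assert g(0.0) >= 0 and g(1.0) <= 0  # the sign condition the IVT needs

# Bisection: maintain an interval [lo, hi] with g(lo) >= 0 >= g(hi).
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) >= 0:
        lo = mid
    else:
        hi = mid

fixed_point = (lo + hi) / 2
assert abs(f(fixed_point) - fixed_point) < 1e-9  # cos's fixed point, ~0.739085
```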

Counterexample for the open interval: consider defined by . First, we can verify that if then , so indeed maps to . To see that there is no fixed point, note that the only solution to in is , which is not in . (We can also view this graphically by plotting both and and checking that they do not intersect in .)

Here is my attempt, based on Hoagy’s proof.

Let be an integer. We are given that and . Now consider the points in the interval . By 1-D Sperner’s lemma, there are an odd number of such that and (i.e. an odd number of “segments” that begin below zero and end up above zero). In particular, is an even number, so there must be at least one such number . Choose the smallest and call this number .

Now consider the sequence . Since this sequence takes values in , it is bounded, and by the Bolzano–Weierstrass theorem there must be some subsequence that converges to some number .

Consider the sequences and . We have for each . By the limit laws, as . Since is continuous, we have and as . Thus and , showing that , as desired.
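The construction in the proof can be imitated numerically with a concrete (assumed) choice of f: for each n, scan a dyadic partition of [0, 1], take the first segment whose left endpoint has f < 0 and whose right endpoint has f ≥ 0, and watch the chosen endpoints squeeze a zero of f.

```python
# Imitating the proof's construction with an assumed example f
# satisfying f(0) < 0 and f(1) > 0.
def f(x):
    return x * x - 0.5  # f(0) = -0.5 < 0, f(1) = 0.5 > 0

def first_crossing(n):
    """Left endpoint a_n of the first sign-changing segment of width 1/2^n."""
    m = 2 ** n
    for i in range(m):
        if f(i / m) < 0 and f((i + 1) / m) >= 0:
            return i / m
    raise ValueError("no sign-changing segment found")

a = [first_crossing(n) for n in range(1, 18)]

# a_n (and b_n = a_n + 1/2^n) converge to sqrt(1/2), the zero of f,
# mirroring the Bolzano-Weierstrass / continuity step of the proof.
assert abs(a[-1] - 0.5 ** 0.5) < 1e-5
```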

I’m having trouble understanding why we can’t just fix in your proof. Then at each iteration we bisect the interval, so we wouldn’t be using the “full power” of the 1-D Sperner’s lemma (we would just be using something close to the base case).

Also, if we are only given that f is continuous, does it make sense to talk about the gradient?

I had a similar thought while reading this post, but I’m not sure invoking causality is necessary (having a direction still seems necessary). Just in terms of propositional logic, I would explain this post as follows:

1. Initially, one has the implication A → B stored in one’s mind.

2. Someone asserts A.

3. Now one’s mind (perhaps subconsciously) does a modus ponens, and obtains B.

4. However, B is an undesirable belief, so one wants to deny it.

5. Instead of rejecting the implication A → B, one adamantly denies A.

The “buckets error” is the implication A → B, and “flinching away” is the denial of A. Flinching away is about protecting one’s epistemology because denying A is still better than accepting B. Of course, it would be best to reject the implication A → B, but since one can’t do this (by assumption, one makes the buckets error), it is preferable to “flinch away” from A.

ETA (2019-02-01): It occurred to me that this is basically the same thing as “one man’s modus ponens is another man’s modus tollens” (see e.g. this post) but with some extra emotional connotations.
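The ponens/tollens symmetry can be checked mechanically: among truth assignments satisfying an implication, asserting the antecedent forces the consequent, while denying the consequent forces denying the antecedent. A small enumeration, using generic letters A and B:

```python
from itertools import product

# Enumerate all truth assignments for (A, B) and keep those that
# satisfy the implication A -> B (equivalent to (not A) or B).
implication_holds = [
    (a, b) for a, b in product([False, True], repeat=2) if (not a) or b
]

# Modus ponens: wherever A -> B and A both hold, B holds.
assert all(b for a, b in implication_holds if a)

# Modus tollens: wherever A -> B holds but B fails, A must fail too --
# so denying B while keeping the implication forces one to deny A.
assert all(not a for a, b in implication_holds if not b)
```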

I was confused about this too, but now I think I have some idea of what’s going on.

Normally probability is defined for events, but expected value is defined for *random variables*, not events. What is happening in this post is that we are taking the expected value of events, by way of the conditional expected value of a random variable (conditioning on the event). In symbols, if A is some event in our sample space, we are saying something like Probutility(A) = P(A) · E[X | A], where X is some random variable (this random variable is supposed to be clear from the context, so it doesn’t appear on the left hand side of the equation).

Going back to cousin_it’s lottery example, we can formalize this as follows. The sample space can be Ω = {win, lose}, and the probability measure is defined by P({win}) = p and P({lose}) = 1 − p. The random variable X represents the lottery, and it is defined by X(win) = W (the prize) and X(lose) = 0.

Now we can calculate, writing p = P({win}) and W = X(win). The expected value of the lottery is:

E[X] = p · W + (1 − p) · 0 = pW

The expected value of winning is:

E[X | win] = W

The “probutility” of winning is:

P(win) · E[X | win] = p · W = pW

So in this case, the “probutility” of winning is the same as the expected value *of the lottery*. However, this is only the case because the situation is so simple. In particular, if X(lose) was not equal to zero (while winning and losing remained exclusive events), then the two would have been different (the expected value of the lottery would have changed while the “probutility” of winning would have remained the same).
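With made-up numbers for illustration (a p = 0.001 chance of winning a prize of W = 1000), the computation looks like:

```python
# Formalization of the lottery example with assumed numbers.
# Sample space: {"win", "lose"}; X is the lottery's payout.
p = 0.001          # P(win) -- made up for illustration
W = 1000.0         # X(win) -- made up for illustration
prob = {"win": p, "lose": 1 - p}
X = {"win": W, "lose": 0.0}

# Expected value of the lottery: E[X] = p*W + (1-p)*0.
ev_lottery = sum(prob[s] * X[s] for s in prob)

# Expected value of winning: E[X | win] = X(win).
ev_win = X["win"]

# "Probutility" of winning: P(win) * E[X | win].
probutility_win = prob["win"] * ev_win

# The two coincide here only because X(lose) = 0.
assert ev_lottery == probutility_win
assert abs(ev_lottery - 1.0) < 1e-9

# With a nonzero losing payout, E[X] changes but the probutility
# of winning does not.
X["lose"] = -1.0
ev_lottery_2 = sum(prob[s] * X[s] for s in prob)
assert ev_lottery_2 != probutility_win
assert prob["win"] * X["win"] == probutility_win
```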

I don’t see reference number 17 (“Personal correspondence with Carl Shulman”) used in the body of the post. What information from that reference is used in the post?

do we have any statistics about it?

For sessions and pageviews from Google Analytics, I wrote a post about it in April 2017. Since you mention scraping, perhaps you mean something like post and comment counts; if so, I’m not aware of any statistics about that.

Wei Dai has a web service, which I find useful, that retrieves all posts and comments of particular users (not sure if you will find it useful for gathering statistics, but I thought I would mention it just in case).

Based on descriptions on the FHI website, it looks like Kyle Scott filled this role, from July 2015 to September 2017.

From the earliest snapshot of his FHI bio page:

Kyle brings over 5 years of operations experience to the Future of Humanity Institute. He keeps daily operations running smoothly, and manages incoming and outgoing requests for Prof. Nick Bostrom.

Strategically, he works to improve the processes and capacity of the office and free up the attention and time of Prof. Nick Bostrom.

Kyle came to the Future of Humanity Institute from the Effective Altruism movement, determining that this job position would be his most effective contribution to society. Learn more about Effective Altruism here.

The page is still up but it doesn’t look like he holds the position anymore.

He seems to be a project manager at BERI now:

Kyle manages various projects supporting BERI’s partner institutions. He graduated Whitman College with a B.A. in Philosophy. He spent two years working in career services and subsequently moved to Oxford where he worked for 80,000 Hours, the Centre for Effective Altruism and most recently at the Future of Humanity Institute as Nick Bostrom’s Executive Assistant.

On November 13, 2017, FHI opened the position for applications.

ETA: Louis Francini comes to the same conclusion on Quora. (Context: I asked the question on Quora, figured out the answer, posted this comment, then Louis answered my question.)

In some recent comments over at the Effective Altruism Forum you talk about anti-realism about consciousness, saying in particular “the case for accepting anti-realism as the answer to the problem of consciousness seems pretty weak, at least as explained by Brian”. I am wondering if you could elaborate more on this. Does the case for anti-realism about consciousness seem weak because of your general uncertainty on questions like this? Or is it more that you find the case for anti-realism specifically weak, and you hold some contrary position?

I am especially curious since I was under the impression that many people on LessWrong hold essentially similar views.

Thanks for the feedback. I could add word count. Not sure what you mean by quality rating; LW, OB, and the EA Forum have their own voting/rating mechanisms, but these are not comparable across venues (so putting them in a single column might be confusing, although grouping by venue and looking at ratings within each venue might be interesting). A summary column would be the most time-consuming to produce, and many of Carl’s posts already have summaries at the top.

I recently wrote an updated timeline. It includes not just formal publications, but also blog posts and conversations. To see just the formal publications, it is possible to sort by the “Format” column in the full timeline and look at the rows with “Paper”.

I confirm that I also experience this problem, but I don’t have additional insight on the cause.

Hi Evan, did you ever write this post?

Okay I’ve created a Facebook group here: https://www.facebook.com/groups/LessWrongTokyo/

(To be sure, I don’t currently live in Tokyo, but I visit there every summer and would be very interested in attending during that time.)

I took the survey.

Some other sources of exercises you might want to check out (that have solutions and that I have used at least partly):

- Multiple choice quizzes (the ones related to linear algebra are determinants, elementary matrices, inner product spaces, linear algebra, linear systems, linear transformations, matrices, and vector spaces)
- Vipul Naik’s quizzes (disclosure: I am friends with Vipul and also do contract work for him)

Regarding Axler’s book (since it has been mentioned in this thread): there are several “levels” of linear algebra, and Axler’s book is at a higher level (emphasis on abstract vector spaces and coordinate-free ways of doing things) than the 3Blue1Brown videos (more concrete, working in ℝⁿ). Axler’s book also assumes that the reader has had exposure to the lower level material (e.g. he does not talk about row reduction and elementary matrices). So I’m not sure I would recommend it to someone starting out trying to learn the basics of linear algebra.

Gratuitous remarks:

- I think different resources covering material in a different order and using different terminology is in some sense a feature, not a bug, because it allows one to look at the subject from different perspectives. For instance, the “done right” in Axler’s book comes from one such change in perspective.
- I find that learning mathematics well takes an unintuitively long time; it might be unrealistic to expect to learn the material well unless one puts in a lot of effort.
- I think there is a case to be made for the importance of struggling in learning (disclosure: I am the author of the page).