I am Issa Rice. https://issarice.com/

# riceissa (Issa Rice)

It’s worth noting that there is also DuckDuckGo (a search engine), which has bang expressions for redirecting a search to another site. To give some of the equivalents for those listed above: “!gi” for Google Images, “!yt” for YouTube, “!w” for Wikipedia, etc. To be sure, one has to rely on DuckDuckGo to add new expressions (although I’ve had success suggesting a new expression before).
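Bang queries are just ordinary DuckDuckGo searches, so they can also be constructed as URLs. A small sketch (the helper name is my own, not part of any DuckDuckGo API):

```python
from urllib.parse import quote_plus

def ddg_bang_url(bang: str, query: str) -> str:
    """Build a DuckDuckGo search URL using a bang expression.

    DuckDuckGo redirects the query to the target site, e.g. "!w" goes
    to Wikipedia and "!yt" goes to YouTube.
    """
    return "https://duckduckgo.com/?q=" + quote_plus(f"!{bang} {query}")

print(ddg_bang_url("w", "Sperner's lemma"))
# https://duckduckgo.com/?q=%21w+Sperner%27s+lemma
```

Opening such a URL in a browser performs the redirect; the bang itself is just part of the query string.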

I usually ask these as questions on Quora. Quora is incredibly tolerant of even inane questions, and has the benefit of allowing others to provide feedback (in the form of answers and comments on the question). If a question has already been asked, you will also be able to read what others have written in response and/or follow the question for future answers. Quora also has the option of anonymizing questions. I’ve found that always converting my thoughts into questions has made me very conscious of what sorts of questions are interesting to ask (not that there’s anything wrong with that).

Another idea is to practice this with writing down dreams. After waking up, I often think “It’s not really worth writing that dream down anyway”, whereas in reality I would find it quite interesting if I came back to it later. Forcing oneself to write thoughts down even when one is not inclined to may lead to more sedulous record-keeping. (But this is just speculation.)

Gwern still links to some of muflax’s writings, using his own backups. Googling something like “site:gwern.net muflax” turns up some results (though not many).

I took the survey.

Okay, I’ve created a Facebook group here: https://www.facebook.com/groups/LessWrongTokyo/

(To be sure, I don’t currently live in Tokyo, but I visit there every summer and would be very interested in attending during that time.)

Hi Evan, did you ever write this post?

I confirm that I also experience this problem, but I don’t have additional insight on the cause.

I recently wrote an updated timeline. It includes not just formal publications, but also blog posts and conversations. To see just the formal publications, it is possible to sort by the “Format” column in the full timeline and look at the rows with “Paper”.

Thanks for the feedback. I could add word count. I’m not sure what you mean by quality rating; LW, OB, and the EA Forum have their own voting/rating mechanisms, but these aren’t comparable across sites (so putting them in a single column might be confusing, although grouping by venue and looking at ratings within each venue might be interesting). Summaries would be the most time-consuming to produce, and many of Carl’s posts already have summaries at the top.

In some recent comments over at the Effective Altruism Forum you talk about anti-realism about consciousness, saying in particular “the case for accepting anti-realism as the answer to the problem of consciousness seems pretty weak, at least as explained by Brian”. I am wondering if you could elaborate more on this. Does the case for anti-realism about consciousness seem weak because of your general uncertainty on questions like this? Or is it more that you find the case for anti-realism specifically weak, and you hold some contrary position?

I am especially curious since I was under the impression that many people on LessWrong hold essentially similar views.

Based on descriptions on the FHI website, it looks like Kyle Scott filled this role, from July 2015 to September 2017.

From the earliest snapshot of his FHI bio page:

Kyle brings over 5 years of operations experience to the Future of Humanity Institute. He keeps daily operations running smoothly, and manages incoming and outgoing requests for Prof. Nick Bostrom.

Strategically, he works to improve the processes and capacity of the office and free up the attention and time of Prof. Nick Bostrom.

Kyle came to the Future of Humanity Institute from the Effective Altruism movement, determining that this job position would be his most effective contribution to society. Learn more about Effective Altruism here.

The page is still up but it doesn’t look like he holds the position anymore.

He seems to be a project manager at BERI now:

Kyle manages various projects supporting BERI’s partner institutions. He graduated Whitman College with a B.A. in Philosophy. He spent two years working in career services and subsequently moved to Oxford where he worked for 80,000 Hours, the Centre for Effective Altruism and most recently at the Future of Humanity Institute as Nick Bostrom’s Executive Assistant.

On November 13, 2017, FHI opened the position for applications.

ETA: Louis Francini comes to the same conclusion on Quora. (Context: I asked the question on Quora, figured out the answer, posted this comment, then Louis answered my question.)

do we have any statistics about it?

For sessions and pageviews from Google Analytics, I wrote a post about it in April 2017. Since you mention scraping, perhaps you mean something like post and comment counts; if so, I’m not aware of any statistics about that.

Wei Dai has a web service for retrieving all the posts and comments of a particular user, which I find useful (not sure if you will find it useful for gathering statistics, but I thought I would mention it just in case).

I don’t see reference number 17 (“Personal correspondence with Carl Shulman”) used in the body of the post. What information from that reference is used in the post?

I was confused about this too, but now I think I have some idea of what’s going on.

Normally probability is defined for events, but expected value is defined for *random variables*, not events. What is happening in this post is that we are taking the expected value of events, by way of the conditional expected value of the random variable (conditioning on the event). In symbols, if $A$ is some event in our sample space, we are saying $\operatorname{pu}(A) = \Pr(A) \cdot \mathbb{E}[X \mid A]$, where $X$ is some random variable (this random variable is supposed to be clear from the context, so it doesn’t appear on the left-hand side of the equation).

Going back to cousin_it’s lottery example, we can formalize this as follows. The sample space can be $\Omega = \{\text{win}, \text{lose}\}$ and the probability measure is defined by $\Pr(\text{win}) = p$ and $\Pr(\text{lose}) = 1 - p$. The random variable $X$ represents the lottery, and it is defined by $X(\text{win}) = w$ and $X(\text{lose}) = 0$.

Now we can calculate. The expected value of the lottery is:

$$\mathbb{E}[X] = \Pr(\text{win})\,X(\text{win}) + \Pr(\text{lose})\,X(\text{lose}) = p \cdot w + (1-p) \cdot 0 = pw.$$

The expected value of winning is:

$$\mathbb{E}[X \mid \text{win}] = X(\text{win}) = w.$$

The “probutility” of winning is:

$$\Pr(\text{win}) \cdot \mathbb{E}[X \mid \text{win}] = p \cdot w = pw.$$

So in this case, the “probutility” of winning is the same as the expected value *of the lottery*. However, this is only the case because the situation is so simple. In particular, if $X(\text{lose})$ were not equal to zero (while winning and losing remained exclusive events), then the two would have been different (the expected value of the lottery would have changed while the “probutility” would have remained the same).
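The same calculation can be sketched numerically. The probability and payoff values below are illustrative assumptions (only the structure matters):

```python
# Illustrative (assumed) numbers: the lottery pays w = 100 with
# probability p = 0.01, and pays 0 otherwise.
p, w = 0.01, 100.0

pr = {"win": p, "lose": 1 - p}   # probability measure on the sample space
X = {"win": w, "lose": 0.0}      # random variable representing the lottery

# Expected value of the lottery: sum over outcomes of Pr * X.
ev_lottery = sum(pr[o] * X[o] for o in pr)

# Conditional expected value given winning: E[X | win] = X(win).
ev_given_win = X["win"]

# "Probutility" of winning: Pr(win) * E[X | win].
probutility_win = pr["win"] * ev_given_win

print(ev_lottery, probutility_win)  # both 1.0, since X(lose) = 0
```

Changing `X["lose"]` to a nonzero value changes `ev_lottery` but leaves `probutility_win` untouched, which is the divergence described above.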

I had a similar thought while reading this post, but I’m not sure invoking causality is necessary (having a direction still seems necessary). Just in terms of propositional logic, I would explain this post as follows:

1. Initially, one has the implication $A \Rightarrow B$ stored in one’s mind.

2. Someone asserts $A$.

3. Now one’s mind (perhaps subconsciously) does a modus ponens, and obtains $B$.

4. However, $B$ is an undesirable belief, so one wants to deny it.

5. Instead of rejecting the implication $A \Rightarrow B$, one adamantly denies $A$.

The “buckets error” is the implication $A \Rightarrow B$, and “flinching away” is the denial of $A$. Flinching away is about protecting one’s epistemology because denying $A$ is still better than accepting $B$. Of course, it would be best to reject the implication $A \Rightarrow B$, but since one can’t do this (by assumption, one makes the buckets error), it is preferable to “flinch away” from $A$.

ETA (2019-02-01): It occurred to me that this is basically the same thing as “one man’s modus ponens is another man’s modus tollens” (see e.g. this post) but with some extra emotional connotations.
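The ponens/tollens symmetry can be checked mechanically by enumerating truth assignments. A brute-force sketch (not from the original comment):

```python
from itertools import product

# All truth assignments (A, B) consistent with the implication A => B.
consistent = [(a, b) for a, b in product([False, True], repeat=2)
              if (not a) or b]

# Modus ponens: in every consistent world where A holds, B holds.
assert all(b for a, b in consistent if a)

# Modus tollens (the "flinch"): in every consistent world where B fails,
# A must fail too -- keeping the implication and denying B forces one
# to deny A.
assert all(not a for a, b in consistent if not b)

print(consistent)
```

The same three consistent worlds support both inference directions; which one a person runs depends on which belief they are more attached to.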


I’m having trouble understanding why we can’t just fix $n = 2$ in your proof. Then at each iteration we bisect the interval, so we wouldn’t be using the “full power” of the 1-D Sperner’s lemma (we would just be using something close to the base case).

Also, if we are only given that $f$ is continuous, does it make sense to talk about the gradient?

Here is my attempt, based on Hoagy’s proof.

Let $n \geq 1$ be an integer. We are given that $f(0) < 0$ and $f(1) > 0$. Now consider the points $0, \frac{1}{n}, \frac{2}{n}, \ldots, \frac{n-1}{n}, 1$ in the interval $[0,1]$. By 1-D Sperner’s lemma, there are an odd number of $k \in \{0, \ldots, n-1\}$ such that $f(k/n) < 0$ and $f((k+1)/n) \geq 0$ (i.e. an odd number of “segments” that begin below zero and end up at or above zero). In particular, zero is an even number, so there must be at least one such number $k$. Choose the smallest and call this number $k_n$.

Now consider the sequence $(k_n/n)_{n \geq 1}$. Since this sequence takes values in $[0,1]$, it is bounded, and by the Bolzano–Weierstrass theorem there must be some subsequence $(k_{n_j}/n_j)_{j \geq 1}$ that converges to some number $x \in [0,1]$.

Consider the sequences $(k_{n_j}/n_j)_{j \geq 1}$ and $((k_{n_j}+1)/n_j)_{j \geq 1}$. We have $f(k_{n_j}/n_j) < 0 \leq f((k_{n_j}+1)/n_j)$ for each $j$. By the limit laws, $(k_{n_j}+1)/n_j \to x$ as $j \to \infty$. Since $f$ is continuous, we have $f(k_{n_j}/n_j) \to f(x)$ and $f((k_{n_j}+1)/n_j) \to f(x)$ as $j \to \infty$. Thus $f(x) \leq 0$ and $f(x) \geq 0$, showing that $f(x) = 0$, as desired.
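As a numerical sanity check of the construction (not part of the proof), the point picked out at each subdivision level can be computed directly; the function here is an arbitrary illustrative choice with $f(0) < 0 < f(1)$:

```python
# Subdivide [0, 1] into n pieces and pick the first segment whose left
# endpoint is below zero and whose right endpoint is at or above zero.
# The ratio k_n / n then approaches a root of f.

def f(x: float) -> float:
    # Illustrative example, not from the original: root at x = 0.5.
    return x * x - 0.25

def first_sign_change(f, n: int) -> int:
    """Return the smallest k with f(k/n) < 0 <= f((k+1)/n)."""
    for k in range(n):
        if f(k / n) < 0 <= f((k + 1) / n):
            return k
    raise ValueError("no sign change found")

for n in [10, 100, 1000, 10000]:
    k = first_sign_change(f, n)
    print(n, k / n)   # approaches the root 0.5
```

The Bolzano–Weierstrass step in the proof is what guarantees these approximants have a convergent subsequence even when no explicit root is known.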

My solution for #3:

Define $g : [0,1] \to \mathbb{R}$ by $g(x) = f(x) - x$. We know that $g$ is continuous because $f$ and the identity map both are, and by the limit laws. Since $f$ maps into $[0,1]$, we have $g(0) = f(0) \geq 0$ and $g(1) = f(1) - 1 \leq 0$. Applying the intermediate value theorem (problem #2) we see that there exists $x \in [0,1]$ such that $g(x) = 0$. But this means $f(x) = x$, so we are done.

Counterexample for the open interval: consider $f : (0,1) \to (0,1)$ defined by $f(x) = x/2$. First, we can verify that if $x \in (0,1)$ then $x/2 \in (0,1)$, so $f$ indeed maps $(0,1)$ to $(0,1)$. To see that there is no fixed point, note that the only solution to $x/2 = x$ is $x = 0$, which is not in $(0,1)$. (We can also view this graphically by plotting both $y = x/2$ and $y = x$ and checking that they do not intersect in $(0,1)$.)
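Both claims can be spot-checked numerically. The specific maps below are illustrative assumptions: an arbitrary continuous self-map of $[0,1]$ for the fixed-point claim, and the halving map for the open-interval counterexample:

```python
def g_sign_change(f) -> bool:
    """For f mapping [0,1] into [0,1], check that g(x) = f(x) - x is
    >= 0 at 0 and <= 0 at 1, so the IVT yields a fixed point."""
    g = lambda x: f(x) - x
    return g(0.0) >= 0 and g(1.0) <= 0

# An arbitrary continuous map [0,1] -> [0,1]; the IVT applies to g.
print(g_sign_change(lambda x: (x + 1) / 3))  # True

# The open-interval counterexample: f(x) = x/2 maps (0,1) into (0,1),
# but f(x) = x only at x = 0, which lies outside (0,1).
samples = [0.001, 0.25, 0.5, 0.999]
print(all(x / 2 != x for x in samples))      # True: no fixed point
```

The sign check is exactly why the closed endpoints matter: on $(0,1)$ the map $x/2$ pushes every point strictly toward the missing endpoint $0$.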

There are some more data (post count, comment count, vote count, etc., but not pageviews) at “History of LessWrong: Some Data Graphics”.

I am also interested in doing Japanese translations.