# mtaran

Karma: 260
• From a Hacker News thread on the difficulty of finding or making food that’s fast, cheap, and healthy.

“Former poet laureate of the US, Charles Simic says, the secret to happiness begins with learning how to cook.”—pfarrell

Reply: “Well, I’m sure there’s some economics laureate out there who says that the secret to efficiency begins with comparative advantage.”—Eliezer Yudkowsky

• Edit: Looks like I/we at Harvey Mudd don’t really have a car (or person to drive it), so unless someone is going to be driving by Claremont, I don’t think I’ll be able to make it.

• Out of curiosity I did this for the first experiment (anticipating erotic images). He had 100 people in the experiment: 40 of them did 12 trials with erotic images, and 60 did 18 trials. So there were 40 × 12 + 60 × 18 = 1560 trials total.

You can get a likelihood ratio by taking P(observed results | precognitive power is 53%) / P(observed results | precognitive power is 50%). This ends up being (.53^827 × .47^733) / (.5^1560) ≈ 17.

So if you had prior odds of 1:100 against people having precognitive power of 53%, then after seeing the results of the experiment you should have posterior odds of about 1:6 against. So you can see that this by itself is not earth-shattering evidence, but it is significant.
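If you want to check the numbers, here’s a minimal Python sketch of the calculation above; it works in log space because the raw products like .53^827 × .47^733 underflow double precision:

```python
import math

# Trial counts from the first experiment, as tallied above:
# 40 subjects × 12 trials + 60 subjects × 18 trials = 1560 trials,
# 827 of them hits.
hits, trials = 827, 1560
misses = trials - hits

# Log of the likelihood ratio P(data | p = .53) / P(data | p = .50);
# the binomial coefficient is identical under both hypotheses, so it cancels.
log_lr = hits * math.log(0.53) + misses * math.log(0.47) - trials * math.log(0.50)
lr = math.exp(log_lr)                # ≈ 17

prior_odds = 1 / 100                 # 1:100 in favor of 53% precognition
posterior_odds = prior_odds * lr     # ≈ 0.17, i.e. about 1:6 against

print(f"likelihood ratio ≈ {lr:.1f}")
print(f"posterior odds ≈ 1:{1 / posterior_odds:.1f} against")
```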

Try doing analyses for the other experiments if you’re interested!

• I’ve reserved the Platt Conference Room (same place as the previous HMC LW meetings have been) from 2 to 8 on Sunday. Staying later than that wouldn’t be a problem, and we can either get some food from one of the cafeterias around here or order takeout from somewhere.

• The general plan for this month’s meetup is to try to get more people unfamiliar with LW and x-rationality (particularly other HMC students) to come. I’m not sure to what extent this will be successful, but if it is, it would be nice to have some introductory talks about how rationality can have good practical benefits and help you achieve your goals.

I’d encourage people who are planning on coming to have some examples from their own lives of how rationality has been particularly useful.

• I’ll be attending, since it just so happens I’m in this corner of the world right now :)

• HP:MoR 82

The two of them did not speak for a time, looking at each other; as though all they had to speak could be said only by stares, and not said in any other way.

He gives up on using his words and tries to communicate with only his eyes. Oh, how they bulge and struggle to convey unthinkable meaning!

Was there any inspiration?

• I have donated \$1000, and I really do believe that our community can get her fully funded. I understand how CI has to be cautious about these sorts of things, but I’ve seen enough evidence to be more than convinced.

• 19 Aug 2012 1:46 UTC
12 points

There are a lot of things I’d like to say, but you have put forth a prediction:

“It’s probably a scam.”

I would like to take up a bet with you on this ending up being a scam. This can be arbitrated by some prominent member of CI, Alcor, or Rudi Hoffman. I would win if an arbiter decides that the person who posted on Reddit was in fact diagnosed with cancer essentially as stated in her Reddit posts, and is in fact gathering money for her own cryonics arrangements. If none of the proposed arbiters can vouch for the above within one month (through September 18), then you will win the bet.

What odds would you like on this, and what’s the maximum amount of money you’d put on the line?

• 19 Aug 2012 2:15 UTC
7 points

Done. \$100 from you vs \$1000 from me. If you lose, you donate it to her fund. If I lose, I can send you the money or do with it what you wish.

• 22 Aug 2012 4:41 UTC
0 points

Ok, I misread one of gwern’s replies. My original intent was to extract money from the fact that gwern gave (from my vantage point) too high a probability of this being a scam.

Under my original version of the terms, if his P(scam) was .1:

• he would expect to win \$1000 10% of the time

• he would expect to lose \$100 90% of the time

• yielding an expected value of \$10

Under my original version of the terms, if his P(scam) was .05:

• he would expect to win \$1000 5% of the time

• he would expect to lose \$100 95% of the time

• yielding an expected value of -\$45

In the second case, he would of course not want to take that bet. I’d thus like to amend my suggested conditions to have gwern only put \$52 at stake against my \$1000. For any P(scam) > .05 this is a positive expected value, so I would expect it to have been satisfactory to gwern [19 August 2012 01:53:58AM].
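Spelling the arithmetic out in a short Python sketch (the breakeven-stake line at the end is just my reconstruction of where the \$52 figure comes from):

```python
def bettor_ev(p_scam, stake, payout=1000):
    """Expected value for the side betting 'scam': win `payout` with
    probability p_scam, lose `stake` otherwise."""
    return p_scam * payout - (1 - p_scam) * stake

print(bettor_ev(0.10, 100))   #  10.0 -> worth taking at P(scam) = .1
print(bettor_ev(0.05, 100))   # -45.0 -> not worth taking at P(scam) = .05
print(bettor_ev(0.05, 52))    #   0.6 -> positive again with a $52 stake

# Breakeven stake for a given P(scam): p * payout = (1 - p) * stake
p = 0.05
print(p * 1000 / (1 - p))     # ~52.63, hence the $52 figure
```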

• The problem definition talks about clusters in the space of books, but to me it’s cleaner to look at regions of token-space, and token-sequences as trajectories through that space.

GPT is a generative model, so it can provide a probability distribution over the next token given some previous tokens. I assume that the basic model of a cluster can also provide a probability distribution over the next token.

With these two distribution generators in hand, you could generate books by multiplying the two distributions together when generating each new token. This will bias the story towards the desired cluster while still letting GPT guide the overall dynamics. Some hyperparameter tuning of the weighting between the two contributions will be necessary.
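A rough sketch of what that combining step might look like, assuming a Hugging Face-style GPT-2 interface; `cluster_log_probs` is a hypothetical stand-in for the cluster model (a uniform placeholder here), and `alpha` is the weighting hyperparameter mentioned above:

```python
import math
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def cluster_log_probs(input_ids):
    """Hypothetical stand-in for the cluster model: a log-distribution over
    the next token given the tokens so far. Uniform here as a placeholder."""
    vocab = model.config.vocab_size
    return torch.full((vocab,), -math.log(vocab))

def sample_next_token(input_ids, alpha=0.5):
    """Sample from the re-normalized weighted product of the two distributions.
    In log space the product becomes a weighted sum of log-probabilities."""
    with torch.no_grad():
        gpt_logits = model(input_ids).logits[0, -1]
    combined = F.log_softmax(gpt_logits, dim=-1) + alpha * cluster_log_probs(input_ids)
    probs = F.softmax(combined, dim=-1)  # re-normalize after the product
    return torch.multinomial(probs, num_samples=1)

ids = tokenizer("It was a dark and stormy night", return_tensors="pt").input_ids
for _ in range(20):  # generation loop, sampling one token at a time
    next_id = sample_next_token(ids)
    ids = torch.cat([ids, next_id.unsqueeze(0)], dim=-1)
print(tokenizer.decode(ids[0]))
```

Working in log space keeps the weighted product numerically stable, and sweeping `alpha` is the hyperparameter tuning referred to above.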

You could then fine-tune GPT using the generated books to break the dependency on the original model.

Seems like a fun project to try with GPT-3, though probably even GPT-2 would give some interesting results.