That may have, in fact, been the point. I doubt many people bothered to check.
lalaithion
I can’t wait to see the Cooperate/Defect ratio. I, for one, chose to cooperate.
“ ‘striving for the impossible’ doesn’t mean ‘toiling in vain’. It means growth, it means improvement in the directions of your ideas, not futility.”
I’m getting an error when trying to access the files. “Something went wrong. Don’t worry, your files are still safe and the Dropboxers have been notified. Check out our Help Center and forums for help, or head back to home.”
Wish I could come, but I’ve got a class until 6 and I live in Boulder. But I’m commenting here to express my interest in future meetups in this general area.
*lalaithion—Too many thoughts, too little time
It would be easy to construct situations where historians could have opportunities to make and test hypotheses. Just find a section of history they don’t know anything about, and give them a summary of 99 years, and ask them to predict what happens in the 100th. Or give them a summary of a couple years and ask them to fill in more complex details. Or give them descriptions of what happened on either side of a year, and ask them to figure out what happens during that year. Then see if they predict accurate things.
Did it! I’m shocked that my digit ratio is so high. Like, I figured that it was pretty high, being a bisexual genderfluid “man” (assigned at birth, that is), but I didn’t expect it to be greater than 1. Also, it was much shorter than I expected.
For me, personally, I know that you could choose a person at random in the world, write a paragraph about them, and give it to me, and by doing that, I would care about them a lot more than before I had read that piece of paper, even though reading that paper hadn’t changed anything about them. Similarly, becoming friends with someone doesn’t usually change the person that much, but increases how much I care about them an awful lot.
Therefore, I look at all 7 billion people in the world, and even though I barely care about them, I know that it would be trivial for me to increase how much I care about any one of them, and therefore I should care about them as if I had already completed that process, even if I haven’t.
Maybe a better way of putting this is that I know that all of the people in the world are potential carees of mine, so I should act as though I already care about these people in deference to possible future-me.
I actually think that your internal dialogue was a pretty accurate representation of what I was failing to say. And as for self consistency having to be natural, I agree, but if you’re aware that you’re being inconsistent, you can still alter your actions to try and correct for that fact.
I honestly don’t understand whether this is criticising Matt Taylor or criticising Taylor’s critics.
I agree. I only know the name ’cause I clicked through the links. Like, okay, maybe the ESA should hire someone who will say “don’t wear that shirt in front of the cameras when you give the interview.” But it really isn’t a big deal.
I think that, while it is indeed possible for asexuality to arise that way, most evidence seems to point away from that conclusion....
If this is a joke, I love it.
If this isn’t a joke, it’s probably just a typo.
Metabeliefs! Applied math concepts that seem useless now have, in the past, become useful. Therefore, the belief that “believing in applied math concepts pays rent in experience” itself pays rent in experience, so you should believe it.
This is an excellent quote… I had to write an essay last semester for one of my classes on how I would design my preferred interface, and I basically wrote my entire essay using this quote.
I took a fairly black-box approach to this problem. Basically, we want a function f(str, dex, con, int, wis, cha) which outputs a chance of success, and then we want to optimize our selection so that we have the highest chance. The optimization part is easy because it’s discrete; once we have a function, we can simply evaluate it at all of the possible inputs and select the best one.
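The enumeration step can be sketched in a few lines. The base stat array, the point budget, and the toy scoring function below are all stand-ins (the real f would be a fitted model’s predicted success probability, and the puzzle’s actual point-buy rules aren’t reproduced here) — this just shows that once f exists, picking the best build is exhaustive search over a small discrete space.

```python
# Hypothetical base stats and point budget -- stand-ins for the
# puzzle's actual numbers, which aren't given in this comment.
BASE = (6, 14, 13, 13, 10, 6)   # str, dex, con, int, wis, cha
BUDGET = 10

def f(str_, dex, con, int_, wis, cha):
    """Stand-in for the fitted model's predicted success probability.
    Toy weights: any callable returning a score works here."""
    return 0.03 * wis + 0.02 * cha + 0.01 * str_

def all_allocations(budget, slots=6):
    """Yield every way to distribute `budget` points across `slots` stats."""
    if slots == 1:
        yield (budget,)
        return
    for i in range(budget + 1):
        for rest in all_allocations(budget - i, slots - 1):
            yield (i,) + rest

def best_build():
    """Evaluate f at every legal stat array and return the argmax."""
    candidates = (tuple(b + a for b, a in zip(BASE, alloc))
                  for alloc in all_allocations(BUDGET))
    return max(candidates, key=lambda stats: f(*stats))
```

With 10 points over 6 stats that’s only 3003 candidates, so brute force is instant; the hard part is estimating f, not optimizing it.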
I used a number of different ML models to estimate f, and I got pretty consistent Brier scores on reserved test data of ~0.2, which isn’t great, but isn’t awful. I used scikit-learn with an MLPClassifier, LogisticRegression, GaussianNB, and RandomForestClassifier, wrapped in CalibratedClassifierCV so that they produced calibrated probability scores. Most of them I left on their defaults, but I played around with the layers in the MLPClassifier until it had a pretty good Brier score.
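The fit-calibrate-score loop looks roughly like this. The data below is synthetic (labels generated from a made-up logistic relationship on the WIS and CHA columns), so the exact Brier score won’t match the ~0.2 reported above; the point is the CalibratedClassifierCV wrapper and the held-out evaluation, not the numbers.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake stat arrays: 500 characters, six stats each in 6..20.
X = rng.integers(6, 21, size=(500, 6)).astype(float)

# Fake labels: success more likely with high WIS (col 4) and CHA (col 5).
p = 1.0 / (1.0 + np.exp(-(0.2 * X[:, 4] + 0.1 * X[:, 5] - 4.0)))
y = (rng.random(500) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Wrap the base estimator so predict_proba is calibrated;
# the same pattern works for MLPClassifier, GaussianNB, etc.
model = CalibratedClassifierCV(LogisticRegression(max_iter=1000), cv=5)
model.fit(X_tr, y_tr)

probs = model.predict_proba(X_te)[:, 1]
score = brier_score_loss(y_te, probs)  # lower is better; 0.25 = coin flip
```

Swapping in the other three estimators and comparing their scores (and their argmax builds) is then a four-line loop.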
Despite the fact that these models all had similar Brier scores, they had surprisingly different recommendations. The Neural Net wanted to give small bumps to strength, wisdom, and charisma. Logistic Regression wanted to go all-in on wisdom, putting any remaining points into charisma. Gaussian Naive Bayes wanted to put most of the points into charisma, but oddly, not all; it wanted to also sprinkle a few points into wisdom. The Random Forest Classifier wanted to bring strength and charisma up a little, but mostly sink points into wisdom, and occasionally scatter points into constitution or intelligence.
The top recommendation for each method is as follows:
Neural Net: 8, 14, 13, 13, 15, 9
Logistic Regression: 6, 14, 13, 13, 20, 6
Naive Bayes: 6, 14, 13, 13, 14, 12
Random Forest: 8, 14, 13, 13, 15, 9
Well, I think that the Neural Net and Random Forest I used in the last post both saw pretty much what you were going for, with the exception that they both put one too many points into CHA, bumping it up to 9, instead of into WIS.
All in all, a success for throwing lots of data into an ML model you don’t fully understand and walking away… except that I had two other models which performed abysmally.
Hmm, I disagree with the “one intuition” way of looking at finances. Yes, you can’t drop your expenses by more than 100%, and you can increase your income by more than 100%, but what you really care about is increasing the ratio of income to expenses. In this context, halving your expenses is equivalent to doubling your salary, and if you drop your expenses to zero, that’s equivalent to increasing your income to infinity.
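The symmetry is easy to check with made-up numbers (these figures are purely illustrative): halving expenses and doubling income move the income-to-expense ratio by exactly the same factor.

```python
def income_expense_ratio(income, expenses):
    """The quantity argued for above: income divided by expenses."""
    return income / expenses

base = income_expense_ratio(4000, 2000)   # illustrative monthly figures

doubled_income  = income_expense_ratio(8000, 2000)  # income x2
halved_expenses = income_expense_ratio(4000, 1000)  # expenses / 2
```

Both moves double the ratio, and as expenses approach zero the ratio grows without bound, which is the “equivalent to infinite income” limit.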
My name is Izaak. I stumbled across HPMOR one weekend while staying in a hotel room. I didn’t sleep that night. I’ve read through most of Less Wrong, and some of the stuff on the other sites like Overcoming Bias. I’m a high school senior who will probably major in Comp Sci in college.
I’ve found the stuff on this website truly useful, but I have a question. I’m currently in the IB Diploma Programme, which includes a class called TOK (Theory of Knowledge; it’s truly awful, with very little actual epistemology), and I have to do a final presentation on a topic of my choice. Could someone here who knows the Diploma Programme help me brainstorm where to focus a 20-minute presentation on (some subset of) rationality?