Elo
I did not believe that life could be this much fun or that I could possibly achieve such a sustained level of happiness.
you didn’t explain how meetups = sustained level of happiness...
or did I miss something?
WHEN: 23 April 2014 07:30:00PM (+1100)
6:30 PM for early discussion; 7:00 PM start.
For those wondering how the time can be both 7:30 and 6:30: if you live here, daylight saving explains it; if you do not (or are a computer), sometimes it does not.
Mega-meat-space meetups for all!
to quote http://lesswrong.com/lw/4ul/less_wrong_nyc_case_study_of_a_successful/
I did not believe that life could be this much fun or that I could possibly achieve such a sustained level of happiness.
Can vouch for the minimal abuse of the system. Maybe it was only having 25 exclusive people at camp, or maybe no one was ready to abuse the system. I would do stickers the same way again (and advise others to try it out too).
Software design: if you are using a logic test, check on either side of the boundary, and also with random answers.
Is X > 5? If X is:
4: no
5: no
6: yes
5.00001: no
5.999999: yes
−1: error
“tomato”: error
This taught me to always double-check that the hypothesis is not just a good fit, but a good-enough fit for the purpose. If you never encounter a tomato, or decimals, or negative numbers, then the test works fine. If you expect occasional tomatoes, and your test is looking for a positive integer, maybe it's time for a new test.
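A minimal sketch of the idea in Python. `is_greater_than_five` is a hypothetical buggy test that rounds its input first, so it agrees with the truth on whole numbers but misfires on decimals near the boundary:

```python
def is_greater_than_five(x):
    # Buggy "X > 5" test: rounding first makes it correct for whole
    # numbers but wrong for decimals close to the boundary.
    return round(x) > 5

# Probe both sides of the boundary, plus inputs the test never expected.
for x in [4, 5, 6, 5.00001, 5.999999, -1, "tomato"]:
    try:
        print(x, "->", is_greater_than_five(x))
    except TypeError:
        print(x, "-> error")
```

Here 5.00001 comes back "no" and 5.999999 comes back "yes", matching the table above, and "tomato" raises an error: the test is a good fit for integers and a bad fit the moment the inputs drift.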
What about when choice inflicts the problem of the multi-armed bandit on us: multi-armed bandit on Wikipedia.
With more options you need to explore each of them (some amount) to avoid missing out on rewards, and you might not always know whether Y is less than X, even when told specifically that Y < X.
Which is to say: someone behaving with applied rationality should occasionally explore choices to avoid missing rewards. Because of that, when extra choices come up they create a burden of exploration, and that exploration is taxing on resources (even for options that are never chosen).
Isn’t that a clearer description for why extra choice can be harmful?
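That exploration burden can be made concrete with a sketch of the standard epsilon-greedy bandit strategy. The arm payoffs, step count, and epsilon value below are all made up for illustration:

```python
import random

def epsilon_greedy(true_means, steps=10000, epsilon=0.1):
    """Pull arms for `steps` rounds; return the average reward earned.

    true_means: the real mean payoff of each arm (hidden from the agent).
    With probability epsilon we explore a random arm; otherwise we
    exploit the arm with the best running estimate.
    """
    n = len(true_means)
    counts = [0] * n
    estimates = [0.0] * n
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n)  # explore: try anything
        else:
            arm = max(range(n), key=lambda i: estimates[i])  # exploit
        reward = true_means[arm] + random.gauss(0, 1)  # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps
```

With two arms the exploration tax is small; with twenty, a tenth of your pulls go to arms you already suspect are worse. That cost is paid even for options you never end up preferring, which is exactly the burden extra choices impose.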
Does this just mean that marginal utility is non-linear at the minima and maxima?
While the change from zero control over the supply chain of any significantly complicated product (e.g. a computer) up to fractional control may impart a high initial utility (e.g. I make all the mice, so everyone needs to come to me for their mice), gathering further control (e.g. I also make all the keyboards, so everyone also comes to me for keyboards) gives a much smaller utility increase. The same goes for screens, motherboards, RAM, and the N pieces required to create a computer, up until the last several, where control of the final pieces will give you the status of computer-master-overlord like none before you...
Come to think of it: consider resources below the threshold for high-level production automation. For example, wool. One sheep may produce between 5 and 10 kg of wool. In the hands of any single person the wool has a certain low-level utility, but once one person amasses enough of the resource for a production line to make use of it, the utility increases: we can get yarn and socks at an efficiency that no small amount of the resource could provide.
Where 1 kg of coal provides little utility to anyone but Santa, having enough coal to run a power station is quite high utility in comparison to making many children sad...
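A minimal sketch of that threshold effect, with entirely made-up numbers (the 1000 kg threshold and the per-kg values are illustrative, not real figures for wool):

```python
def wool_utility(kg):
    # Hypothetical threshold utility with invented numbers:
    # below the amount a production line needs, each kg only has
    # hand-craft value; past the threshold, industrial use makes
    # each additional kg worth far more.
    THRESHOLD = 1000.0   # kg needed to run a production line (invented)
    HAND_VALUE = 1.0     # utility per kg when hand-spun (invented)
    LINE_VALUE = 10.0    # utility per kg once the line runs (invented)
    if kg < THRESHOLD:
        return HAND_VALUE * kg
    return HAND_VALUE * THRESHOLD + LINE_VALUE * (kg - THRESHOLD)
```

The point of the sketch: marginal utility here jumps upward at the threshold rather than diminishing smoothly, which is the non-linearity the question above is gesturing at.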
Start a Facebook group to generate interest; it seems to be the way to go. Maybe you can attract people travelling through Tokyo.
Please connect with LessWrong Sydney on Facebook. I'd like to talk to you about things.
I would be interested. I've been thinking through the value of $100 over time myself.
It's easy to say "don't shower" and "that's gotta be terrible", but it's (potentially hilarious) to have a guy running around town insisting people smell him because he smells great and hasn't showered in years! Demonstrating the success of the results, rather than the ickiness of not showering.
Goo diets are very strawmanned here. The steelman is the hungover guy who isn't sure what to eat: just eat Soylent. That grumpy "I can't decide what to eat" feeling, a person being angry and someone else going "just eat some Soylent!", and then everyone going out to dinner and talking about how delicious everything tastes.
Radical truth can be painful or helpful depending on where and when it happens. You just need a scene where someone gets woken up at 2am by radical-truth guy doing something annoying, yells at radical-truth guy for being annoying, and then radical-truth guy stops and says, "This is so great that you are taking it on board, but next time try less yelling."
I like those suggestions too...
CoZE is a good inclusion... people going out and doing un-fun things and then later being all, "I tried a rave once, and it turns out I don't like them. How about we go for pizza instead?"
Have a coffee table, "the lookup table", so that you can look things up and talk about what you learn when you discover new things. Or some way to put arguments on hold until you look them up...
I don't see why not... it just seems to be a variant on Soylent. I wonder if we could get funding via product placement...
I want to create a variant on that for rationality topics: not "giant sci-fi nerds living together" but "giant rationality geniuses living together".
And I want it to show rationality as a win-state, not a laugh-at-me state.
The Big Bang Theory had an alternative purpose. They did well. And maybe it's time we stepped up to the plate...
When I imagine the rest of the world outside the house, I bear in mind that no one is perfectly rational, and most people you find will still be on their journey of "getting better". I imagined my characters as only "most of the way there".
They may appear to be incongruously far along on their way to rationality (i.e. better in some areas than others). But such is the nature of the journey. For we are not all naturally born saints.
There is too much rationality to try to have it all there at once. It would have to be an episodic "inquiry" process where a particular notion or two is focussed on in any given episode. This episode focusses on the errors caused by a lack of Bayesian reasoning over simple tasks, and further on the failures of having too much of it... and eventually the advantages of Bayesian thinking win out over the disadvantages.
Take the trigger-action person (they rely on verbal triggers to do a whole bunch of actions, e.g. push-ups, smiles). Have them get into a fight over their trigger-action (oh no, sometimes the world is just tricky to navigate), but also have them succeed at whatever trigger-action they were trying to complete. Make the win-states outweigh the losses...
Journey to the win-states? With bad jokes along the way?
Do you have/know someone with experience in scriptwriting?
Factors not within a human's control: age, gender, intelligence, money, genetics.
Factors that are within a human's control: health, social activity, religiosity, relationship satisfaction, work satisfaction.
Factors that are sort of within human control: parenthood, physical attractiveness, love.
By this thinking, a lot of what creates happiness is within our control (or probable control).