I donated $40,000.00
I donated $10,000.
Indeed, though it doesn’t have to be a time loop, just a logical dependency. Your expected payoff is α[p^2+4(1-p)p] + (1-α)[p+4(1-p)]. Since you will make the same decision both times, the only coherent state is α=1/(p+1). Thus expected payoff is (8p-6p^2)/(p+1), whose maximum is at about p=0.53. What went wrong this time? Well, while this is what you should use to answer bets about your payoff (assuming such bets are offered independently at every intersection), it is not the quantity you should maximize: it double counts the path where you visit both X and Y, which involves two instances of the decision but pays off only once.
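(A quick numeric sanity check of those expressions, for anyone who'd rather run the algebra than follow it; the variable names and the payoff labels are mine, read off the bracketed terms above.)

    import numpy as np

    p = np.linspace(0.001, 0.999, 9999)
    alpha = 1 / (p + 1)                  # the "only coherent state" above
    at_x = p**2 + 4 * (1 - p) * p        # bracketed payoff given you're at X
    at_y = p + 4 * (1 - p)               # bracketed payoff given you're at Y
    weighted = alpha * at_x + (1 - alpha) * at_y

    # Same thing as the closed form (8p - 6p^2)/(p + 1), and it peaks near p = 0.53.
    assert np.allclose(weighted, (8 * p - 6 * p**2) / (p + 1))
    print(p[np.argmax(weighted)])        # ~0.527

    # The bracketed X term on its own (each path counted once) peaks at p = 2/3 instead.
    print(p[np.argmax(at_x)])            # ~0.667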
You can never see what changes other people have made since your last commit, because to get the changes, you have to do an update
svn diff -rBASE:HEAD
to see the changes since your last update.
svn diff -rHEAD
to diff your working tree against the repository.
Which does send the diffs over the web, and is inconveniently slow. (I’m not an svn user. I just agreed with the initial reaction of “that’s ridiculous”, and followed it up with “I bet there really is a way to do that” and looked at the manpage.)
I value my free time far too much to work for a living. So your model is correct on that count. I had planned to be mostly unemployed with occasional freelance programming jobs, and generally keep costs down.
But then a couple years ago my hobby accidentally turned into a business, and it’s doing well. “Accidentally” because it started with companies contacting me and saying “We know you’re giving it away for free, but free isn’t good enough for us. We want to buy a bunch of copies.” And because my co-founder took charge of the negotiations and other non-programming bits, so it still feels like a hobby to me.
Both my non-motivation to work and my willingness to donate a large fraction of my income have a common cause, namely thinking of money in far-mode, i.e. not alieving The Unit of Caring on either side of the scale.
I’m not sure what “arbitrary” means here. You don’t seem to be using it in the sense that all preferences are arbitrary.
a story in which a logician tries to argue me down a slippery slope of moral nihilism
If the nihilist makes a sufficiently circuitous argument, they can ensure that there’s no step you can point to that’s very wrong. But by doing so, they will make slight approximations in many places. Each such step loses an incremental amount of logical justification, and if you add up all the approximations, you’ll find that they’ve approximated away any correlation with the premises. You don’t need to avoid following the argument too far, if you appropriately increase your error bars at each step.
In short: “similar” is not a transitive relation.
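(A toy version of the “add up all the approximations” point, with made-up numbers and the simplifying assumption that each step of the chain independently preserves its conclusion:)

    r = 0.98                       # each individual step looks almost airtight
    for n in (1, 10, 50, 200):
        print(n, "steps:", round(r ** n, 3))
    # 1 steps: 0.98
    # 10 steps: 0.817
    # 50 steps: 0.364
    # 200 steps: 0.018  <- no single step is "very wrong", but the chain's support evaporates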
Why should you not have preferences about something just because you can’t observe it? Do you also not care whether an intergalactic colony-ship survives its journey, if the colony will be beyond the cosmological horizon?
Well, we don’t want to build conscious AIs, so of course we don’t want them to use anthropic reasoning.
Why is anthropic reasoning related to consciousness at all? Couldn’t any kind of Bayesian reasoning system update on the observation of its own existence (assuming such updates are a good idea in the first place)?
you’re too young (and didn’t have much income before anyway) to have significant savings.
Err, I haven’t yet earned as much from the lazy entrepreneur route as I would have if I had taken a standard programming job for the past 7 years (though I’ll pass that point within a few months at the current rate). So don’t go blaming my cohort’s age if they haven’t saved and/or donated as much as me. I’m with Rain in spluttering at how people can have an income and not have money.
In fact, the question itself seems superficially similar to the halting problem, where “running off the rails” is the analogue for “halting”
If you want to draw an analogy to halting, then what that analogy actually says is: There are lots of programs that provably halt, and lots that provably don’t halt, and lots that aren’t provable either way. The impossibility of the halting problem is irrelevant, because we don’t need a fully general classifier that works for every possible program. We only need to find a single program that provably has behavior X (for some well-chosen value of X).
If you’re postulating that there are some possible friendly behaviors, and some possible programs with those behaviors, but that they’re all in the unprovable category, then you’re postulating that friendliness is dissimilar to the halting problem in that respect.
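(A throwaway illustration of that asymmetry, mine rather than anything from the paper: classifying particular programs is often trivial even though the general classifier can't exist.)

    def provably_halts():
        # Bounded loop over a fixed range: halting is a one-line proof.
        total = 0
        for i in range(10):
            total += i
        return total

    def provably_never_halts():
        # No exit and no state change in the body, so it provably never halts.
        while True:
            pass

    # What the halting theorem forbids is only a single total function
    #     halts(program, input) -> bool
    # that is correct for *every* program; it says nothing about proving
    # behavior X for some particular, well-chosen program.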
Draw a gradient rather than a line. You don’t need sharp boundaries between categories if the output of your judgment is quantitative rather than boolean. You can assign similar values to similar cases, and dissimilar values to dissimilar cases.
See also The Fallacy of Gray. Now you’re obviously not falling for the one-color view, but that post also talks about what to do instead of staying with black-and-white.
It takes O(n) memory units just to store a list of size n. Why should computers have asymptotically more memory units than processing units? You don’t get to assume an infinitely parallel computer, but O(n)-parallel is only reasonable.
My first impression of the paper is: We can already do this, it’s called an FPGA, and the reason we don’t use them everywhere is that they’re hard to program for.
“Counterfactuals.” Fourth thing on the bulleted list, straight outta Kant.
Any talk about consequences has to involve some counterfactual. Saying “outcome Y was a consequence of act X” is an assertion about the counterfactual worlds in which X isn’t chosen, as well as those where it is. So if you construct your counterfactuals using something other than causal decision theory, and you choose an act (now) based on its consequences (in the past), is that another overlap between consequentialism and deontology?
Good point. So do the native speakers of such languages not make this mistake?
You have some reason to believe that Klinefelter’s syndrome (XXY) is less common among xkcd readers than among the general population?
Anecdote: I read sci-fi as a kid, learned of the concept of cryonics, thought it was a good idea if it worked… and then it never occurred to me to research whether it was a real technology. Surely I would have heard of it if it was?
Then years later I ran into a mention on OvercomingBias and signed up pretty much immediately.
The US government made Tor? Awesome. I wonder which part of the government did it.
There’s nothing inherently wrong with data dredging. Considering all possible hypotheses and keeping the ones suggested by the data is just Solomonoff induction. It only becomes problematic if you don’t have a consistent prior, e.g. if you keep the hypothesis with the greatest likelihood ratio rather than the greatest posterior.
Hypothesis-driven has its place in the human practice of science, because humans have a hard time computing a prior after having seen the data. But that’s a problem with the humans, not with the math.
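(A toy illustration of that last clause, with made-up priors and likelihoods: a dredged-up hypothesis can win the likelihood-ratio contest by a mile and still lose on posterior, because the prior it started with is correspondingly tiny.)

    # Made-up numbers, purely for illustration: (prior, likelihood of the observed data).
    hypotheses = {
        "null (pure noise)":          (0.50, 1e-3),
        "simple effect":              (0.45, 2e-3),
        "dredged 37-way interaction": (1e-6, 5e-1),   # best fit by far, but absurdly complex
    }

    evidence = sum(prior * lik for prior, lik in hypotheses.values())
    for name, (prior, lik) in hypotheses.items():
        print(f"{name:28s} likelihood={lik:.0e}  posterior={prior * lik / evidence:.4f}")
    # The dredged hypothesis has the largest likelihood but a posterior of ~0.0004;
    # keeping the greatest posterior, not the greatest likelihood ratio, is what
    # keeps the procedure consistent.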
I donated $20,000 now, in addition to $110,000 earlier this year.