I can’t make this one. Sorry to bail at the last minute. —Paul Hobbs
This is really useful; thanks! I’ve been using Anki for a little over a year now, and I’ve found it very useful for classes and for learning programming. I really like this application, and I’d love to see any more decks that you happen to make. I’ll definitely start my own next time I go back and read through the archives.
Yes. We just aren’t socially condemned for it.
Utilitarianism to the rescue, then.
I don’t see why they should be more valuable. From a selfish perspective, it might feel worse to lose someone you know, but from a charitable perspective, I don’t value someone merely because I am familiar with them.
You’re avoiding the question. What if a penny were automatically paid on your behalf each time, in the future, to keep a dust speck from floating into your eye? The question is whether the dust speck is worth at least a penny of disutility. For me, I would say yes.
So because there is a continuum between the right answer (lots of torture) and the wrong answer (3^^^3 horribly blinded people), you would rather blind those people?
So if someone would pay a penny, they should pick torture if it were 3^^^^3 people getting dust specks, which casts doubt on whether they understood 3^^^3 in the first place.
People are being tortured, and it wouldn’t take too much money to prevent some of it. Obviously, there is already a price on torture.
A priori, as intelligent beings, we should expect the universe at our scale to be immensely complex, since it produced us. I don’t view our inability to fully explain phenomena at our scale as unreasonable non-effectiveness.
Is that really a bias? The fact that they are allied or not with you is some information about what they are likely to do.
Safest, but maybe not the only safe way?
Why not make a recursively self-improving AI, in some strongly typed language, that provably can only interact with the world by printing names of stocks to buy?
How about one that can only make blueprints for star ships?
I suspect the answer is “making as much money as I possibly can”, and he’s doing much better than all of us. He can convert that to other forms of value later.
Would you pay one cent to prevent one googolplex of people from having a momentary eye irritation?
Torture can be put on a money scale as well: many countries use torture in war, but we don’t spend huge amounts of money publicizing and shaming them (which would reduce the amount of torture in the world).
In order to maximize the benefit of spending money, you must weigh sacred against unsacred.
So you wouldn’t pay one cent to prevent 3^^^3 people from getting a dust speck in their eye?
Want to put a time scale on that?
Rational, yes, if other people know of the decision. If you never find out the result of the gamble, are not held responsible, and have your memory wiped, then all confounding interests are removed except the desire for people not to die. Only then are the irrational options actually irrational.
Thank you so much for posting this! I use Anki a lot, and your Mysterious Questions deck has been a great help =]
Normal flashcards should all be equally difficult: as easy as possible. The idea is to break everything down into atomic facts. This way you can’t short-circuit a difficult card by just memorizing the answer; by memorizing all the parts, you still get the whole.
If you really want to drill one sub-deck, you can choose “cram mode” and select the tag of the cards you want to review.
I don’t use Anki for languages, but to learn verb conjugations I would make many example sentences with a “…” where the verb should go. You could ask on #anki or the Google group. Here’s a good article on how to make effective flashcards from Piotr Wozniak, the inventor of the spaced-repetition algorithm.
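For the curious: the spaced-repetition algorithm Wozniak published (SM-2, which Anki’s scheduler is loosely based on) is simple enough to sketch in a few lines. This is a rough illustration, not Anki’s actual implementation; the function name and the simplified bookkeeping are my own.

```python
def sm2_review(quality, repetitions, ease, interval):
    """One review step of a simplified SM-2 schedule.

    quality:     self-graded recall, 0 (blackout) to 5 (perfect)
    repetitions: consecutive successful reviews so far
    ease:        per-card ease factor (starts at 2.5)
    interval:    current interval in days
    Returns the updated (repetitions, ease, interval).
    """
    if quality < 3:
        # Failed recall: restart the repetition sequence, see the card tomorrow.
        return 0, ease, 1
    # Update the ease factor; SM-2 floors it at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    repetitions += 1
    if repetitions == 1:
        interval = 1        # first success: 1 day
    elif repetitions == 2:
        interval = 6        # second success: 6 days
    else:
        interval = round(interval * ease)  # afterwards: grow geometrically
    return repetitions, ease, interval
```

Three perfect reviews of a fresh card (ease 2.5) give intervals of 1, 6, and then roughly 17 days; easy cards drift toward longer gaps, hard cards toward shorter ones.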
Unconventional decks, like Anki cards for a whole piano piece or a textbook problem, might work, but I haven’t tried them… yet. I’ll be experimenting with those this coming semester.
Thank you for this.