Random changes can be useful. Human minds are not good at being creative and exploring solution space. They can’t give “random” numbers, and will tend to round ideas they have towards the nearest cached pattern. The occasional jolt of randomness can lead to unexplored sections of solution space.
ThrustVectoring
It’s been stuck, but I’ve barely been putting effort into it. I’ve been working much more on minimizing mouse usage—vim for text editing, Firefox with Pentadactyl for web browsing, and bash for many computing and programming tasks.
The low-hanging fruit is definitely not in getting better at stenographic typing—since I started working as a professional software developer, there’s been much more computer operation than English text entry. I’d have to figure out a really solid way of switching seamlessly between Vim’s normal mode and stenographic typing in insert mode. And adapting stenographic typing to writing code in addition to English would take configuration and exploratory learning that I’m nowhere near capable of. It’s likely still my best option for getting super solid at writing English text, but it’s simply lower priority at the moment than other tools.
Daydreaming is nice.
Because I can’t talk about what makes it awesome without spoiling it, and I forgot that rot13 is a thing.
Warning: massive spoilers below
Fpvba, gur ynfg yvivat tbqyvxr nyvra erfcbafvoyr sbe cnenuhzna cbjref, vf svtugvat Rvqbyba naq Tynvfgnt Hynvar. Rvqbyba vf bar bs gur zbfg cbjreshy pncrf, n uvtu yriry Gehzc—uvf cbjre tvirf uvz gur guerr cbjref gung ur arrqf. Uvf cbjre jnf jrnxravat bire gvzr, naq ur erpragyl svkrq vg, naq vf gnxvat gur bssrafvir gb Fpvba.
Sbe onpxtebhaq, gurer unir orra n frevrf bs pvgl-qrfgeblvat zbafgref pnyyrq “Raqoevatref”. Gurl fubjrq hc nsgre Rvqbyba chg Tynvfgnt Hynvar vagb gur Oveqpntr, n fhcrecevfba sbe cnenuhznaf. Gurl’ir xvyyrq pbhagyrff pncrf naq jerpxrq n gba bs guvatf—Yrivnguna pbagebyf jngre naq fnax Xlhfuh, naq fpnevarff yriryf tb hc sebz gurer.
Abj, Rvqbyba unf fgnegrq npghnyyl cerffhevat Fpvba fbzr, fb Fpvba qrpvqrf gb hfr na rkcrafvir cbjre—gur novyvgl gb svther bhg jung ur arrqf gb qb va beqre gb jva. Vg gheaf bhg gung gur npgvba vf gb fgbc naq fnl sbhe jbeqf gb hggreyl gnxr gur svtug bhg bs Rvqbyba, naq gura oynfg uvz nf ur cebprffrf vg. Naq bire gur ynfg guvegl lrnef, Fpvba unf fnvq V guvax 2 jbeqf gbgny, znlor bar.
“Lbh arrqrq jbegul bccbaragf”. Nyy gur crbcyr gung qvrq, nyy gur fnpevsvprf lbh naq lbhe sevraqf znqr—nyy gung unccrarq orpnhfr lbh arrqrq gb cebir lbhefrys, lbh arrqrq fbzrguvat gb svtug ntnvafg, fbzrguvat gb tvir lbh checbfr. Nyy orpnhfr lbh pbhyqa’g qrny jvgu gur checbfryrffarff bs abg univat fbzrguvat gb svtug ntnvafg. Naq fb, lbh tbg gur Raqoevatref—Orurzbgu, Yrivnguna, gur Fvzhetu, Xubafh, Obuh naq Gbuh.
Naq vs lbh qba’g ernq gur pbzzragf, vg’f rnfl gb zvff bhg ba ubj Rvqbyba gnxrf gubfr jbeqf.
There’s a four-word chapter in Worm. If you read only one chapter’s comment pages, read that one’s.
Deciding to play slot machines is not a choice people make because they think it will net them money, it’s a choice they make because they think it will be fun.
Update: I’m at pretty much the same place now as I was then. Dropped the keto diet since I was happy with where I was. Still fairly active but not hardcore about it.
They’d be better off using a shared algorithm when they’re in a situation with other cars that reason in a similar fashion.
Plover is another option. I spent a month or so learning it and got to about 50 WPM, while those with a lot more practice can get 200 WPM. It’s on hold indefinitely, though.
“Control” in general is not particularly well defined as a yes/no proposition. You can likely define an agent’s control of a resource rigorously by looking at the expected states of that resource, conditional on the various decisions the agent could make.
That kind of definition works for measuring how much control you have over your own body: given that you decide to raise your hand, how likely is your hand to go up, compared to when you decide not to raise it? Invalids and inmates have much less control over their bodies, which is pretty much what you’d expect out of a reasonable definition of control over resources.
This is still a very hand-wavy definition, but I hope it helps.
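As a rough sketch of how that definition could be operationalized: measure how far apart the outcome distributions are under the different decisions. All the outcomes and probabilities below are invented for illustration.

```python
def control(outcome_dist_by_decision):
    """Crude measure of control: total variation distance between the
    outcome distributions induced by the agent's two decisions.
    0 means the decision doesn't move the resource at all;
    1 means the decision fully determines it."""
    p, q = outcome_dist_by_decision
    return 0.5 * sum(abs(p[o] - q[o]) for o in p)

# A healthy person: deciding to raise their hand almost always raises it.
healthy = ({"raised": 0.99, "lowered": 0.01},   # P(outcome | decide to raise)
           {"raised": 0.01, "lowered": 0.99})   # P(outcome | decide not to)

# A paralyzed person: the decision barely changes the outcome.
paralyzed = ({"raised": 0.02, "lowered": 0.98},
             {"raised": 0.01, "lowered": 0.99})

print(control(healthy))    # close to 1
print(control(paralyzed))  # close to 0
```

The same shape works for any resource, not just your body: condition the resource’s state on each decision you could make and see how much the distribution moves.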
I’m a current student who started two weeks ago on Monday. I’d be happy to talk as well.
Dollars already have value. You need to give them to the US government in order to produce valuable goods and services inside the United States. That’s all there is to it, really—if someone wants to make #{product} in a US plant, they now owe US dollars to the government, which they need to acquire by selling #{product}. So if you have US dollars, you can buy things like #{product}.
That’s the concise answer.
The real danger of the “win-more” concept is that it’s only barely different from making choices that turn an advantage into a win. You’re often in a position where you’re somehow ahead, but your opponent has ways to get back into the game. They don’t have them yet—you wouldn’t be winning if they did—but the longer the game goes on, the more chances they have to find them.
For a personal example from a couple years ago, playing Magic in the Legacy format, I once went up against a Reanimator deck with my mono-blue control deck. The start was fairly typical—Reanimator trying to resolve a gigantic threat to win, while I played many counterspells and hit him with some Vendilion Clique beats. My opponent ended up getting an Iona out (naming blue, obviously), but went down to exactly one life to do so. This was very, very awkward for him, since he couldn’t attack with the Iona, activate fetchlands, or use the alternate cost of Force of Will. But, I had outs—Powder Keg (7 copies of keg/ratchet bomb) and waiting 9 turns, or Vedalken Shackles (3 copies). So I stayed in, and got as many draw phases as I could, and lucked out with a Shackles topdeck, followed by being able to play blue spells and winning the game.
Anyhow, my point is that cards that help you only when you’re winning can turn wins into losses. Your opponents can have outs, and it’s a good idea to take those outs away. If you don’t, then sometimes your opponent will pull exactly what they need to do something ridiculous—say, dealing with a card that keeps them from playing 28 of their 38 spells, and seven of the ten spells they can play take 9 turns to do anything about it.
“Win-more” is definitely the wrong word to describe this concept. I think a better choice is calling it a “close-out” or “finishing” card. The point of these is to make sure that you win when you have an advantage. It also tells you that you don’t want too many of these—many decks run just one or two copies. Dredge, for instance, runs a single Flayer to turn having their deck in their graveyard into a win. My mono-blue control deck ran two Sphinx of Jwar Isle (there were essentially zero answers to him in the meta, and I’ve stolen games with him; one copy would have been an Aetherling had it been printed at the time).
Replacing a card with a finisher means that you’ll get ahead less often, but win more of the games where you are ahead. Sometimes the right number of finishers is one: when Dredge has a lead, it’s got access to all or most of the cards in its deck. Sometimes it’s more—my mono-blue deck would run between 2 and 6, depending on how I felt about Jace at the time. Often it’s zero, and your game plan is to win with the cards that got you ahead in the first place.
I read a comment in this thread by Armok_GoB, and it reminded me of some machine-learning angles you could take on this problem. Forgive me if I make a fool of myself on this—I’m fairly rusty. Here’s my first guess as to how I’d solve the following:
open problem: the tradeoff of searching for an exact solution versus having a good approximation
Take a bunch of proven statements, and look at half of them. Generate a bunch of possible heuristics, and score them based on how well they predict the other half of the proven statements given the first half as proven. Keep track of how long it takes to apply a heuristic. Use the weighted combination of heuristics that worked best on known data, given various run-time constraints.
With a table of heuristic combinations and their historical effectiveness and computational time, and the expected value of having accurate information, you can quickly compute the expected value of running the heuristics. Then compare it against the expected computation time to see if it’s worth running.
Finally, you can update the heuristics themselves whenever you decide to add more proofs. You can also check short run-time heuristics with longer run-time ones. Things that work better than you expected, you should expect to work better.
Oh, and the value-of-information calculation I mentioned earlier can be used to pick up some cheap computational cycles as well—if it turns out that knowing whether or not the billionth digit of pi is “3” is worth only $3.50, you can simply decide not to care about that question.
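A minimal sketch of the train/test scoring loop described above. The “proven statements”, the candidate heuristics, and the scoring rule here are all stand-ins I made up—real theorem data would replace the divisibility toy problem.

```python
import math
import random
import time

# Stand-in "proven statements": numbers, with truth = "is divisible by 3".
statements = [(n, n % 3 == 0) for n in range(200)]
random.seed(0)
random.shuffle(statements)
train, test = statements[:100], statements[100:]

# Candidate heuristics: each maps a statement to P(true).
# A fuller version would fit weighted combinations of these on `train`.
heuristics = {
    "always_half": lambda n: 0.5,
    "last_digit":  lambda n: 0.8 if n % 10 in (0, 3, 6, 9) else 0.2,
    "digit_sum":   lambda n: 0.95 if sum(map(int, str(n))) % 3 == 0 else 0.05,
}

def log_loss(h, data):
    # Log scoring rule: confident wrong answers are punished hard.
    return -sum(math.log(h(n) if truth else 1 - h(n)) for n, truth in data) / len(data)

scored = []
for name, h in heuristics.items():
    start = time.perf_counter()
    loss = log_loss(h, test)
    elapsed = time.perf_counter() - start   # track the cost of running it
    scored.append((loss, elapsed, name))

for loss, elapsed, name in sorted(scored):
    print(f"{name}: loss={loss:.3f}")
```

The `(loss, elapsed)` pairs are exactly the table of historical effectiveness and computational time that the value-of-information comparison would consult.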
And to be rigorous, here are the hand-waving parts of this plan:
Generate heuristics. How? I mean, you could simply enumerate every program that takes a list of proofs, starting at the simplest, and start checking them. That seems very inefficient, though. There may be machine learning techniques for this that I simply have not been exposed to.
Given a list of heuristics, how do you determine how well they work? I’m pretty sure this is a known, solved problem, but I can’t remember the exact solution. If I remember right, it’s something along the lines of a logarithmic scoring rule, where getting something wrong costs more penalty points the more certain you were about it.
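I believe the rule being gestured at here is the log score; a tiny illustration (the confidence numbers are invented):

```python
import math

def log_score(p_assigned_to_truth):
    # Penalty based on the probability you assigned to what actually happened.
    # Unbounded as confidence in a wrong answer approaches 1.
    return -math.log(p_assigned_to_truth)

print(log_score(0.4))    # wrong, mildly confident -> modest penalty
print(log_score(0.01))   # wrong, very confident   -> large penalty
print(log_score(0.99))   # right and confident     -> tiny penalty
```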
Given a list of heuristics, how do you find the best weighted combination under a run-time constraint? This is a gigantic mess of linear algebra.
And another problem with it that I just found is that there’s no room for meta-heuristics. If the proofs come in two distinguishable groups that are separately amenable to two different heuristics, then it’s a really good idea to separate out those two groups and apply the better approach to each. My approach seems likely to miss this sort of insight.
Yeah, it wasn’t there when I posted the above. The “donate to the top charity on GiveWell” plan is a very good example of what I was talking about.
There are timeless decision theory and coordination-without-communication issues that make diversifying your charitable contributions worthwhile.
In short, you’re not just allocating your money when you make a contribution, but you’re also choosing which strategy to use for everyone who’s thinking sufficiently like you are. If the optimal overall distribution is a mix of funding different charities (say, because any specific charity has only so much low-hanging fruit that it can access), then the optimal personal donation can be mixed.
You can model this by a function that maps your charitable giving to society’s charitable giving after you make your choice, but it’s not at all clear what this function should look like. It’s not simply tacking on your contribution, since your choice isn’t made in a vacuum.
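One way to see why the optimal overall distribution can be mixed: with diminishing returns, the best split of a fixed budget funds several charities, not just the one whose first dollar is most effective. A toy example—the square-root impact functions and the numbers are invented:

```python
import math

# Two charities with diminishing returns: impact = scale * sqrt(dollars).
# Charity A is more effective per early dollar, but its low-hanging
# fruit runs out as funding grows.
def impact(a_dollars, b_dollars):
    return 10 * math.sqrt(a_dollars) + 6 * math.sqrt(b_dollars)

budget = 1000  # total given by everyone following your strategy

# Check every split of the budget in $10 steps.
best = max(range(0, budget + 1, 10), key=lambda a: impact(a, budget - a))
print(best, budget - best)  # the optimum funds both charities
```

So if your donation decision effectively sets the strategy for everyone reasoning like you, the strategy you should pick can itself be a split, even though any single marginal dollar looks like it has a unique best home.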
There is a huge amount of risk involved in retiring early. You’re essentially betting that you aren’t going to find any fun, useful, enjoyable, or otherwise worthwhile uses for money. You’re betting that whatever resources you have at retirement are going to be enough, and you’re making that bet at the exchange rate between your current earning power and whatever earning power you’d have left after the retirement decision.
Standard beliefs are only more likely to be correct when the cause of their standardness is causally linked to their correctness.
That takes care of things like, say, pro-American patriotism and pro-Christian religious fervor. Specifically, these ideas are standard not because contrary views are wrong, but because expressing contrary views makes you lose status in the eyes of a powerful in-group. Furthermore, it does not exclude beliefs like “classical physics is an almost entirely accurate description of the world at a macro scale”—inaccurate models would contradict observations of the world and get replaced with more accurate ones.
Granted, standard opinions often are standard because they are right. But, the more you can separate out the standard beliefs into ones with stronger and weaker links to correctness, the more this effect shows up in the former and not the latter.
To determine whether my view is contrarian, I ask whether there’s a fairly obvious, relatively trustworthy expert population on the issue.
I think that’s on the same page as my initial thoughts on the matter. At least, it is a useful heuristic that applies more to correct standard beliefs than incorrect ones.
Taking source code from a boxed AI and using it elsewhere is equivalent to partially letting it out of the box—especially if how the AI works is not particularly well understood.
Having it publicly available definitely has huge costs and tradeoffs. This is particularly true when you’re worried about the processes you want to encourage getting stuck as a fixed doctrine—this is essentially why John Boyd preferred presentations over manuals when running his reform movement in the US military.