You’re welcome! I’d love to hear a bit about how it helped, if you’re okay with sharing.
Pablo Repetto
[Question] Could Patent-Trolling delay AI timelines?
Notes from a conversation with Ing. Agr. Adriana Balzarini
Summary: “How to Read Books and Write Essays” by OSP’s Blue
Summary: “How to Do Research” by OSP’s Red
Hi! I wrote a summary, with some of my thoughts, in this post as part of an ongoing effort to stop sucking at researching stuff. This article was a big help, thank you!
Summary: “How to Write Quickly...” by John Wentworth
I’m glad you enjoyed it! I agree that more should be done. Just listing the specific search advice on the new table of contents would help a lot.
I’m gonna do the work, I promise. I’m just working up the nerve. Saying, in effect, “this experienced professional should have done his work better, let me show you how” is scary as balls.
Summary: “Internet Search tips” by Gwern Branwen
First of all: thank you for setting up the problem, I had lots of fun!
This one reminded me a lot of D&D.Sci 1, in that the main difficulty I encountered was the curse of dimensionality. The space had lots of dimensions so I was data-starved when considering complex hypotheses (performance of individual decks, for instance). Contrast with Voyages of the Grey Swan, where the main difficulty is that broad chunks of the data are explicitly censored.
I also noticed that I’m getting less out of active competitions than I was from the archived posts. I’m so concerned with trying to win that I don’t write about and share my process, which I believe is a big mistake. Carefully composed posts have helped me get my ideas in order, and I think they were far more interesting to observers. So I’ll step back from active competitions for a bit. I’ll probably do the research summaries I promised, “Monster Carcass Auction”, “Earwax” (maybe?), then come back to active competitions.
Thank you for doing the work of correcting this usage; precision in language matters.
I made some progress (right in the nick of time) by...
Massaging the data into a table of every deck we’ve seen, and whether the deck won or lost its match (the code is long and boring, so I’m skipping it here), then building the following machinery to quickly analyze restricted subsets of deck-space:
```python
from IPython.display import display  # display() comes from IPython/Jupyter

q = "1 <= dragon <= 6 and 1 <= lotus <= 6"
# Each card's correlation with winning, within the filtered subset:
display(decks.query(q).corr()["win"].drop("win").sort_values(ascending=False).plot.bar())
decks.query(q)["win"].agg(["mean", "sum", "count"])  # winrate, wins, decks kept
```
q filters us down to decks that obey the constraint. We then check each card’s correlation with winrate. Finally, we show how many decks were kept, and what the winrate actually is.
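(For anyone playing along: the massaging step is skipped above, but here’s a minimal sketch of the shape the table ends up in, using made-up raw data and hypothetical column names — the real data’s format surely differs.)

```python
import pandas as pd

# Hypothetical raw format: one row per match, with each player's card counts
# and a 0/1 win flag. Columns and card names here are invented for illustration.
matches = pd.DataFrame({
    "p1_dragon": [2, 0], "p1_lotus": [3, 1], "p1_win": [1, 0],
    "p2_dragon": [0, 4], "p2_lotus": [2, 0], "p2_win": [0, 1],
})

# One row per deck we've seen, with whether that deck won its match.
p1 = matches.filter(like="p1_").rename(columns=lambda c: c.removeprefix("p1_"))
p2 = matches.filter(like="p2_").rename(columns=lambda c: c.removeprefix("p2_"))
decks = pd.concat([p1, p2], ignore_index=True)
```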
q can be pretty complicated, with expressiveness limits defined by `pd.DataFrame.query`. A few things that work:

`(angel + lotus) == 0`
`1 <= dragon and 1 <= lotus and 4 <= (dragon + lotus)`
`1 <= dragon and lotus == 0`
`(pirate - 1) <= sword <= (pirate + 1)`
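For instance, one of those queries run against a decks table shaped like the sketch above:

```python
q = "1 <= dragon and lotus == 0"
sub = decks.query(q)
print(len(sub), sub["win"].mean())  # decks kept by the filter, and their winrate
```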
My deck submission (PvE and PvP) is:
4 angels, 3 lotuses, 3 pirates, 2 swords
See response to Ben Pace for counterpoints.
My counterpoints, in broad order of importance:
If you lie to people, they should trust you less: observing you lying should reduce their confidence in your statements. However, there is nothing in the fundamental rules of the universe that says people notice when they are deceived, even after the fact, or that their trust will drop by any particular amount. Believing, without further justification, that lying, or even being caught lying, will result in a total collapse of confidence is falling for the just-world fallacy. (A toy Bayes update after this list makes the point concrete.)
If you saw a man lying to his child about the death of the family dog, you wouldn’t (hopefully) immediately refuse to ever have business dealings with such a deceptive, amoral individual. Believing that all lies are equivalent, or that lie frequency does not matter, is to fall for the fallacy of grey.
“Unethical” and “deceptive” are different things. See HPMOR ch. 51 for hpmor!Harry agreeing to lie for moral reasons. See also counterarguments to Kant’s Categorical Imperative (which holds that lying is always wrong: literally never lie).
The point about information theory stands.
Note that “importance” can be broadly construed as “relevance to the practical question of lying to actual people in real life”. This is why the information-theoretic argument ranks so low.
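To put toy numbers on the first counterpoint (every figure here is made up for illustration):

```python
# Prior: this person is probably basically honest.
p_honest = 0.9
p_lie_given_honest = 0.05    # even honest people lie occasionally
p_lie_given_dishonest = 0.5  # dishonest people lie a lot

# P(honest | caught them in one lie), by Bayes' rule:
posterior = (p_lie_given_honest * p_honest) / (
    p_lie_given_honest * p_honest + p_lie_given_dishonest * (1 - p_honest))
print(posterior)  # ~0.47: a real hit to trust, not a total collapse
```

One observed lie takes trust from 0.9 to roughly 0.47: a serious update, nowhere near zero.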
If good people were liars, that would render the words of good people meaningless as information-theoretic signals, and destroy the ability for good people to coordinate with others or among themselves.
My mental Harry is making a noise. It goes something like Pfwah! Interrogating him a bit more, he seems to think that this argument is a gross mischaracterization of the claims of information theory. If you mostly tell the truth, and people can tell this is the case, then your words convey information in the information-theoretic sense.
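One toy way to make mental-Harry’s point precise (my framing: model a speaker who lies with probability p as a binary symmetric channel over true/false statements):

```python
import numpy as np

def binary_entropy(p):
    """Entropy of a coin with bias p, in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Bits conveyed per statement (assuming uniformly distributed truths):
# capacity is 1 - H(p), which hits zero only at p = 0.5.
for lie_rate in [0.0, 0.1, 0.3, 0.5]:
    print(lie_rate, 1 - binary_entropy(lie_rate))
```

A 10% liar still conveys about 0.53 bits per statement; words only become pure noise when lies are as frequent as truths and listeners can’t tell which is which.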
EDIT: Now I’m thinking about how to characterize “information” in problems where one agent is trying to deceive another. If A successfully deceives B, what is the “information gain” for B? He thinks he knows more about the world; does this mean that information gain cannot be measured from the inside?
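Here’s a minimal sketch of one way to pose that question, with made-up numbers: compare B’s inside-view entropy drop against his surprisal at the actual state of the world.

```python
import numpy as np

def entropy(q):
    """Shannon entropy in bits."""
    q = q[q > 0]
    return -(q * np.log2(q)).sum()

prior = np.array([0.5, 0.5])        # B's belief over two world-states, before the lie
posterior = np.array([0.05, 0.95])  # after being deceived: confident in the wrong state
true_state = 0                      # what reality actually is

print(entropy(prior) - entropy(posterior))   # ~0.71 bits: from inside, B feels informed
print(-np.log2(prior[true_state]))           # 1.0 bit of surprisal at the truth before...
print(-np.log2(posterior[true_state]))       # ...~4.3 bits after: B is actually worse off
```

B’s subjective “gain” is positive while his calibration against reality gets worse, which suggests that information gain in this sense can’t be measured purely from the inside.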
Hallelujah
Followed up on this idea and noticed that
A table of winrate as a function of the number of “evil” cards and “item” cards shows that item cards only benefit evil decks. I counted dragon, emperor, hooligan, minotaur, and pirate as evil. (A sketch of the tabulation follows the list below.)
“No items, all good” wins 55.6%
“4 items, no good” wins 58.4%
“4+ items, all good” peaks at a 37.0% winrate, and drops as more items are added
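A minimal sketch of how that tabulation might go, reusing the decks table from the other comment; the scenario doesn’t say which cards count as items, so ITEMS below is a guess:

```python
EVIL = ["dragon", "emperor", "hooligan", "minotaur", "pirate"]
ITEMS = ["sword", "lotus"]  # assumption: my guess at which cards are items

# Winrate as a function of evil-card count and item-card count.
table = decks.assign(
    evil=decks[EVIL].sum(axis=1),
    items=decks[ITEMS].sum(axis=1),
).pivot_table(index="evil", columns="items", values="win", aggfunc="mean")
print(table)
```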
Sorry in advance for an entirely-too-serious comment on a lighthearted post; it gave me thoughts I thought worth sharing. The whole “Karma convertibility” system is funny, but the irony feels slightly off. Society (vague term alert!) does in fact reward popular content with money, and Goodhart’s law is not “monetizing an economy instantly crashes it”. My objections to Karma convertibility are:
Exploitability. Put enough people on a system, and some of them will try to break it. Put in even more, and they will likely succeed.
Inefficiency. There is a big upfront cost to generating good ideas. I.e., you’ve got to invest in putting interesting things into your mind before you can make up your own. Specialization pays off just as much in intellectual labor as in other areas.
Pareto’s law. A small, selected group can be a consistent performance outlier.
Marginal returns on money. The marginal value of money goes down dramatically when you get above the subsistence threshold.
I think that providing a salaried position to a small number of intrinsically motivated individuals is a much more efficient way of buying ideas. I think RAND is basically structured this way?
Epistemic status: Don’t quote me on any of this. I’ve done no research, instead I’m pattern-matching from stuff that I’ve passively absorbed.
I see… so trolling by patenting something akin to convolutional neural networks wouldn’t work because you can’t tell what’s powering a service unless the company building it tells you.
Maybe something along the lines of “a service that does automatic text translation” or “a car that drives itself” (obviously not these exact ones, since a patent with so much prior art would never be granted) would be something you could fight over?