Maybe_a

Karma: 22
• Argument against CEV seems cool, thanks for formulating it. I guess we are leaving some utility on the table with any particular approach.

The part about referring to a model to adjudicate itself seems really off. I have a hard time imagining a system with better performance at the meta level than at the object level. Do you have a concrete example?

• Maybe the failure on people is caused by whatever they tweaked to avoid ‘generating realistic faces and known persons’?

• Cool analysis. Sounds plausible.

So you’re out to create a new benchmark? The reading SAT references the passage text in its answers via ellipses, which makes it hard for me to solve in a single read-through. Repeating the questions at the beginning and expanding the ellipses might fix that for humans. The current format is probably also confusing for pretrained models like GPT.

Requiring a longer task text doesn’t seem essential. In the end, maybe, you’d like to take some curriculum-learning experiment and thin out the training examples so that current memorization mechanisms wouldn’t suffice? Admittedly, I don’t know much about that field.

The area of neural networks in search looks like one half of a simple long-term memory: retrieval only, but it may have some useful ideas. Using an existing tool like Recoll to search through the corpus doesn’t work because you can’t backpropagate through it. This lack of compositionality is always bothersome.
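The backpropagation point can be illustrated with a minimal differentiable-retrieval sketch (everything here is a hypothetical toy, assuming documents are already embedded as vectors): score each document against the query, take a softmax, and return an attention-weighted mixture. Every step is smooth, unlike a hard top-1 lookup in an external search tool.

```python
import math

def soft_retrieve(query, docs):
    """Differentiable retrieval sketch: softmax-weighted mixture of documents.

    Unlike calling an external search tool (a hard, non-differentiable
    top-1 lookup), every step here is smooth, so gradients could flow
    from the returned vector back into the query and the embeddings.
    """
    # Dot-product relevance score for each document.
    scores = [sum(q * d for q, d in zip(query, doc)) for doc in docs]
    # Numerically stable softmax over the scores.
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Weighted mixture of the document vectors.
    dim = len(docs[0])
    return [sum(w * doc[i] for w, doc in zip(weights, docs)) for i in range(dim)]

docs = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
result = soft_retrieve([1.0, 0.1], docs)  # leans toward the first document
```

Swapping the softmax for a hard argmax recovers ordinary search and breaks the gradient, which is exactly the compositionality problem above.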

• No particular philosophy: just add some kludge to make your life easier, then repeat until they blot out the Sun.

My non-computer tool is pen & paper for notes, filing everything useful into the inbox during daily review. Everything else is based on org-mode, with Orgzly on mobile. Syncing over SFTP; not a cloud person.

I wrote an RSS reader in Python for filling the inbox, along with org-capture. I wouldn’t recommend the same approach, since elfeed should do the same reasonably easily. Having a script helps: running it automatically nightly and before the daily review fills the inbox with enough novel stuff to motivate going through it, and to avoid binging on other sites.

Other than the inbox, I have a project list & calendar within Emacs. I’m not maintaining good discipline for weekly/​monthly reviews, but it’s much smoother than keeping it in your head.

I have a log file that org-mode keeps ordered by date, and a references file that doesn’t get very organized or used often. Soon I will try to link the contents of my massive folder of PDFs with it.

• Consider not wasting your readers’ time by making them register on Grasple only to be presented with a 34-euro paywall.

• I’d think the ‘ethical’ in ‘review board’ has nothing to do with ethics; it’s more of a PR-wary review board. Limiting science to status-quo-bordering questions doesn’t seem most efficient, but it is a reasonable safety precaution. However, the board’s typical view might be skewed from real estimates of safety. For example, genetic modification of humans is probably minimally disruptive biological research (compared to, say, biological weapons), though it is considered controversial.

• My town …

Suppose 20% of wards are swung by one vote; that gives each voter a 1 in (5 × number of voters) chance of affecting a vote cast at the next level, if that’s how the US system works?
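The arithmetic is just this (the ward size below is a made-up placeholder, not a figure from the discussion):

```python
# If 20% of wards are decided by a single vote, a voter's chance of
# swinging the next level is 0.2 / ward_size, i.e. 1 in (5 * ward_size).
# The ward size here is a hypothetical example.
p_ward_on_edge = 0.2
ward_size = 1000
p_swing_next_level = p_ward_on_edge / ward_size  # 1 in 5000
```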

… elected officials change their behavior based on margins …

Which is an exercise in reinforcing prior beliefs, since margins are obviously insufficient data.

Politicians pay a lot more attention to vote-giving populations...

Are politicians equipped with a device to detect voters and their needs? If not, then it’s lobbying, not voting, that matters.

Population following my reasoning: me.

P.S. Thanks for hinting at the other question, which might be of actual use to me.

• Absolutely, shutting up and multiplying is the right thing to do.

Assume: a simple majority vote, 1001 voters, 1,000,000 QALY at stake, votes binomially distributed B(p=0.4), no messing with other people’s votes, and voting itself doesn’t give you QALY.

My vote swings iff 500 ≤ B(1001, 0.4) < 501, which happens with probability 5.16e-11, so voting is advised if it takes less than 27 minutes.

Realistically, usefulness of voting is far less, due to:

• actual populations are huge, and with them the chance of swing-voting falls;

• QALYs are not quite utils (e.g. others’ QALYs count the same way as your own);

• you will rarely see such huge rewards (if 1 QALY ~ $50,000, our scenario gave each voter a free $50M).

So, people who ‘need your vote’ in real-world scenarios are either liars or just hopeless.
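The swing-probability arithmetic above can be checked with a short standard-library script; computing the binomial PMF in log space avoids overflow in the huge binomial coefficient (the exact figures may differ slightly from the rounded ones quoted above):

```python
import math

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p), computed via log-gamma to
    avoid overflow in the binomial coefficient C(n, k)."""
    log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
               + k * math.log(p) + (n - k) * math.log(1 - p))
    return math.exp(log_pmf)

# My vote swings the election iff the other votes land exactly on the edge:
# 500 <= B(1001, 0.4) < 501, i.e. the binomial hits exactly 500.
p_swing = binom_pmf(500, 1001, 0.4)

# Expected gain from voting, converted to minutes of life.
qaly_at_stake = 1_000_000
minutes_per_qaly = 365.25 * 24 * 60
break_even_minutes = p_swing * qaly_at_stake * minutes_per_qaly
```

With these assumptions the swing probability lands around 1e-11 and the break-even time under an hour, matching the shape of the argument: the reward has to be enormous before voting pays for its own time cost.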

• I don’t care, because there’s nothing I can do about it. The same applies to all large-scale problems, like national elections.

I do understand that that point of view creates a ‘tragedy of the commons’, but there’s no way I can force millions of people to do my bidding on this or that.

I also do not make interventions to my lifestyle, since I expect AGW effects to be dominated by socio-economic changes over the next half-century.

• Are artificial neural networks really Turing-complete? Yep, they are [Siegelmann, Sontag ’91]. The number of neurons in the paper is …, with rational edge weights, so it’s really Kolmogorov-complex. This, however, doesn’t say whether we can build good machines for specific purposes.

Let’s figure out how to sort a dozen numbers with λ-calculus and sorting networks. It stands to notice that the lambda expression is O(1) in size, whereas the sorting network is O(n (log n)^2).

Batcher’s odd–even mergesort would be O((log n)^2) levels deep, and given that one neuron is used to implement each comparator, would allow O(n!) possible connections (around … per level). That we need 200 bits of insight to sort a dozen numbers with that specific method does not mean that there is no cheaper way to do it, but it sets a reasonable upper bound.

Apparently, I cannot do good lambda calculus, but it seems we can do merge sorting of Church-encoded numerals in fewer than a hundred lambda terms, which is about the same number of bits as the sorting network.

On a second note: how are Bayesian networks different from perceptrons, except for having no thresholds?

• A is not bad, because torturing a person and then restoring their initial state has precisely the same consequences as forging one’s own memory of torturing a person and restoring their initial state.

• Well, it seems somewhat unfair to judge the decision on information not available to the decision-maker; however, I fail to see how that is an ‘implicit premise’.

I didn’t think the Geneva Convention was that old, and actually, updating on it makes Immerwahr’s decision score worse, due to a lower expected number of saved lives (through a lower chance of chemical weapons being used).

Hopefully, roleplaying this update made me understand that in some value systems it’s worth it. Most likely, E(Δ victims due to Haber’s war efforts) > 1.

• Standing against unintended pandemics, atomic warfare, and other extinction-threatening events has been quite a good idea in retrospect. Those of us working on scientific advances should indeed ponder the consequences.

But the Immerwahr-Haber episode is just an unrelated tearjerker. Really, inventing a process for the creation of nitrogen fertilizers is so much more useful than shooting oneself in the heart. Also, chemical warfare turned out not to kill many people since WWI, so such a sacrifice is rather irrelevant.

• 9 Feb 2013 6:38 UTC
−4 points
Unless there are on the order of 2^KolmogorovComplexity(Universe) universes, the chance of it being constructed randomly is exceedingly low.