I think I’ve commented on your newsletters a few times, but haven’t commented more because it seems like the number of people who would read and be interested in such a comment would be relatively small, compared to a comment on a more typical post.
I am surprised you think this. Don’t the newsletters tend to be relatively highly upvoted? They’re one of the kinds of links that I always automatically click on when I see them on the LW front page.
Maybe I’m basing this too much on my own experience, but I would love to see more discussion on the newsletter posts.
For freedom-as-arbitrariness, see also: Slack
If your car was subject to a perpetual auction and ownership tax as Weyl proposes, bashing your car to bits with a hammer would cost you even if you didn’t personally need a car, because it would hurt the rental or resale value and you’d still be paying tax.
I don’t think this is right. COST stands for “Common Ownership Self-Assessed Tax”. The self-assessed part refers to the idea that you personally state the value you’d be willing to sell the item for (and pay tax on that value). Once you’ve destroyed the item, presumably you’d be willing to part with the remains for a lower price, so you should just re-state the value and pay a lower tax.
It’s true that damaging the car hurts the resale value and thus costs you (in terms of your material wealth), but this would be true whether or not you were living under a COST regime.
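To make the self-assessment mechanic concrete, here is a toy sketch. The numbers and the 7% tax rate are made up for illustration; they are not Weyl’s actual proposed figures.

```python
# Toy illustration of a COST (Common Ownership Self-Assessed Tax).
# The tax rate and dollar values below are hypothetical.

TAX_RATE = 0.07  # annual tax as a fraction of self-assessed value


def annual_tax(self_assessed_value: float) -> float:
    """Tax owed per year, given the price you declare you'd be
    willing to sell the item for (anyone may buy at that price)."""
    return TAX_RATE * self_assessed_value


# Before smashing the car: you'd sell it for $10,000.
tax_before = annual_tax(10_000)

# After smashing it, you'd part with the wreck for $500,
# so you re-declare the lower value and your tax falls too.
tax_after = annual_tax(500)

print(round(tax_before, 2), round(tax_after, 2))
```

The point of the sketch is just that under COST the declared value tracks your actual willingness to sell, so destroying the item lowers your tax bill rather than leaving you paying tax on a value the item no longer has.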
Whatever ability IQ tests and math tests measure, I believe that lacking that ability doesn’t have any effect on one’s ability to make a good social impression or even to “seem smart” in conversation.
That section of Sarah’s post jumped out at me too, because it seemed to be the opposite of my experience. In my (limited, subject-to-confirmation-bias) experience, how smart someone seems to me in conversation seems to match pretty well with how they did on standardized tests (or other measures of academic achievement). Obviously not perfectly, but way way better than chance.
I would also expect that, courtesy of things like Dunning-Kruger, people towards the bottom will be as bad at estimating IQ as they are at estimating their competence at any particular thing.
FWIW, the original Dunning-Kruger study did not show the effect that it’s become known for. See: https://danluu.com/dunning-kruger/
In two of the four cases, there’s an obvious positive correlation between perceived skill and actual skill, which is the opposite of the pop-sci conception of Dunning-Kruger.
I’m not totally sure I’m parsing this sentence correctly. Just to clarify, “large firm variation in productivity” means “large variation in the productivity of firms” rather than “variation in the productivity of large firms”, right?
Also, the second part is saying that on average there is productivity growth across firms, because the productive firms expand more than the less productive firms, yes?
Not sure exactly what you mean by “numerical simulation”, but you may be interested in https://ought.org/ (where Paul is a collaborator), or in Paul’s work at OpenAI: https://openai.com/blog/authors/paul/ .
Just had a call with Nick Bostrom who schooled me on AI issues of the future. We have a lot of work to do.
This same candidate (whom the markets currently give a 5% chance of being the Democratic nominee) also wants to create a cabinet-level position to monitor emerging technology, especially AI:
Advances in automation and Artificial Intelligence (AI) hold the potential to bring about new levels of prosperity humans have never seen. They also hold the potential to disrupt our economies, ruin lives throughout several generations, and, if experts such as Stephen Hawking and Elon Musk are to be believed, destroy humanity.
...As President, I will…
* Create a new executive department – the Department of Technology – to work with private industry and Congressional leaders to monitor technological developments, assess risks, and create new guidance. The new Department would be based in Silicon Valley and would initially be focused on Artificial Intelligence.
* Create a new Cabinet-level position of Secretary of Technology who will be tasked with leading the new Department.
* Create a public-private partnership between leading tech firms and experts within government to identify emerging threats and suggest ways to mitigate those threats while maximizing the benefit of technological innovation to society.
It seems to me that perhaps the major difference between active/concentrated curiosity and open/diffuse curiosity is how much of an expectation you have that there’s one specific piece of information you could get that would satisfy the curiosity. (And for this reason the “concentrated” and “diffuse” labels do seem somewhat apt to me.)
Active/concentrated curiosity is focused on finding the answer to a specific question, while open/diffuse curiosity seeks to explore and gain understanding. (And that exploration may or may not start out with its attention on a single object/emotion/question.)
See also my comment here on non-exploitability.
Nitpick: I think the intro example would be clearer if there were explicit numbers of grapes/oranges rather than “some”. Nothing is surprising about the original story if Beatriz got more oranges from Deion than she gave up to Callisto. (Or gave away fewer grapes to Deion than she received from Callisto.)
Unless I missed it, neither this comment nor the main post explains why you ultimately decided in favor of karma notifications. You’ve listed a bunch of cons—I’m curious what the pros were.
Was it just an attempt to achieve this?
I want new users who show up on the site to feel rewarded when they engage with content
Great long-form interview with Andrew Yang here: Joe Rogan Experience #1245 - Andrew Yang.
Did you make any update regarding the simplicity / complexity of value?
My impression is that theoretical simplicity is a major driver of your preference for NU, and also that if others (such as myself) weighed theoretical simplicity more highly, they would likely be more inclined towards NU.
In other words, I think theoretical simplicity may be a double crux in the disagreements here about NU. Would you agree with that?
Meta-note: I am surprised by the current karma rating of this question. At present, it is sitting at +9 points with 7 votes, but it would be at +2 with 6 votes w/o my strong upvote.
To those who downvoted, or do not feel inclined to upvote—does this question not seem like a good use of LW’s question system? To me it seems entirely on-topic, and very much the kind of thing I would want to see here. I found myself disagreeing with much of the text, but it seemed to be an honest question, sincerely asked.
Was it something about the wording (either of the headline or the explanatory text) that put you off?
Relatedly: shorter articles don’t need to be as well-written and engaging for me to actually read to the end of them.
I suspect, though, that there is wide variation in people’s willingness to read long posts, perhaps explained (in part) by differences in reading speed.
If the rationality and EA communities are looking for a unified theory of value
Are they? Many of us seem to have accepted that our values are complex.
Absolute negative utilitarianism (ANU) is a minority view despite the theoretical advantages of terminal value monism (suffering is the only thing that motivates us “by itself”) over pluralism (there are many such things). Notably, ANU doesn’t require solving value incommensurability, because all other values can be instrumentally evaluated by their relationship to the suffering of sentient beings, using only one terminal value-grounded common currency for everything.
This seems like an argument that it would be convenient if our values were simple. This does not seem like strong evidence that they actually are simple. (Though I grant that you could make an argument that it might be better to try to achieve only part of what we value if we’re much more likely to be successful that way.)
FWIW, I was thinking of the related relationship as a human-defined one. That is, the author (or someone else?) manually links another question as related.
Q&A in particular is something that I can imagine productively scaling to a larger audience, in a way that actually causes the contributions from the larger audience to result in real intellectual progress.
Do you mean scaling it as is, or in the future?
I think there’s a lot of potential to innovate on the Q&A system, and I think it’d be valuable to make progress on that before trying to scale. In particular, I’d like to see some method of tracking (or taking advantage of) the structure behind questions—something to do with how they’re related to each other.
Maybe this is as simple as marking two questions as “related” (as I think you and I have discussed offline). Maybe you’d want more fine-grained relationships.
It’d also be cool to have some way of quickly figuring out what the major open questions are in some area (e.g. IDA, or value learning), or maybe what specific people consider to be important open questions.