FWIW another reason, somewhat similar to the low hanging fruit point, is that because the remaining problems are increasingly specialized, they require more years’ training before you can tackle them. I.e. not just harder to solve once you’ve started, but it takes longer for someone to get to the point where they can even start.
Also, I wonder if the increasing specialization means there are more problems to solve (albeit ever more niche), so people are being spread thinner among them. (Though conversely there are more people in the world, and many more scientists, than a century or two ago.)
In software development, a perhaps relevant kind of problem solving, extra resources in the form of more programmers on the same project don't speed things up much. My guesstimate is output = time × log(programmers). I assume the main reason is that there's a limit to how far you can divide a project into independent parallel programming tasks. (Cf. 9 women can't make a baby in 1 month.)
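To make the guesstimate concrete, here's a toy sketch. The log base and the +1 offset are arbitrary assumptions of mine, not part of the claim; the point is just the diminishing return from head-count:

```python
import math

def output(time_units: float, programmers: int) -> float:
    """Guesstimated project output under diminishing parallelism.

    Output scales linearly with time but only logarithmically with
    head-count, since tasks can't be fully parallelised.
    """
    return time_units * math.log2(programmers + 1)

# Doubling the team from 7 to 15 adds only one 'unit' of parallel gain:
print(output(10, 7))   # 30.0  (10 * log2(8))
print(output(10, 15))  # 40.0  (10 * log2(16))
```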
Except that if the people are working in independent smaller teams, each trying to crack the same problem, and *if* the solution requires a single breakthrough (or a few?) which can be made by a smaller team (e.g. public key encryption, as opposed to landing a man on the moon), then presumably it’s proportional to the number of teams, because each has an independent probability of making the breakthrough. And it seems plausible that solving AI threats might be more like this.
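The independence argument can be checked with a one-liner. If each team has a small independent chance p of making the breakthrough, the chance that at least one of n teams succeeds is 1 − (1 − p)^n, which is roughly n × p while p is small (the per-team probability below is an illustrative number, not an estimate):

```python
def p_breakthrough(p_per_team: float, n_teams: int) -> float:
    """Chance that at least one of n independent teams succeeds."""
    return 1 - (1 - p_per_team) ** n_teams

# With small per-team odds, success probability is roughly
# proportional to the number of teams:
print(p_breakthrough(0.01, 1))   # ~0.01
print(p_breakthrough(0.01, 10))  # ~0.096, close to 10x
```

The proportionality breaks down once n × p gets large, of course: with 100 teams the chance is ~0.63, not 1.0.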
But philosophers are good at proposing answers—they all do that, usually just after identifying a flaw in an existing proposal.
What they’re not good at is convincing everyone else that their solution is the right one. (And presumably this is because multiple solutions are plausible. And maybe that’s because of the nature of proof—it’s impossible to prove something definitively, and disproving typically involves finding a counterexample, which may be hard to find.)
I’m not convinced philosophy is much worse at finding actual answers than, say, physics. It’s not as if physics is completely solved, or even particularly stable. Perhaps its most promising period of stability was the laws of motion & gravity after Newton—and that lasted less than two centuries. Physics seems better than philosophy at forming a temporary consensus; but that’s no use (and indeed is counterproductive) unless the solution is actually right.
Cf. a rare example of consensus in philosophy: knowledge was ‘solved’ for some 2,300 years with the theory that it’s ‘justified true belief’. Until Gettier thought of counterexamples.
Thanks—yes I think there is a case for having a ‘Shouldn’t’ list. As you imply, it should only be for things you know are useless/harmful, not for things you’ve merely decided not to do because they are low importance (e.g. paint the bathroom). Hence ‘shouldn’t do’ not merely ‘don’t do’.
Sometimes you can delegate things to your boss—e.g. by declining work he tries to delegate to you (say “I’m too busy”).
Thanks—glad you like it. I don’t know how the Eisenhower Box is usually taught, but judging by online references to it, e.g. in blogs, people don’t seem to question its validity. But in practice they can’t be following it that literally: they won’t be doing all the things it tells them to do, delegating all the things it tells them to delegate, etc. So I suppose they must be treating it as just a rough-and-ready guide.
Looks like there is, but they must be LessWrong members.
These agents are nothing but a stupid interface layer between me and the flight management system.
I suggest a possible term for this is MUI: Meat User Interface. The customer interacts with the MUI, and the MUI interacts with the GUI.
Yes I follow your argument, though I’m a bit doubtful about a result that produces a large difference between utility function and moral credit.
Re your Supreme Court example (and I agree this is a clearer way of thinking about it), I don’t quite follow the argument. It’s true that, had you voted differently, more of the other justices would have had to vote differently (‘flip’) for the outcome to change; but as it’s a given that you knew how everyone else was going to vote, flipping is ruled out—their votes are set in stone.
And re ‘still each justice’s preference… matters’, I wasn’t clear if this is the same point or a separate point—i.e. a signalling or similar argument that the size of the majority matters, e.g. politically.
A little bit of altruism still seems to make it rational even if you care almost entirely about yourself—see the example calculations.
I used to think that making voting mandatory was a good solution, but nowadays I think it’s a draconian measure. Because what if you disapprove, for example, of the particular voting system (First Past the Post in the UK/US)? Then forcing you to comply with it, perhaps only symbolically (since you can refuse to comply in other ways, like spoiling your ballot paper, unless that too is criminalized), is a waste of everyone’s time.
Similarly if you don’t want to vote because you are indifferent between the candidates, or think you don’t know enough about the issues to choose a candidate, etc.
Something somewhat similar to, but less draconian than, compulsory voting would be to pay people to vote, e.g. £5 / $5 in cash or vouchers as you exit the polling station. Which would also somewhat correct the current skew in turnout—poorer people are currently less likely to vote.
Having at least a plan for when to work, and being strict about it, works for me. I set alarms on my phone to work in 1-hour focussed bursts, with 15-minute breaks in between, all morning and late afternoon. It seems most people do their best focussed work in the morning; there’s also that famous violin/piano student research indicating that the best students also practice late in the afternoon. I reserve early/mid afternoon for light work (admin etc.).
In addition, I suggest you have a general plan for which projects to work on during a week & month, and make a daily more specific (though not necessarily detailed) plan first thing in the morning, or (better) at the end of the work day for the next day.
Yes, I’ve been tracking my productivity daily for over 6 years. I do it using a simple iPhone app called ATracker, which lets you define projects & categories and hit a button whenever you start/stop them.
I use about a dozen categories (for different types of work & broad types of leisure, also broad locations). Every week or two I export the data into a spreadsheet and produce some pretty charts and also many metrics, e.g. about how my time usage matches up to various targets.
It’s kind of useful but I’m not that rigorous in keeping to the targets. Nonetheless if I start getting lax, then after a few days or weeks I can’t pretend it’s not happening, and the data helps nudge me back into being more productive.
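If it helps, here’s a minimal sketch of the spreadsheet step I described, i.e. turning a time-tracker export into per-category totals. The CSV column names are a guess for illustration, not ATracker’s actual export format:

```python
import csv
from collections import defaultdict

def hours_by_category(csv_path: str) -> dict:
    """Sum logged hours per category from a time-tracker export.

    Assumes a CSV with 'Category' and 'Duration (hours)' columns;
    this layout is hypothetical, not ATracker's actual format.
    """
    totals = defaultdict(float)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["Category"]] += float(row["Duration (hours)"])
    return dict(totals)

# Toy export for illustration:
with open("sample_export.csv", "w", newline="") as f:
    f.write("Category,Duration (hours)\n"
            "Deep work,2.5\nAdmin,1.0\nDeep work,3.0\n")

print(hours_by_category("sample_export.csv"))
# {'Deep work': 5.5, 'Admin': 1.0}
```

Comparing the totals against weekly targets per category is then a one-line dict comprehension.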
By the way, I think you’re being overly ambitious aiming at 12 hours of proper deep work per day. I think it’s very hard to average more than about 6 hours per day over long periods.
If you start doing a similar kind of tracking, I’d be happy to share with you the kinds of charts, metrics etc. I produce, some of which aren’t obvious.
Yes, I found my thought processes improved dramatically recently when I stopped listening to the radio after waking in the morning, and in the shower. I now have excellent thinking & ideas at that time of day. Silence and no distractions are golden. (No wonder so many people have good ideas in the shower.)
I also recommend having a notepad by your bed. I’ve done this for years. Sometimes ideas (or things I forgot to do) occur to me shortly before going to sleep, or occasionally when waking in the night, and I write them down in the dark. It gets them out of your head, which helps you sleep too.
Thanks, I’ll read those with interest.
I didn’t think it likely that business has solved any of these problems, but I just wonder if it might include particular failures not found in other fields that help point towards a solution; or even if it has found one or two pieces of a solution.
Business may at least provide some instructive examples of how things go wrong. Because business involves lots of people & motives & cooperation & conflict, things which I suspect humans are particularly good at thinking & reasoning about (as opposed to more abstract things).
I.e. you can’t tell how effective a president will be from their party’s policies, because sometimes their most effective actions are implementations of their opponents’ policies.
Yes, could be. It’s in line with the Putanumonit arguments that you just can’t tell which party will be better for the country.
I can’t think of particular instances of this in the UK, so I don’t know if this is more of a US thing. What quite often happens in the UK (particularly since Tony Blair) is parties stealing each other’s policies, sometimes even in stronger form than the other party’s. But presumably that’s just them trying to tempt voters across from the other side with occasional juicy little morsels. I.e. both parties converging on the median voter. [ADDED] Though that is similar to your point that the other party may implement your party’s policies, perhaps more effectively, which makes it harder to predict which party would run the country better.
Yes, interesting points. I haven’t really given any thought to voting as a reward/punishment, but many voters do this. Though of course it’s mixed up with forward-looking voting, since (for many people) you vote against a politician who did something bad so that they won’t be around to do more bad things.
And politicians anticipate punishment-voting as a deterrent to them doing bad things, since there isn’t much other deterrent (except the law).
Also an interesting point re voting as reciprocation to similar voters as a kind of solidarity group. (Parties are themselves solidarity groups, but so of course are special interest groups and other supporters of particular policies.)
I’m not sure whether or how all this affects the calculus. Eliezer wrote an article on voting a while back in which if I recall his line was something like ‘it’s all too complicated to model, so just stick to simple reasoning’.
Re your pentobarbital example, this could be something where the 0.7 cents direct effect on you is bigger—though it would indeed have to be something approaching a $1 billion effect to count (since the expected benefit to you is this / 3 million, in the UK). Though that said almost all issues like this affect quite a few other people too, so altruism makes it worthwhile anyway.
I suspect that the chances of a 3rd party winning are orders of magnitude lower than a 1st or 2nd, so the expected value from you having the deciding vote would be too small. But in terms of policy influence, if the 3rd party does unusually well (without winning), I agree that can be significant. Indeed I recall an example of this happening in the UK in the 1990s, when in one national election the Green party (then the 4th or 5th party) did unexpectedly well, albeit still only getting a few % of the vote, which immediately made the major parties start saying how important the environment was and announcing new policies.
Yes, but since on my numbers the benefits of voting are so huge, a tiny difference between parties can still justify it. E.g. near the end of the article I calculate that in a UK general election, if the difference between the two main parties equals 10% of government spending (in benefit to the country, not necessarily actual spend), that equals 7% of Brexit or about $7,000 to a marginal voter.
So even if it’s only worth 0.1% of government spending (e.g. a small confidence that one party will make a small execution improvement on a few policies), that’s $70 - enough to justify voting.
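The scaling in these two paragraphs is just linear proportion; as a sanity check, taking the article's anchor figures as given:

```python
ANCHOR_VALUE = 7000      # $ value to a marginal UK voter (from the article)...
ANCHOR_FRACTION = 0.10   # ...when the party gap is worth 10% of govt spending

def vote_value(party_gap_fraction: float) -> float:
    """Expected $ value of a marginal vote, scaling linearly
    from the article's anchor point (figures taken as given)."""
    return ANCHOR_VALUE * party_gap_fraction / ANCHOR_FRACTION

print(round(vote_value(0.10)))   # 7000: the article's anchor case
print(round(vote_value(0.001)))  # 70: even a 0.1% gap justifies voting
```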
[Response substantially edited:]
If I understand you right, you’re saying that if Remain were to happen then Leavers would incur a large actual loss (relative to the Leave scenario), because they reckon the benefits of leaving in terms of social cohesion, security etc. will not occur.
Perhaps those aren’t the best examples, as arguably those are matters of fact, so Leavers could find out they were wrong if it turns out there is no loss in social cohesion & security by remaining; so they wouldn’t necessarily lose utils. A better example might be national self-determination, which a Leave supporter would value come what may, and a Remain supporter might put little value on. That is, Leavers aren’t merely predicting that leaving the EU would make things better for the UK, they are expressing a (non-falsifiable) preference for being out of the EU.
I hadn’t thought of that, and it could be so—or perhaps more likely it’s a mixture of prediction and preference. In which case Leavers would only incur some of the negative utils, still leaving tens of thousands of $ per extra Remain voter. (And still plenty to justify voting, even after major shrinkage from the uncertainty that policies will turn out/be implemented as expected.)
Complicated by the fact that if Remain happens, Leave supporters would always feel things would have been better if Leave had happened, even if their predictions were unknowingly false, because they never get to try out & compare both scenarios. I.e. Leavers will never be satisfied if the UK remains, and Remainers will never be satisfied if the UK leaves, regardless of how the other possible world would have been. (Maybe that’s your main point here.) But I reckon that dissatisfaction is small compared with the economic harm caused by leaving (if the median GDP predictions are true).
By the way, I’m not convinced voting is rational (hence I have never voted in my life), and believed that it wasn’t, until the altruism calculation occurred to me a year or so ago. My current suspicion is about the validity of multiplying a very small probability by a very large benefit to get a justification; but I haven’t yet read/thought of a strong argument against this.
(PS Ah, you’re Jacob F—good to meet you! I enjoy your blog.)