Thanks for the suggestion; I wasn’t aware of Robert Kegan or his work.
I suppose I may as well take the opportunity to point people to my essay Networks of Meaning (also available on my website), which covers some of the same ground, including connections with association and analogy. It may be a good complement to this post.
I don’t think this is fair. Advice is usually given when requested; in fact, people often dislike receiving unsolicited advice. I’m sure people would be fine with advertisement if it were opt-in.
I didn’t mean to imply that advice is always given with consent, just that it is consensual to a far greater degree than advertisement, and that that is an important difference.
Even when advice is unsolicited (your intervention example is a good one), it is usually given with the intention of benefiting the recipient, whereas advertisement is usually carried out with the intention of benefiting the advertiser. Again, I’m not saying it’s always black and white, but I think there are pretty clear differences between the two activities on average.
I know the author and the blog but didn’t know the paper, thanks!
Thanks for commenting. I haven’t written about anything like that because my thoughts about it are rudimentary at best! I think you’re correct that these speculations are premised on some sort of moral realism (if I understand you correctly). To be clear, I really don’t know whether moral realism or anti-realism is more plausible. From my admittedly shallow knowledge of metaethics, something like constructivism seems most plausible to me, but I’m not sure how that maps onto the realism/anti-realism question.
Indeed, see the 5th footnote!
This is excellent, thanks!
I see a minor bug with the hover view, e.g. in the second footnote of this post—perhaps it has something to do with the whole footnote content being a hyperlink?
That’s interesting, because, as you say, I think most layouts assume this prioritisation:
Consecutive taps with different hands > With different fingers on same hand > With same finger
If the middle one there is really bad for you, and if you know some programming, I think you could run the Carpalx code, assign extra weight to the “hand runs” penalty, and see if it can generate a layout that suits your needs. (I haven’t tried this myself.)
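To illustrate the general idea, here is a minimal sketch of the kind of optimisation Carpalx performs: simulated annealing over layouts, with an effort function in which you could crank up a same-hand-run penalty. To be clear, this is not Carpalx’s actual code or configuration format (which I haven’t checked); all names, weights, and the effort model are made up for illustration.

```python
import math
import random

# Hypothetical effort model for illustration only; not Carpalx's
# actual parameters or configuration.
LEFT_HAND_KEYS = set("qwertasdfgzxcvb")  # keys struck with the left hand

def same_hand(key_a, key_b):
    return (key_a in LEFT_HAND_KEYS) == (key_b in LEFT_HAND_KEYS)

def effort(layout, corpus, hand_run_weight=5.0):
    """Total typing effort for a corpus under a layout (char -> key).
    Raising hand_run_weight penalises consecutive same-hand taps harder."""
    keys = [layout[c] for c in corpus if c in layout]
    total = 0.0
    for prev, cur in zip(keys, keys[1:]):
        total += 1.0                       # base effort per keystroke
        if same_hand(prev, cur):
            total += hand_run_weight       # extra penalty for a hand run
    return total

def anneal(corpus, hand_run_weight=5.0, steps=20000, temp=2.0, cooling=0.9995):
    """Simulated annealing: repeatedly swap two letters' key assignments,
    keeping swaps that lower effort (and occasionally ones that don't)."""
    chars = "abcdefghijklmnopqrstuvwxyz"
    keys = list(chars)
    random.shuffle(keys)
    layout = dict(zip(chars, keys))
    cur = best = effort(layout, corpus, hand_run_weight)
    best_layout = dict(layout)
    for _ in range(steps):
        a, b = random.sample(chars, 2)     # propose swapping two letters' keys
        layout[a], layout[b] = layout[b], layout[a]
        new = effort(layout, corpus, hand_run_weight)
        if new < cur or random.random() < math.exp((cur - new) / temp):
            cur = new
            if cur < best:
                best, best_layout = cur, dict(layout)
        else:
            layout[a], layout[b] = layout[b], layout[a]  # revert the swap
        temp *= cooling
    return best_layout, best
```

Running this on a representative text corpus with hand_run_weight set high versus low should show whether a layout exists that trades a little overall effort for far fewer same-hand runs. (It recomputes the full corpus effort each step, so it’s slow; a real optimiser would score incrementally.)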
I think the Carpalx website was down for a spell this morning; the link didn’t work for me a few hours ago, but works now. Try again?
Thanks! Coincidentally, I was just reading his post on nuclear deterrence, but I haven’t read his stuff on Rome. I should have figured that he’d covered this in depth.
It seems to me that providing a confidence level is mainly beneficial in allowing you and me to see how well calibrated your predictions are.
It’s also just a way to communicate epistemic status, right?
Providing a confidence level for counterfactual statements about history gives me virtually no information unless I already have a well-formed prior about your skill at historical counterfactual analysis, which, for the same reasons, I can’t really have.
Not 100% sure I follow you, but I guess the idea was to communicate an estimate of how strong the causal influence is. Like, maybe Rome’s location caused it to grow economically early on, but had little impact on its ability to expand militarily after that (it would have expanded anyway). If I thought so, my confidence in the first of these claims would have stayed the same, but my confidence in the second would have been much lower:
It first outgrew the other Latin cities mainly due to its location as a nexus and its proximity to the wealthy Etruscans (75% confidence). If this is not the case, Rome does not get hegemony over Latium (70% confidence).
That said, I guess you’re right that it’s not that informative. The two claims are likely to be correlated. I’ll consider not giving confidences for such counterfactuals next time.
One of my favourites is “paragrafryttare”, loaned from the German “Prinzipienreiter”, basically “principle rider” or “principle knight”, used to denote someone who applies rules, maxims, laws and principles very rigidly. Of course English has “pedant”, which has a similar feel to it, but a pedant is more someone who rigidly focuses on insignificant things while ignoring more important ones.
I’d like to hear more about this. I’m not sure I understand the term “moral strategy” when it’s perfectly aligned with optimal personal outcomes (ability to out-compete other strategies). If it’s an optimal strategy, why do you need to label it “moral” or “immoral”?
What I mean by “moral strategy” is a strategy that’s recommended by an observer’s moral system. That isn’t necessarily the strategy that’s optimal for the player, at least under non-consequentialist ethics. (An immoral strategy would be any strategy that’s prohibited by that moral system.) If there are a bunch of prisoner’s dilemma-type games happening out there in the world, and they tend towards an equilibrium where people use strategies that aren’t recommended, or are even prohibited, by some observer’s ethics, then that’s bad according to that observer, even if each player’s strategy is individually optimal.
I think usually Transformative AI.
This is really terrific!
As for the first bullet point, it basically goes like this: if what you are about to do isn’t something you could will to be a universal law (that is, if you wouldn’t want other rational agents to behave similarly), then it’s probably not what the Optimal Decision Algorithm would recommend you do. An app that recommended you do this would either recommend that others in similar situations behave similarly (and thus lose market share to apps that recommended more pro-social behavior, the equivalent of cooperate-cooperate instead of defect-defect), or it would make an exception for you and tell everyone else to cooperate while you defect (and thus predictably screw people over, lose customers, and eventually be outcompeted as well).
I think it’s even simpler than that, if you take the Formula of Universal Law to be a test of practical contradiction, i.e. whether action X could serve as the universal method of achieving purpose Y. Then it’s really obvious why a central planner could not recommend action X: it would not achieve purpose Y. For example, recommending lying doesn’t work because, if lying were the universal method, no one would trust anyone, so it would be useless.
Yes, exactly. To me it makes perfect sense that an Optimal Decision Algorithm would follow a rule like this, though it’s not obvious that it captures everything that the other two statements (the Formula of Humanity and the Kingdom of Ends) capture, and it’s also not clear to me that it was the interpretation Kant had in mind.
Btw, I can’t take credit for this—I came across it in Christine Korsgaard’s Creating the Kingdom of Ends, specifically the essay on the Formula of Universal Law, which you can find here (pdf) if you’re interested.
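To make the practical-contradiction reading concrete, here is a toy model of the lying example (my own illustrative numbers and linear trust model, not anything from the post or from Korsgaard): a lie pays off only insofar as the listener trusts you, and trust erodes as lying becomes universal.

```python
# Toy model of the practical-contradiction test for lying.
# The numbers and the linear trust model are purely illustrative assumptions.

def payoff_of_lying(fraction_of_liars):
    trust = 1.0 - fraction_of_liars   # assume trust erodes as lying spreads
    gain_if_believed = 10.0           # what a successful lie is worth
    return trust * gain_if_believed

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"liars: {f:.0%}   payoff of a lie: {payoff_of_lying(f):.1f}")

# When lying is universal (100% liars), trust is gone and a lie achieves
# nothing: the method fails precisely when universalised, which is the
# practical contradiction.
```

The same shape of argument covers the app case: a strategy that only works as an exception stops working once the app universalises it across its users.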
This is interesting, though I expect it’s an upper bound on Copilot productivity boosts:
Writing an HTTP server is a common, clearly defined task which has lots of examples online.
JavaScript is a popular language (meaning there’s lots of training data for Copilot).
I imagine Copilot is better for building a thing from the ground up, whereas the programming most programmers do most days consists in extending, modifying and fixing existing stuff, meaning more thinking and reading and less typing.
Honestly I think the whole “build from ground up”/“extending, modifying, and fixing” dichotomy here is a little confused though. What scale are we even talking about?
I meant to capture something like “lines of code added/modified per labour time spent”, and to suggest that Copilot would reap more benefits the higher that number is (all else equal).
Note that Europe and the EU both have significantly higher median ages than the U.S. (~42 and 42.6 years versus 38.1).