wondering if the community here thought Hume was an idiot
Just searched old posts, and apparently at least one person on LW thought Hume was a candidate for the Greatest Philosopher in History. That’s an obscure post with only one upvote though, so it can’t be considered representative of the community’s views.
In general I think this community tends not to be too concerned with evaluating long-dead philosophers, and instead prefers to figure out what we can, informed by all the knowledge we currently have available from across scientific disciplines.
Historical philosophers may have been bright and made good arguments in their time. But they were starting at a huge disadvantage relative to us if they didn’t have access to a modern understanding of evolution, cognitive biases, logic and computability, etc.
For a fairly representative account of how LW-ers view mainstream philosophy, see: “Less Wrong Rationality and Mainstream Philosophy” and “Philosophy: A Diseased Discipline.”
wondering if the community here thought… the latest findings about emotions being a necessary part of decision-making horrifying
I’m not sure exactly what you’re referring to. But in general I think the community is pretty on-board with thinking that there’s a lot that our brains do besides explicit verbal deductive reasoning, and that this is useful.
And also that you’ll reason best if you can set up a sort of dialogue between your emotional, intuitive judgments and your explicit verbal reasoning. Each can serve as a check on the other. Neither is to be completely trusted. And you’ll do best when you can make use of both. (See Kahneman’s work on System 1 and System 2 thinking.)
I’m looking for an old post where Eliezer makes the basic point that we should be able to do better than intellectual figures of the past, because we have the “unfair” advantage of knowing all the scientific results that have been discovered since then.
I think he cites in particular the heuristics and biases literature as something that thinkers wouldn’t have known about 100 years ago.
I don’t remember if this was the main point of the post it was in, or just an aside, but I’m pretty confident he made a point like this at least once, and in particular commented on how the advantage we have is “unfair” or something like that, so that we shouldn’t feel at all sheepish about declaring old thinkers wrong.
Anybody know what post I’m thinking of?
“Which are relevant, and which are most important?”
That’s precisely the subjective part.
They could be objective, given a context. Now the choice of context may be a matter of taste or preference. But given a context that we want to ask questions about, we might be able to get objective answers. (E.g. will this hypothetical future person think like me?)
But I agree that some subjectivity is involved somewhere in the process.
Romeo Stevens, the king of pith.
Ah, maybe I misunderstood what you meant when you said you would throw it away. I thought maybe you meant you’d discard it in favor of some other preferred theory. Or in favor of whatever you believed in before you learned about patternism.
And depending on what those theories are, that seemed like it might be a bad move, from my perspective.
But if instead your attitude is more like picking up a book, only to find out the author only got half way through writing it, and you’re going to set it aside until it’s done so you can read the whole story, then it seems to me like there’s nothing wrong with that.
but for now is the superiority of subjective measuring the viewpoint I’ll accept
I didn’t follow this. You’re saying for now you’re leaning towards a subjective measuring viewpoint? Which one?
I’m willing to give up on trying to find some impartial way of measuring this
Depending on what you mean by “impartial”, I might agree that that’s the right move. But I think a good theory might end up looking more like special relativity, where time, speed, and simultaneity are observer-dependent (rather than universal), but in a well-defined way that we can speak precisely about.
I assume personal identity will be a little more complicated than that, since minds are more complicated than beams of light. But I just wanted to highlight that as an example where we went from believing in a universal quantity to a relative one, without having to totally throw up our hands and declare it all meaningless.
I’m at a loss to how you could build on it honestly.
FWIW, if I were to spend some time on it, I’d maybe start by thinking through all the different ways that we use personal identity. Like, how the concept interacts with things. For example, partly it’s about what I anticipate experiencing next. Partly it’s about which beings’ future experiences I value. Partly it’s about how similar that entity is to me. Partly it’s about how much I can control what happens to that future entity. Partly it’s about what that entity’s memories will be. Etc, etc.
Just keep making the list, and then analyze various scenarios and thought experiments and think through how each of the different forms of personal identity applies. Which are relevant, and which are most important?
Then maybe you have a big long list of attributes of identity, and a big long list of how decision-relevant they are for various scenarios. And then maybe you can do dimensionality reduction and cluster them into a few meaningful categories that are individually amenable to quantification (similar to how the Big 5 personality system was developed).
That doesn’t sound so hard, does it? ;-)
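For concreteness, here’s a toy sketch of that last step. Everything here is made up for illustration: the scenarios, the identity criteria, and the scores are all hypothetical placeholders, and the “clustering” is just PCA via SVD.

```python
# Hypothetical sketch: rows are thought experiments, columns are
# candidate identity criteria. All names and scores are invented
# purely to illustrate the proposed analysis.
import numpy as np

criteria = ["memory", "anticipation", "similarity", "control", "valuing"]
# Decision-relevance scores (0-1) of each criterion in each scenario.
scores = np.array([
    [0.9, 0.8, 0.7, 0.1, 0.9],   # teleporter copy
    [0.9, 0.9, 0.9, 0.9, 0.9],   # ordinary aging
    [0.8, 0.2, 0.6, 0.0, 0.5],   # destructive upload
    [0.3, 0.1, 0.9, 0.0, 0.2],   # amnesia
])

# Center the data and run PCA via SVD to find a few axes that explain
# most of the variation -- the Big-5-style dimensionality reduction.
centered = scores - scores.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)
print("variance explained per component:", np.round(explained, 2))
# Criteria that load heavily on the same component cluster together.
print("component 1 loadings:", dict(zip(criteria, np.round(Vt[0], 2))))
```

With real data you’d use many more scenarios than criteria, and you’d sanity-check whether the resulting components actually carve identity at meaningful joints.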
Hmm, are you thinking of the theory of patternism as something other than the claim that 1) it’s your pattern of atoms (and how they interact with each other and the rest of the world) that’s relevant for determining your behavior, including your internal experience, and 2) that there’s no metaphysical personal identity other than what arises from the relationship of these patterns to each other over time?
It seems to me that this predicts that we won’t in the future discover some way to determine which of two copies of a person is “the original”.
If you claim instead that this is not a prediction, but just a restatement of patternism, then maybe that’s a valid criticism—that patternism is not a theory but just a claim. But then I wouldn’t want to throw that claim away! Because I expect it to be true.
If all you mean to be saying is that it’s incomplete, then I don’t disagree. But you described throwing it away, which seems to me like not what you’d want to do with our best theory so far. Rather you’d want to build on, refine, or expand it.
Unless you think there’s a better foundation to build on?
how does one person retain their identity despite changes in material composition
I would answer: imperfectly, partially, and non-platonically. :-)
EDIT: I think I may have missed your point though. Because I’m not sure which part of my comment you’re responding to.
I can know that heat comes from particles moving fast w/o having a full understanding of thermodynamics. It seems like maybe we’re in a similar situation with patternism.
I think all the questions you asked in the post are legitimate. But the fact that they can be asked doesn’t seem like much of a critique of the basic idea. (At least, of the version of patternism that I have in my head.)
These questions don’t make me at all tempted to go back to saying that I am this particular collection of atoms, or something like that. But I am happy to admit that patternism is an incomplete explanation of personal identity.
The fact that personal identity remains not totally solved should not be too controversial of an idea around here (see for example #s 10 and 12 in Wei Dai’s list of open problems in rationality).
Which I think is a pretty heavy blow for patternism.
I was with you right up until this conclusion. Seems like a non-sequitur.
Perhaps you are assuming that there has to be a single, objective, context-independent way to say whether two people are the same. Or exactly how different they are. And since patternism doesn’t obviously admit that, you conclude it can’t be right.
But to me, it seems like the common pattern for just about everything in life is that categories are blurry. We start with a naive, folk view that assumes things are crisp and clear, and as we learn more we realize that categories are not platonic, but represent clusters in thingspace.
If a view fits this common pattern, that seems to me like a point in its favor, not a point against. In other words, I’m a bit skeptical of any philosophical account that seems too platonic. Unless you’re dealing with very simple mathematical structures, there are almost always rough edges. And philosophical views should be realistic about this.
Thanks! I’ve never used Lisp, but all the parentheses strike me as unappealing. Maybe I would like it if I tried it though.
It’s another take on “Haskell with a Lisp syntax”.
Would you actually prefer Haskell with Lisp syntax? Or is that not the point, and you just wanted a project that would teach you various things?
Kelly should be applied to one’s total wealth, including the value of future income (see: Lifecycle Investing). Taking future income into account, my Tesla position is a smaller share of the total. Additionally, I want to target something like 2x leverage (Lifecycle Investing, again), so 40% of my net worth is only 20% of what I’d want to allocate to the market.
That said, 40% might still be too much. I haven’t rebalanced after the recent run-up, and I have a pending to-do to calculate my estimate of the expected returns and variance, and then adjust accordingly. I’m not sure which way that will come out.
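To make the allocation arithmetic above concrete, here’s a sketch with hypothetical placeholder numbers (none of these are actual portfolio figures):

```python
# Illustrative sketch of the Lifecycle Investing-style allocation
# arithmetic; all dollar amounts are made-up placeholders.
net_worth = 1_000_000           # liquid net worth (assumed)
position = 0.40 * net_worth     # the 40% single-stock position

# With a ~2x leverage target, desired total market exposure is twice
# net worth, so the position is only 20% of the target allocation.
target_exposure = 2.0 * net_worth
print(position / target_exposure)   # 0.2

# Counting the present value of future income (assumed here) shrinks
# the position's share of total wealth further still.
future_income_pv = 600_000
share_of_total_wealth = position / (net_worth + future_income_pv)
print(share_of_total_wealth)        # 0.25, down from 0.40
```

The point is just that the same dollar position looks much smaller once the denominator includes future income and the leverage target.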
Is BYD able to achieve cheaper cost per kWh for their battery packs than Tesla?
Or do you expect Tesla to not be willing to sacrifice margins in the future in order to produce at high volume (contrary to their stated plans)?
See also my Arbital claim from Dec, 2016, and my FB post from April, 2019.
They just barely became profitable recently. I don’t think P/E is super informative right now.
Like Amazon, they reinvest basically all of their incoming cash in growth, so they don’t end up with much in the way of profit. (Historically they’ve invested more than 100% of what they’ve taken in, relying on selling equity. Recently it’s started to be slightly less.)
So, I think you want to look at revenue, and project revenue growth and future gross margin to get the equivalent of a P/E ratio.
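A back-of-the-envelope version of that calculation might look like the following. Every number here is a made-up placeholder for illustration, not an actual estimate for Tesla:

```python
# Hypothetical revenue-based valuation sketch; all inputs are assumed
# placeholder values, not real estimates.
market_cap = 150e9          # market cap figure from the parent comment
revenue = 30e9              # assumed trailing annual revenue
growth = 0.50               # assumed annual revenue growth rate
years = 5                   # assumed projection horizon
future_margin = 0.15        # assumed mature net margin

future_revenue = revenue * (1 + growth) ** years
implied_earnings = future_revenue * future_margin
# The "P/E-equivalent": today's price over projected future earnings.
pe_equivalent = market_cap / implied_earnings
print(round(pe_equivalent, 1))   # 4.4
```

The interesting question is then whether that projected multiple looks cheap or rich compared to a normal car company, which is very sensitive to the growth and margin assumptions.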
I assume that all the major manufacturers have electric cars for sale by 2025, and they’re much more able to ramp up production than Tesla is because of their existing factories.
It’s not clear to me that it’s easy to retool a factory from making ICE cars to making electric (w/ 2000 vs 20 moving parts in their drivetrains, respectively). Perhaps it’s better than starting with nothing, but it’s still going to be a huge cost.
Waymo (ie. Google) has a significant lead on Tesla, and would probably sell to all the other car manufacturers. Plus the last few years have shown that autonomous driving is very difficult to get to the truly valuable level.
I agree that Tesla does not have a clear lead over Waymo (and others). My rough impression (having not looked into it in detail recently) is that Waymo and Cruise are able to achieve a higher level of autonomy in more narrowly scoped areas, whereas Tesla achieves lower levels of autonomy, but across its whole fleet worldwide, w/ no geographic limitations.
However, one advantage Tesla has is that it’s already booking revenue from selling its full self-driving package. When people choose the 7k full self-driving option [edit: announced today that it’ll be 8k starting July 1st], that’s cash that hits Tesla’s accounts right away. And then they recognize the revenue over time, as they ship more pieces of the package. One might also expect the take rate of the package to increase over time, as “full self-driving” becomes less of a promise and more of an actuality.
And every car they sell (even those w/o the paid full self-driving option) sends data home to train their neural nets.
So, in two ways, Tesla has a continuous path from here to full autonomy: 1) financially they’re able to incrementally profit more and more (w/ software-like margins) off of progress towards full autonomy, and 2) they’re able to ship incremental features to their fleet, and iterate w/ huge (and by far market-leading) amounts of real-world data.
So there don’t have to be any discontinuous leaps in progress for Tesla to get to full autonomy. They just have to keep hill climbing (or descending the gradient, if you prefer).
Their margins are currently riding on not having significant competition, which is changing quickly and the rate of change is increasing. Electric cars are currently a luxury choice, thus having a high margin. But once you are trying to sell to the entire US market, prices are going to have to drop, and margins will drop with them.
I’m skeptical that the competition will be too much of a challenge. It’s the classic Innovator’s Dilemma, as Liron mentioned. Tesla’s batteries have the lowest cost per kWh of any manufacturer, because it’s been one of their core competencies and they’ve focused on it obsessively. Other manufacturers will have to dump huge amounts of money into R&D and reworking their supply chains and factories to become competitive. All while losing money in the meantime because their costs are so much higher.
And other manufacturers (at least the US ones), are not in great financial positions to make that kind of investment right now.
It’s hard to imagine other manufacturers truly competing with Tesla in the near term, unless they go all in w/ a bet-the-company approach. Presumably some will, once it’s clear that that’s their only hope. But I wouldn’t be too surprised if many of those end up as the Blackberry or Nokia to Tesla’s iPhone.
(One additional Innovator’s Dilemma type factor preventing incumbents from switching to the new approach, is that, at least in the US, manufacturers have relationships w/ dealerships that legally require them to only sell through those dealerships. And dealerships have traditionally made most of their money through service. But electric cars need less service, due to having fewer moving parts. So dealers will be less inclined to sell the new EVs and will be a source of friction for manufacturers wanting to switch over from making ICE cars to making EVs.)
While I do think that Tesla can sell more than they currently are, the stock price already has that very factored in. Their market cap is $150B right now, which, assuming an insane $10k profit per car and a normal car manufacturer PE ratio of 10, shows sales of 1.5M cars/year, a 10x ramp up in car sales, again with an insane profit margin.
While I agree that big gains are priced in, do note that: 1) Tesla has consistently grown revenue at 50% per year since 2013 (first full year of Model S sales), and 2) The autonomy features they’re already selling are a big boost to margins and will become a bigger boost over time (as they recognize more of the sales as revenue, and as the take rate increases).
So, in the medium term, one might expect Tesla to have margins somewhere in between those of a traditional auto company and those of a software company. (In the long term, their margins will depend on whether they capture a large share of the transportation-as-a-service market, and if so, whether the winners of that market end up with commodity-like margins or monopoly-like margins. My guess is the latter, but I’m not super confident about how that will play out.)