So your intent here is to diagnose the conceptual confusion that many people have with respect to infinity, yes? And your thesis is that people are confused about infinity because they think it has a unique referent, while in fact positive and negative infinity are different?
I think you are on to something, but it’s a little more complicated, and that’s what gets people confused. The problem is that there are in fact a number of different concepts we use the term infinity to describe, which is why it’s so confusing (and I bet there are more).
1. Virtual points that are above or below all other values in an ordered ring (or its positive component), which we use as shorthand to write limits and reason about how they behave.
2. The background idea of the infinite as meaning something that is beyond all finite values (hence why a point at infinity is infinite).
3. The cardinality of sets that can be put in bijection with a proper subset of themselves, i.e., infinite sets. Even here there is an ambiguity between the sets with a given cardinality and the cardinal itself.
4. The notion of absolute mathematical infinity. If this concept makes sense it does have a single referent, which is taken to be ‘larger’ (usually in the sense of cardinality) than any possible cardinal, i.e., the height of the true hierarchy of sets.
5. The metaphorical or theological notion of infinity as a way of describing something beyond human comprehension and/or without limits.
The fact that some of these notions do uniquely refer while others don’t is a part of the problem.
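To make notion 1 concrete with an example of my own (not part of the taxonomy above): IEEE 754 floating point implements exactly such virtual points, with two distinct infinities ordered above and below every finite value:

```python
import math

# IEEE 754 floats provide two distinct "points at infinity":
# one above all finite values, one below them.
pos_inf = float("inf")
neg_inf = float("-inf")

assert pos_inf > 1e308      # above every representable finite value
assert neg_inf < -1e308     # below every representable finite value
assert pos_inf != neg_inf   # two distinct referents, not one
assert math.isinf(pos_inf) and math.isinf(neg_inf)
```

Note this has nothing to do with notion 3: `float("inf")` is not a cardinal like aleph-null; it is just a formal point ordered above all finite floats, used to make limit-style reasoning mechanical.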
Stimulants are an excellent short-term solution. If you absolutely need to get work done tonight and can’t sleep, amphetamine (e.g., Adderall) is a great option. Indeed, there are a number of studies/experiments (including those the Air Force relies on in giving pilots amphetamines) backing up the fact that it improves the ability to get tasks done while sleep deprived.
Of course, if you are having long term sleep problems it will likely increase those problems.
There is a lot of philosophical work on this issue, some of which recommends taking conditional probability as the fundamental unit (in which case Bayes’ theorem only applies for non-extremal values). For instance, see this paper
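For concreteness (my gloss, not drawn from the linked paper): the usual ratio definition, and hence Bayes’ theorem, breaks down at the extremes, which is exactly what the primitive-conditional-probability approach is designed to handle:

```
% Ratio definition: only defined when P(B) > 0
P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad P(B) > 0

% Bayes' theorem inherits the same restriction:
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}, \qquad P(B) > 0

% Taking P(\cdot \mid \cdot) as primitive (e.g., Popper functions),
% P(A \mid B) can be well-defined even when P(B) = 0.
```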
Computability is just \Delta^0_1 definability. There are plenty of other notions of definability you could try to cash this paradox out in terms of. Why pick \Delta^0_1 definability?
If the argument worked for any particular definability notion (e.g., arithmetic definability) it would be a problem. Thus, the solution needs to explain why, for any concrete notion of definable set, the argument doesn’t go through.
But that’s not what the puzzle is about. There is nothing about computability in it. It is supposed to be a paradox along the lines of Russell’s set of all sets that don’t contain themselves.
The response about formalizing exactly what counts as a set defined by an English sentence is exactly correct.
Yeah, enumerable means something different than computably enumerable.
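To illustrate the computable side (a toy sketch of mine, not from the thread): computably enumerable means some algorithm actually lists the members, typically by dovetailing infinitely many searches, whereas a set can be enumerable (countable) in the mathematical sense with no such algorithm existing at all, e.g., the complement of the halting set.

```python
from itertools import islice

def dovetail():
    """Sweep all (search_index, step_budget) pairs diagonally.

    This is the standard trick behind computable enumerability: rather
    than running search 0 forever (and starving the others), run every
    search with a bounded step budget, revisiting each with ever larger
    budgets, so any search that eventually succeeds is reached.
    """
    total = 0
    while True:
        for i in range(total + 1):
            yield (i, total - i)
        total += 1

# The first few pairs visited by the diagonal sweep:
print(list(islice(dovetail(), 6)))
# → [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```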
This is just the standard Sleeping Beauty paradox, and I’d suggest that the issue isn’t unique to FNC.
However, you are a bit quick in concluding it is time-inconsistent, as it’s not altogether clear that one is truly referring to the same event before and after the observation. The hint here is that in the standard Sleeping Beauty paradox the supposed update involves only information you were already certain you would get.
I’d argue that what’s actually going on is that you are evaluating slightly different questions in the two cases.
Don’t. At least outside of Silicon Valley, where oversubscription may actually be a problem. It’s a good intention, but it will inevitably make people worry they aren’t welcome or aren’t the right sort of people. Instead, describe what one does or what one talks about in a way that will appeal to the kind of people who would enjoy coming.
Given that you just wrote a whole post to say hi and share your background with everyone, I’m pretty confident you’ll fit right in and won’t have any problems being too shy. Writing a post like this rather than just commenting is such a Less Wrong kind of thing to do, so I think you’ll be right at home.
Searle can be any X?? WTF? That’s a bit confusingly written.
The intuition Searle is pumping is that since he, as a component of the total system, doesn’t understand Chinese, it seems counterintuitive to conclude that the whole system understands Chinese. When Searle says he is the system, he is pointing to the fact that he is doing all the actual interpretation of instructions, and it seems weird to think that the whole system has some extra experiences that let it understand Chinese even though he does not. When Searle uses the word understand, he does not mean demonstrating the appropriate input-output behavior; he is presuming the system has that behavior and asking about the system’s experiences.
Searle’s view, from his philosophy of language, is that our understanding and meaning are grounded in our experiences, and what makes a person count as understanding (as opposed to merely dumbly parroting) Chinese is that they have certain kinds of experiences while manipulating the words. When Searle asserts the room doesn’t understand Chinese, he is asserting that it doesn’t have the requisite experiences (because it’s not having any experiences) that someone would need to have to count as understanding Chinese.
Look, I’ve listened to Searle explain this himself multiple times during the two years of graduate seminars on philosophy of mind I took with him, and I have discussed this very argument with him at some length. I’m sorry, but you are interpreting him incorrectly.
I know I’m not making the confusion you suggest because I’ve personally talked with him at some length about his argument.
I essentially agree with you that science can’t bridge the is-ought gap (see caveats), but it’s a good deal more complicated than the arguments you give here allow for (they are a good intro, but I felt it worth pointing out the complexities).
When someone claims to have bridged the is-ought gap they aren’t usually claiming to have analytically identified (i.e., identified as a matter of definition) ought statements with some is statements. That’s a crazily high bar, and modern philosophers (and Sam Harris was trained as a philosopher) tend to feel true analytic identities are rare but are not the only kind of necessary truths. For instance, “water is H2O” is widely regarded as a necessary truth that isn’t analytic (do a search if you want an explanation), and there are any number of other philosophical arguments that are seen as establishing necessary truths which don’t amount to the definitional relationship you demand.
I think the standard Harris is using is much weaker even than that.
You insist that to be an ought it must be motivating for the subject. This is a matter of some debate. Some moral realists would endorse this, while others would insist that it need only motivate certain kinds of agents who aren’t too screwed up in some way. But I tend to agree with your conclusion; I’d just suggest it be qualified by saying we’re presuming the standard sense of moral realism here.
One has to be really careful about what you mean by ‘science’ here. One way people have snuck around the is-ought gap before is by using terms like ‘cruel’, which are kinda ‘is’ facts that bake in an ought (to be cruel requires that you immorally inflict suffering, etc.).
It’s not that Harris is purely embedded in some kind of dialectical tradition. He was trained as an analytic philosopher, and analytic philosophers invented the is-ought gap and are no strangers to the former mode of argumentation. It’s more that Carroll is a physicist and doesn’t know the terminology that would let him pin Harris down in terms Harris would understand and keep him from squirming off the point.
However, I’m pretty sure (based on my interaction with Harris, emailing him over what sounded like a similarly wrongheaded view in the philosophy of mind) that Harris would admit he hasn’t bridged Hume’s is-ought gap as philosophers understand it, but would instead explain that he means to address the general public’s sense that science has no moral insight to offer.
In that sense I think he is right. Most people don’t realize how much science can inform our moral discussions...he’s just being hyperbolic to sell it.
I agree with your general thrust, except for your statement that “you longtermists can simply forgo your own pleasure wireheading and instead work very hard on the whole growth and reproduction agenda”: if we are able to wirehead in an effective manner, it might be morally obligatory to force them into wireheading to maximize utility.
Also, your concern about some kind of disaster caused by wireheading addiction and resulting deaths and damage is pretty absurd.
Yes, people are more likely to do drugs when they are more available, but even if the government can’t keep the devices that enable wireheading off the legal market, it will still require greater effort to put together a wireheading setup than it currently does to drive to the right part of the nearest city (discoverable via Google) and purchase some heroin. Even if it did become very easy to access, it’s still not true that most people given the option to shoot up heroin do so, and the biggest factor deterring them is the perceived danger or harm. If wireheading is more addictive/harmful, that will discourage use.
Moreover, for wireheading to pose a greater danger than just going to buy heroin, it would have to give greater control over brain stimulation (i.e., create more pleasure, etc.), and the greater our control over brain stimulation, the greater the chance we can exercise it in a way that doesn’t create damage.
Indeed, any non-chemical means of brain stimulation is almost certain to be crazily safe, because once monitoring equipment detects a problem you can simply shut off the intervention, without the concern of long half-life drugs remaining in the system and continuing the effect.
You make a lot of claims here that seem unsupported, based on nothing but vague analogy with existing primitive means of altering our brain chemistry. For instance, a key claim that most of your consequences seem to depend on is this: “It is great to be in a good working mood, where you are in the flow and every task is easy, but if one feels “too good”, one will be able only to perform “trainspotting”, that is mindless staring at objects.”
Why should this be true at all? The reason heroin abusers aren’t very productive (and, imo, heroin isn’t the most pleasurable existing drug) is the effect opiates have as depressants, making users nod off, etc. The more control we achieve over brain stimulation, the less likely wireheading is to have the kinds of side effects that limit functioning. Now, one might make a more subtle argument that the ability of even a directly stimulated brain to feel pleasure will be limited, and thus if we directly stimulate too much pleasure we will no longer have the appropriate rewards to incentivize work; but it seems equally plausible that we will be able to separate pleasure from motivation/effort and actually enhance our inclination to work while instilling great pleasure.
Skimming the paper, I’m not at all impressed. In particular, they make frequent and incautious use of all sorts of approximations that are only valid up to a point under certain assumptions, but make no attempt to bound the errors introduced or justify the assumptions.
This is particularly dangerous in the context of trying to demonstrate a failure of the 2nd law of thermodynamics, as the very thought experiments that might be useful will generally break those heuristics and approximations. Worse, the 2nd law is only a statistical regularity, not a true exceptionless one, so what one actually needs to show is a failure of the statistical claim, not merely an improbable exception.
Even worse, this piece seems to use a suspicious mixture of quantum and classical notions, e.g., using classical notions to define a closed system and then analyzing it as a quantum system.
Not everyone believes that everything is commensurable, and people often wish to be able to talk about these issues without implicitly presuming that fact.
Moreover, ‘values’ suggests something that is desirable because it is a moral good. A priority can be something I just happen to selfishly want. For instance, I might hold diminishing suffering as a value, yet my highest current priority might be torturing someone to death because they killed a loved one of mine (having that priority is a moral failing on my part, but that doesn’t make it impossible).
First, as a relatively in-shape person who walks a ton (no car, living in the Midwest), I can attest that I often wish I had a golf cart/scooter option. They don’t need to be a replacement for walking (though it’s good that they can be); they might also appeal to those of us who like to walk a lot but need a replacement for a car when it gets really hot or we need to carry groceries (motorcycle-style scooters require licenses and can’t always be driven on campuses or in parks). It would be great if these became less socially disapproved of for the non-disabled.
Second, aren’t there stable, fast scooters with decent torque and larger battery packs? Why do you think the crappy scooter will become super popular? Is it that much cheaper? Or are you just saying even a crappy scooter provides these advantages?
Except if you actually go and try to do the work, people’s pre-theoretic understanding of rationality doesn’t correspond to a single precise concept.
Once you step into Newcomb-type problems, it’s no longer clear how decision theory is supposed to correspond to the world. You might be tempted to say that decision theory tells you the best way to act... but it no longer does that, since it’s not that the two-boxer should have picked one box. The two-boxer was incapable of so picking, and what EDT is telling you is something more like: you should have been the sort of being who would have been a one-boxer, not that *you* should have been a one-boxer.
Different people will disagree over whether their pre-theoretic notion of rationality is one in which it is correct to say it is rational to be a one-boxer or a two-boxer. A classic example of working with an imprecisely defined concept.
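To make the tension concrete (a toy calculation with invented numbers, not anything from the post): with a predictor of accuracy p, EDT’s conditional expectations favor one-boxing whenever p is high enough, even though for any fixed prediction two-boxing dominates:

```python
def newcomb_evs(accuracy, big=1_000_000, small=1_000):
    """Toy EDT expected values for Newcomb's problem.

    accuracy = P(predictor is right).  EDT conditions on the act:
    one-boxing makes it likely the opaque box was filled with `big`;
    two-boxing makes it likely the opaque box was left empty, leaving
    only the transparent box's `small`.
    """
    ev_one_box = accuracy * big
    ev_two_box = (1 - accuracy) * big + small
    return ev_one_box, ev_two_box

one, two = newcomb_evs(0.99)
print(one > two)  # a 99%-accurate predictor makes one-boxing the EDT pick
```

The two-boxer’s complaint, of course, is that these conditional expectations evaluate the agent’s type rather than any choice causally available at the moment of action.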
First, let me say I 100% agree with the idea that there is a problem in the rationality community of viewing rationality as something like momentum or gold (I named my blog rejectingrationality after this phenomenon and tried to deal with it in my first post).
However, I’m not totally sure everything you say falls under that concept. In particular, I’d say that rationality realism is something like the belief that there is a fact of the matter about how best to form beliefs or take actions in response to a particular set of experiences, and that there are many facts about this (going far beyond ‘don’t be Dutch booked’), with the frequent additional belief that what is rational to do in response to various kinds of experiences can be inferred by a priori considerations, e.g., thinking about all the ways that rule X might lead you wrong in certain possible situations, so X can’t be rational.
When I’ve raised this issue in the past, the response I’ve gotten from both Yudkowsky and Hanson is: “But of course we can try to be less wrong,” i.e., have fewer false beliefs. And of course that is true, but that’s a very different notion than the notion of rationality used by rationality realists, and it misses the way that much of the rationality community’s talk about rationality isn’t about literally being less wrong but about classifying rules for reaching beliefs into rational and irrational even when they don’t disagree in the actual world.
In particular, if all I’m doing is analyzing how to be less wrong, I can’t criticize people who dogmatically believe things that happen to be true. After all, if God does exist, then dogmatically believing he does makes the people who do so less wrong. Similarly, the various critiques of human psychological dispositions as leading us to make wrong choices in some kinds of cases aren’t sufficient if those cases are rare and cases where those dispositions yield better results are common. However, rationality realists suggest that there is some fact of the matter which makes these belief-forming strategies irrational and thus appropriate to eschew and criticize. But, ultimately, aside from merely avoiding getting Dutch booked, no rule for belief forming can be assured to be less wrong than another in all possible worlds.
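The one exception mentioned above can be made concrete (a minimal sketch with invented stakes): an agent whose credences in A and not-A sum to more than 1 will buy both bets at prices that guarantee the bookie a profit whatever happens:

```python
def bookie_profit(credence_a, credence_not_a, stake=1.0):
    """Guaranteed profit against incoherent credences on A and not-A.

    The agent pays credence * stake for a bet returning `stake` if it
    wins.  Exactly one of A, not-A occurs, so the bookie pays out
    `stake` once but collects both purchase prices up front.
    """
    collected = (credence_a + credence_not_a) * stake
    paid_out = stake
    return collected - paid_out

# Credences summing to 1.2 hand the bookie 0.2 per unit stake,
# regardless of whether A turns out true or false.
print(bookie_profit(0.6, 0.6))
```

This is the sense in which Dutch-book coherence is world-independent: the loss follows from the credences alone, whereas every other belief-forming rule can only be judged against how the world actually turns out.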