An aspiring rationalist who has been involved in the Columbus Rationality community since January 2016.
J Thomas Moros
A study by Alcor trained C. elegans worms to react to the smell of a chemical. The researchers then demonstrated that the worms retained this memory even after being frozen and revived. Were it possible to upload a worm, the exact same test would show whether you had successfully uploaded a worm with that memory or one without it.
Study here: Persistence of Long-Term Memory in Vitrified and Revived Caenorhabditis elegans
A friend and I are investigating why the cryonics movement hasn’t been more successful and looking at what can be done to improve the situation. We have some ideas and have begun reaching out to people in the cryonics community. If you are interested in helping, message me. Right now it is mostly researching things about the existing cryonics organizations and coming up with ideas. In the future, there could be lots of other ways to contribute.
Even after reading your post, I don’t think I’m any closer to comprehending the illusionist view of reality. One of my good and most respected friends is an illusionist. I’d really like to understand his model of consciousness.
Illusionists often seem to be arguing against strawmen to me. (Notwithstanding the fact that some philosophers actually do argue for such “strawman” positions.) Dennett’s argument against “mental paint” seems to be an example of this. Of course, I don’t think there is something in my mental space with the property of redness. Of course “according to the story your brain is telling, there is a stripe with a certain type of property.” I accept that the most likely explanation is that everything about consciousness is the result of computational processes (in the broadest sense that the brain is some kind of neural net doing computation, not in the sense that it is anything actually like the von Neumann architecture computer that I am using to write this comment). For me, that in no way removes the hard problem of consciousness; it only sharpens it.
Let me attempt to explain why I am unable to understand what the strong illusionist position is even saying. Right now, I’m looking at the blue sky outside my window. As I fix my eyes on a specific point in the sky and focus my attention on the color, I have an experience of “blueness.” The sky itself doesn’t have the property of phenomenological blueness. It has properties that cause certain wavelengths of light to scatter and other wavelengths to pass through. Certain wavelengths of light are reaching my eyes. That is causing receptors in my eyes to activate, which in turn causes a cascade of neurons to fire across my brain. My brain is doing computation to which I have no mental access and computing that I am currently seeing blue. There is nothing in my brain that has the property of “blue.” The closest thing is something analogous to how a certain pattern of bits in a computer has the “property” of being ASCII for “A.” Yet I experience that computation as the qualia of “blueness.” How can that be? How can any computation of any kind create, or lead to, qualia of any kind?

You can say that it is just a story my brain is telling me that “I am seeing blue.” I must not understand what is being claimed, because I agree with it and yet it doesn’t remove the problem at all. Why does that story have any phenomenology to it? I can make no sense of the claim that it is an illusion. If the claim is just that there is nothing involved but computation, I agree. But the claim seems to be that there are no qualia, that there is no phenomenology, and that my belief in them is like an optical illusion or misremembering something. I may be very confused about all the processes that lead to my experiencing the blue qualia. I may be mistaken about the content and nature of my phenomenological world. None of that in any way removes the fact that I have qualia.
Let me try to sharpen my point by comparing it to other mental computation. I just recalled my mother’s name. I have no mental access to the computation that “looks up” my mother’s name. Instead, I go from seemingly not having ready access to the name to having it. There are no qualia associated with this. If I “say the name in my head,” I can produce an “echo” of the qualia. But I don’t have to do this. I can simply know what her name is and know that I know it. That seems to be consistent with the model of me as a computation: if I were a computation and retrieved some fact from memory, I wouldn’t have direct access to the process by which it was retrieved, but I would suddenly have the information in “cache.” Why isn’t all thought and experience like that? I can imagine an existence where I knew I was currently receiving input from my eyes, which were looking at the sky and perceiving a shade we call blue, without there being any qualia.
For me, the hard problem of consciousness is exactly the question, “How can a physical/computational process give rise to qualia or even the ‘illusion’ of qualia?” If you tell me that life is not a vital force but is instead very complex tiny machines which you cannot yet explain to me, I can accept that because, upon close examination, those are not different kinds of things. They are both material objects obeying physical laws. When we say qualia are instead complex computations that you cannot yet explain to me, I can’t quite accept that because even on close examination, computation and qualia seem to be fundamentally different kinds of things and there seems to be an uncrossable chasm between them.
I sometimes worry that there are genuine differences in people’s phenomenological experiences which are causing us to be unable to comprehend what others are talking about. Similar to how it was discovered that certain people don’t actually have inner monologues or how some people think in words while others think only in pictures.
As someone interested in seeing WBE become a reality, I have also been disappointed by the lack of progress, and I would like to understand the reasons for it better. So I was interested to read this post, but you seem to be conflating two different things: the difficulty of simulating a worm and the difficulty of uploading a worm. There are a few sentences that hint that both are unsolved, but the two should be clearly separated.
Uploading a worm requires being able to read the synaptic weights, thresholds, and possibly other details from an individual worm. Note that it isn’t accurate to say the worm must be alive: it would be sufficient to freeze an individual worm and then spend extensive time and effort reading that information. Nevertheless, I can imagine that might be very difficult to do. According to wormbook.org, C. elegans has on the order of 7,000 synapses. I am not sure we know how to read the weight and threshold of a synapse. This strikes me as a task requiring significant technological development that isn’t in line with existing research programs. That is, most research is not attempting to develop the technology to read specific weights and thresholds, so it would require a significant, well-funded effort focused specifically on it. Given the reported lack of funding, I am not surprised this has not been achieved; nor am I surprised that the funding is lacking.
Simulating a worm should only require an accurate model of the behavior of the worm nervous system and a simulation environment. Given that all C. elegans have the same 302 neurons, this seems like it should be feasible. Furthermore, the learning mechanism of individual neurons, the operation of synapses, etc. should all be things researchers outside of the worm emulation efforts are interested in studying. If I wanted to advance the state of the art, I would focus on making an accurate simulation of a generic worm that was capable of learning, then simulate it in an environment similar to its native environment and try to demonstrate that it eventually learned behavior matching real C. elegans, including learning under the conditions in which real C. elegans learn. That is why I was very disappointed to learn that the “simulations are far from realistic because they are not capable of learning.” It seems to me this is where the research effort should focus, and I would like to hear more about why this is challenging and hasn’t already been done.
I believe that worm uploading is not needed to make significant steps toward showing the feasibility of WBE. The kind of worm simulation I describe would be more than sufficient. At that point, reading the weights and thresholds of an individual worm becomes only an engineering problem that should be solvable given a sufficient investment or level of technological advancement.
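To make the kind of simulation I have in mind concrete, here is a toy sketch. It is not a realistic C. elegans model in any way: the neuron and synapse counts come from the figures above, but the rate-based dynamics, the Hebbian learning rule, the constants, and the choice of “sensory” neurons are all arbitrary assumptions for illustration.

```python
import numpy as np

# Sizes from the discussion above; everything else is an assumption.
N_NEURONS = 302   # C. elegans has the same 302 neurons in every individual
N_SYNAPSES = 7000 # rough synapse count cited above

rng = np.random.default_rng(0)
# Sparse random connectivity with roughly N_SYNAPSES nonzero weights.
mask = rng.random((N_NEURONS, N_NEURONS)) < N_SYNAPSES / N_NEURONS**2
W = rng.normal(scale=0.05, size=(N_NEURONS, N_NEURONS)) * mask

def step(W, x, stimulus, lr=0.01):
    """One timestep: update firing rates, then adapt existing synapses."""
    x_new = np.tanh(W @ x + stimulus)
    # Hebbian rule: strengthen synapses whose pre- and post-synaptic
    # neurons are co-active; the mask keeps the connectivity fixed.
    W = W + lr * np.outer(x_new, x) * mask
    return W, x_new

x = np.zeros(N_NEURONS)
stimulus = np.zeros(N_NEURONS)
stimulus[:10] = 1.0  # pretend ten "sensory" neurons smell a chemical
W0 = W.copy()
for _ in range(50):
    W, x = step(W, x, stimulus)
# After repeated exposure the weights have shifted, i.e. the network
# has "learned" something about the stimulus.
```

A real effort would of course need empirically grounded neuron and synapse models rather than tanh units, but the point is that the generic wiring is shared across worms, so this kind of model requires no individual-worm readout at all.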
This post describes a system of hand signals used for discussion moderation by the Columbus, Ohio rationality community. It has been used successfully for almost 2 years now. Applicability, advantages, disadvantages and variations are described.
I doubt the lack of 6-door cars has much to do with aesthetics. Doors and tight door seals are among the more complex and expensive portions of the car body. Doors also pose challenges for crash safety: each large opening weakens the main body’s structural integrity in an accident. I suspect that the reason there are so few cars with 6 doors is the extra manufacturing cost, which would raise the price. Most purchasers don’t value the convenience of the additional doors enough relative to that added price, so any company producing such a car would find a very small market, which might make it not worth the manufacturer’s while.
It’s unfortunate that we have this mess. But couldn’t this have been avoided by defaulting to minimal access? Per Mozilla (https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies), if a cookie’s domain isn’t set, it defaults to the domain of the site, excluding subdomains. If, instead, this defaulted to the full domain, wouldn’t that resolve the issue? The harm isn’t in allowing people to create cookies that span sites, but in doing so accidentally, correct? The only remaining concern is tracking cookies. For this, a list of TLDs that would be invalid to specify as the domain would cover most cases. Situations like github.io are rare enough that there could simply be some additional DNS property such sites set which makes it invalid to have a cookie at that domain level.
Similarly, the secure and http-only properties ought to default to true.
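As a concrete illustration of these defaults, here is a small sketch using Python’s standard http.cookies module to build Set-Cookie headers. The cookie name and domain are made up for the example:

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c["session"] = "abc123"

# With no attributes set, the header carries no Domain, Secure, or
# HttpOnly flags -- the browser then applies the defaults discussed
# above (host-only scope, sent over plain HTTP, readable from script).
default_header = c.output()

# Opting in to the stricter behavior argued for above:
c["session"]["domain"] = "example.com"
c["session"]["secure"] = True
c["session"]["httponly"] = True
strict_header = c.output()
```

The asymmetry is visible here: the safer scoping and the secure/http-only protections all require explicit opt-in rather than being the default.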
I and some other rationalists have been thinking about cryonics a lot recently and how we might improve the strength of cryonics offerings and the rate of adoption. After some consideration, we came up with a couple suggestions for changes to the survey that we think would be helpful and interesting.
A question along the lines of “What impact do you believe money and attention put towards life extension or other technologies such as cryonics has on the world as a whole?” Answers:
Very positive
Positive
Neutral
Negative
Very Negative
The purpose of this question is to evaluate whether the community feels that resources put toward the benefit of individuals through life extension and cryonics have a positive or negative impact on the world. For example, people who expect to live longer may have more of a long-term orientation, leading them to do more to improve the future.
Add to the question about being signed up for cryonics an option along the lines of “No, I would like to sign up but can’t due to opposition I would face from family or friends”. We hear this is one of the reasons people don’t sign up for cryonics. It would be great to get some numbers on this, and it doesn’t add an extra question, just an extra option for that question.
This is a review of the book Freezing People is (Not) Easy by Bob Nelson. The book recounts his experiences as president of the Cryonics Society of California, during which he cryopreserved a number of early cryonics patients and then attempted (and failed) to maintain their cryopreservation.
Not going to sign up with some random site. If you are the author, post a copy that doesn’t require signup.
I think moving to frontpage might have broken it. I’ve put the link back on.
You should probably clarify that your solution assumes the variant where the god’s head explodes when given an unanswerable question. If I understand correctly, you are also assuming that the god will act to prevent their head from exploding if possible. That doesn’t have to be the case: the god could be suicidal but unable to die in any other way, so when you give them the opportunity to have their head explode, they will take it.
Additionally, I think it would be clearer if you could offer a final English-sentence statement of the complete question that doesn’t involve self-referential variables. The variable formulation is helpful for seeing the structure but confusing in other ways.
I feel strongly that link posts are an important feature that needs to be kept. There will always be significant and interesting content created on non-rationalist or mainstream sites that we will want to be able to link to and discuss on LessWrong. Additionally, while we might hope that all rationalist bloggers would be ok with cross-posting their content to LessWrong, there will likely always be those who don’t want to and yet we may want to include their posts in the discussion here.
Has there been any discussion or thought of modifying the posting of links to support a couple paragraphs of description? I often think that the title alone is not enough to motivate or describe a link. There are also situations where the connection of the link content to rationality may not be immediately obvious and a description here could help clarify the motivation in posting. Additionally, it could be used to point readers to the most valuable portions of sometimes long and meandering content.
I can parse your comment a couple of different ways, so I will discuss multiple interpretations but forgive me if I’ve misunderstood.
If we are talking about 3^^^3 dust specks experienced by that many different people, then it doesn’t change my intuition. My early exposure to the question included such unimaginably large numbers of people. I recognize scope insensitivity may be playing a role here, but I think there is more to it.
If we are talking about myself or some other individual experiencing 3^^^3 dust specks (or 3^^^3 people each experiencing 3^^^3 dust specks), then my intuition considers that a different situation. A single individual experiencing that many dust specks seems to amount to torture. Indeed, it may be worse than 50 years of regular torture because it may consume many more years to experience them all. I don’t think of that as “moral learning” because it doesn’t alter my position on the former case.
If I have to try to explain what is going on here in a systematic framework, I’d say the following. First, splitting up harm among multiple people can be better than applying it all to one person: one person stubbing a toe on two different occasions is marginally worse than two people each stubbing one toe. Second, harms/moral offenses may separate into different classes such that no amount of a lower class can rise to match a higher class; for example, there may be no number of rodent murders that is morally worse than a single human murder. Third, duration of harm can outweigh intensity. Imagine mild electric shocks that are painful but don’t cause injury, and suppose that receiving one shock followed by another doesn’t make the second any more physically painful. A few slightly more intense shocks over a short time may be better than many more mild shocks over a long time. This last principle comes in when weighing 50 years of torture vs. 3^^^3 dust specks experienced by one person, though there it is much harder to make the evaluation.
Those explanations feel a little like confabulations and rationalizations. However, they don’t seem to be any more so than a total utilitarianism or average utilitarianism explanation for some moral intuitions. They do, however, give some intuition why a simple utilitarian approach may not be the “obviously correct” moral framework.
If I failed to address the “aggregation argument,” please clarify what you are referring to.
There is a flaw in your argument. I’m going to try to be very precise here and spell out exactly what I agree with and disagree with in the hope that this leads to more fruitful discussion.
Your conclusions about scenarios 1, 2 and 3 are correct.
You state that Bostrom’s disjunction is missing a fourth case. The way you state (iv) is problematic because you phrase it in terms of a logical conclusion (“the principle of indifference leads us to believe that we are not in a simulation”) which, as I’ll argue below, is incorrect. Your disjunct should properly be stated as something like: (iv) humanity reaches a stage of technological development that enables us to run a large number of ancestral simulations, and we do run a large number of them, but in a way that keeps the number of simulated people well below the number of real people at any given moment. Stated that way, it is clear that Bostrom’s (iii) is meant to include that outcome. Bostrom’s argument is predicated only on the number of ancestral simulations, not on whether they are run in parallel or sequentially, or over how much time. The reason Bostrom includes your (iv) in (iii) is that it doesn’t change the logic of the argument. Let me now explain why.
For the sake of argument, let’s split (iii) into two cases, (iii.a) and (iii.b). Let (iii.a) be all the futures in (iii) not covered by your (iv); for convenience, I’ll refer to this as “parallel,” even though there are cases in (iv) where some simulations could be run in parallel. Then (iii.b) is equivalent to your (iv); for convenience, I’ll refer to this as “serial,” even though, again, it might not be strictly serial. I think we agree that if the future were guaranteed to be (iii.a), then we should bet we are in a simulation.
First, even if you were right about (iii.b), I don’t think it invalidates the argument. Essentially, you have just added another case similar to (ii), and it would still be the case that there are many more simulated people than real people because of (iii.a), so we should bet that we are in a simulation.
Second, if the future is actually (iii.b), we should still bet we are in a simulation, just as with (iii.a). At several points, you appeal to the principle of indifference, but you are vague about how it should be applied. Let me give a framework for thinking about this. What is happening here is that we are reasoning under indexical uncertainty: in each of your three scenarios and in the simulation argument, there is uncertainty about which observer we are. Your statement that by the principle of indifference we should conclude something is actually an application of the SSA, which says we should reason as if we were a randomly chosen observer. In Bostrom’s terms, you are uncertain which observer in your reference class you are. To make sure we are on the same page, let me go through your scenarios using this approach.
Scenario 1: You are not sure if you are in room X or room Y, the set of all people currently in room X and Y is your reference class. You reason as if you could be a randomly selected one so you have a 1000 to 1 chance of being in room X.
Scenario 2: You are told about the many people who have been in room Y in the past. However, they are in your past; you have no uncertainty about your temporal index relative to them, so you do not add them to your reference class, and you reason the same as in scenario 1. Bostrom’s book is weak here in that he doesn’t give very good rules for selecting your reference class. I’m arguing that one of the criteria is that you have to be uncertain whether you could be that person or not. So, for example, you know you are not one of the many people not currently in room X or Y, so you don’t include them in your reference class. Your reference class is the set of people you are unsure of your index relative to.
Scenario 3: This one is trickier to reason correctly about. I think you are wrong when you say that the only relevant information here is diachronic information. You know you are now in room Z, which contains 1 billion people who passed through room Y and 10,000 people who passed through room X. Your reference class is the people in room Z. You don’t have to reason about the temporal information or the fact that at any given moment there was only one person in room Y but 1,000 people in room X; passing through room X or Y is now only a property of the people in room Z. This is equivalent to my telling you that you are blindfolded in a room with 1 billion people wearing red hats and 10,000 people wearing blue hats. Which hat color should you bet you are wearing? Reasoning with the people in room Z as your reference class, you correctly give yourself a 1 billion to 10,000 chance of having passed through room Y.
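The counting behind these scenarios can be made explicit. A quick sketch, using the numbers from the scenarios as I’ve described them:

```python
# Under SSA-style indifference you reason as if you were a randomly
# chosen member of your reference class, so the probability of being
# in a group is just that group's share of the reference class.
def p_member(group_size, reference_class_size):
    return group_size / reference_class_size

# Scenario 1: 1,000 people in room X, 1 person in room Y.
p_room_x = p_member(1000, 1000 + 1)

# Scenario 3: in room Z, 1 billion people passed through room Y,
# 10,000 passed through room X. The per-moment occupancy of X and Y
# never enters the calculation -- only the counts in room Z do.
p_via_y = p_member(1_000_000_000, 1_000_000_000 + 10_000)
```

The same counting applies to the simulation argument once the reference class is chosen correctly: only the total numbers of simulated and real people matter, not when each simulation runs.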
In (iii.b), you are uncertain whether you are in a simulation or in reality. But if you are in a simulation, you are also uncertain where you are chronologically relative to reality. Thus, if a pair of simulations were run in sequence, you would be unsure whether you were in the first or the second. You have both spatial and temporal uncertainty; you aren’t sure what the proper “now” is. Your reference class includes everyone in the historical reality as well as everyone in all the simulations. Given that as your reference class, you should reason that you are in a simulation (assuming many simulations are run). It doesn’t matter that those simulations are run serially, only that many of them are run. Your reference class isn’t limited to the current simulation and the current reality because you aren’t sure where you are chronologically relative to reality.
With regard to SIA vs. SSA: I can’t say that they make any difference to your position, because the problem is that you have chosen the wrong reference class. In the original simulation argument, SIA vs. SSA makes little or no difference because, presumably, the number of people living in historical reality is roughly equal to the number living in any given simulation. SIA only changes the conclusions when one outcome contains many more observers than the other. Here we treat each simulation as a different possible outcome, and so the two assumptions agree.
I agree with your three premises. However, I would recommend using a different term than “humanism”.
Humanism is more than just the broad set of values you described. It is also a specific movement with more specific values. See for example the latest humanist manifesto. I agree with what you described as “humanism” but strongly reject the label humanist because I do not agree with the other baggage that goes with it. If possible, try to come up with a term that directly states the value you are describing. Perhaps something along the lines of “human flourishing as the standard of value”?
A couple typos:
The date you give is “(11/30)”; it should be “(10/30)”
“smedium” should be “medium”
I find Jordan Peterson’s views fascinating and have a rationalist friend whose thinking has recently been greatly influenced by him, so much so that my friend recently went to a church service. My problem with Peterson’s view is that it ignores the on-the-ground reality that many adherents believe their religion to be true in the sense of being a proper map of the territory. This is in direct contradiction to Peterson’s use of religion and truth. I warned my friend that this is what he would find in church. Sure enough, that is what he found, and he will not be returning.
I have completed the survey and upvoted everyone else on this thread