I’m curious about the 3rd argument. Why do you think it is likely that significant players will notice the contribution of Less Wrong?
Is HEPA the highest standard, or are there higher-quality filters that could remove viruses?
Are there any plans to use this for anything else in the future?
The author makes some good points, but:
I think they worry too much about the submissiveness of the relationship. I think it’s a much more common desire than people acknowledge, not just in terms of sexuality, but in terms of a desire for a father figure or a great leader to tell people what to do. So it’s a common desire not just in particular moments, but in how people live their whole lives.
I don’t agree with the point about the relationship being invalid because you don’t have to work for it. I agree that this would be bad in a romantic relationship because it’d hamper your personal development, but I really don’t think that getting a cat instead of a dog will have a large effect. In fact, the safety provided by the unconditionality of a pet’s love may give someone the security to take more risks in their relationships in the real world.
I don’t think dogs need meaning in quite the same way as humans do. They acknowledge that they’ve personified dogs to an extent and try to show that their argument holds anyway. However, I don’t think they’ve entirely avoided the personification trap. I don’t deny that dogs may have instincts, such as hunting or herding, that are unfulfilled in modern life. These are instincts, and we should be concerned about them going unfulfilled, but I don’t think we should equate them with a life purpose.
In any case, dogs as pets probably increase empathy for animals significantly, so we should encourage more pets, not fewer.
I would be keen to run a webinar on Logical Counterfactuals.
Before it was even clear it’d be this big a threat, I wrote: EA Should Wargame Coronavirus. Now I think there’s an even stronger argument for it.
I think that this offers us valuable experience in dealing with one plausible source of existential risk. We don’t want AI Safety people distracted from AI Safety, but at the same time I think the community will learn a lot by embarking on this project together.
I guess I should have been more precise. Imagine a game where we can see all the information, but some characters inside only have access to limited info.
“The outside perspective is outside but it is not observer-independent”
Sure, but it’s not subject to the world-internal observer effects
Maybe this will help. Consider the characters in a video game. We are an external observer as we can see what is happening in the game, but they can’t see us. The point isn’t that we can see ourselves from the outside, but that we can imagine what it would be like to be seen from the outside.
“Our ability to imagine data about us being received by some perspective, depends on placing that perspective relative to our own”—Yes, there are limits to what we can say about the outside perspective as we can’t reference it directly. We can only discuss it by analogy.
Maybe I’ll write a post on this sometime.
“An account would have to be given of how we, as humans embedded in the universe, can speak as any kind of ‘external observer’” - If we construct a model that doesn’t contain us, then we are an external observer of that model. We can then, by analogy, posit the existence of an agent that stands in that relation to us.
Re the analogy: We can’t have an entity that is both internally referenceable and internally unreferenceable. However we can have an external reference to an unreferenceable entity. Okay, maybe the analogy wasn’t quite as direct as I was thinking.
I’d define it as the argument that nothing non-material exists (except possibly logic).
I can follow your argument, but could you clarify what you mean by Sense and Reference?
Un-referenceable objective reality goes rather beyond un-knowable objective reality. The second doesn’t collapse into absurdity, while the first does (note that “un-referenceable objective reality” is a reference!).
We can construct a model where we (the external observers) can reference things that no observer in the model (the internal observers) can reference. Here’s an analogy: we can’t prove an unprovable theorem, but we might be able to prove a theorem unprovable.
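The provability analogy can be made precise via Gödel’s second incompleteness theorem. Here’s a standard sketch (the choice of PA as the internal system and ZFC as the external one is just illustrative):

```latex
% Internally: PA cannot establish its own consistency.
\mathrm{Con}(\mathrm{PA}) \;\Rightarrow\; \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})

% Externally: a stronger system can both prove the statement
% and prove that PA cannot prove it.
\mathrm{ZFC} \vdash \mathrm{Con}(\mathrm{PA}),
\qquad
\mathrm{ZFC} \vdash \big(\,\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})\,\big)
```

The external system sits in exactly the position the external observer does in the model: it can refer to a fact about the internal system that the internal system cannot establish about itself.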
Incompatible with the sort of physicalism that thinks it isn’t meaningful to talk about “seeing” (e.g. consciousness) independent of a physical definition.
I’m not familiar with that strain of thought, but I can posit why some people might find that compelling
“Define an interpretation scheme” is incredibly vague
Yeah, as I said, this is just a sketch. There’s a lot more that would need to be said in order to actually do this.
Thanks, I thought this was useful, especially dividing it into three categories instead of two.
Interesting thought. I wouldn’t go so far as to say it’s only definable within a formal system. But without formal definitions, it’s going to be kind of fuzzy/dependent on exact interpretation.
Physics can’t say what an epistemic component is
Insofar as the epistemic component consists of logic, physics can’t say what that logic is ontologically. On the other hand, it can describe how brain states are linked to physical states, which should be sufficient to explain materialistic-observations.
So as a justification for physics, it’s circular
Circularity is inevitable (I like the arguments in Where Recursive Justification Hits Bottom), so this isn’t as problematic as it seems.
That said, I agree that starting with subjective experience as our initial foundation is, in one sense, more empirical than starting with the external world, since we can derive the external world’s existence from patterns in subjective experience.
If someone said “actually, there’s no such thing as (c), there’s just (a) and (b)”, then that’s going to be hard to argue for, epistemically/normatively, since there is a denial of the existence of epistemology.
Physics can explain the epistemic component in your brain—it just can’t explain the experience of believing or cognition in general.
I am not really importantly distinguishing qualia-observations from “the data my cognitive process is trying to explain” here. It seems like even an account that somehow doesn’t believe in qualia still needs to have data that it explains, hence running into similar issues.
The data to be explained are the experiences, say of seeing red or feeling pain. If you take that data to be the red brain process, that can be explained purely materialistically. The red brain process only needs a materialistic observer, i.e. some kind of central processing unit; what’s wrong with this? It’s only qualia that need the observer to have a non-materialistic component.
Happy to see someone else defending non-materialism since I see it as underrated. Some thoughts:
Nah, it can’t account for what an “observation” is, so it can’t really explain observations.
This is really the heart of the issue. Is an observation qualia or some purely material process in our brain?
I should adopt the explanation that best explains my observation
Seems like a distraction? If the observations are materialist, then materialism can explain the materialist-myness; if they are qualiatic, then we need qualia to define the qualiatic-myness. Merely knowing that qualia have a property of myness doesn’t tell us which type. And it would seem unusual to, say, know that myness is qualiatic without first knowing that observations are qualiatic, since we can’t directly experience our myness, only our observations.
It has to do some ontological reshuffling around what “observations” are that, I think, undermines the case for believing in physics in the first place, which is that it explains my observations
Why does it undermine physics?
I think it makes more sense to think of mental things as existing subjectively (i.e. if they belong to you) and physical things as existing objectively. I definitely think that dualism is making a mistake in thinking of objectively-existing mental things
This relates quite closely to my post on Relabellings vs. External References. If mental things are just a relabelling of materialism, they don’t actually add anything by being present in the model. In order to actually change the system, they need to refer to external entities, in which case mental things aren’t really subjective any more.
Why’d you pick Kermit the Frog?