On the other hand, given that humans (especially on LW) do analyze things on several meta levels, it seems possible to program an AI to do the same, and in fact many discussions of AI assume this (e.g. discussing whether the AI will suspect it's trapped in some simulation). It's an interesting question how intelligent an AI can get without having the need (or ability) to go meta.
Lightwave
Given that a parliament of humans (where they vote on values) is not accepted as a (final) solution to the interpersonal value / well-being comparison problem, why would a parliament be acceptable for intrapersonal comparisons?
It seems like people sort of turn into utility monsters—if people around you have a strong opinion on a certain topic, you'd better have a strong opinion too, or else it won't carry as much "force".
What about “decided on”?
With regards to the singularity, and given that we haven't solved 'morality' yet, one might just value "human well-being" or "human flourishing" without referring to a long-term self concept. I.e. you just might care about a future 'you', even if that person is actually a different person. As a side effect you might also equally care about everyone else in the future too.
Right, but I want to use a closer-to-real-life situation or example that reduces to the Wason selection task (and that people fail at), and use that as the demonstration, so that people can see themselves fail in a real-life situation rather than in a logical puzzle. People already realize they might not be very good at generalized logic/math; I'm trying to demonstrate that the general logic applies to real life as well.
Well, the thing is that people actually get this right in real life (e.g. with the rule 'to drink you must be over 18'). I need something that occurs in real life and that people fail at.
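The underlying logic of the Wason task can be sketched in code. A minimal sketch (the card labels and rule phrasings here are just the standard illustrative versions, not anything specific to the discussion): a rule "if P then Q" is only falsified by a case where P holds and Q fails, so the only cards worth turning over are those whose hidden side could complete such a counterexample.

```python
# Wason selection task: given the rule "if P then Q", which cards
# must be turned over to test it? A rule of this form is falsified
# only by a card with P true and Q false, so a card needs checking
# iff its hidden side could complete that counterexample pattern.

def cards_to_check(cards):
    """cards: list of (label, p, q), where p and q are True/False
    if that side is visible, or None if that side is hidden."""
    must_check = []
    for label, p, q in cards:
        if (p is True and q is None) or (q is False and p is None):
            must_check.append(label)
    return must_check

# Abstract version: "if a card has a vowel, it has an even number."
abstract = [("A", True, None), ("K", False, None),
            ("2", None, True), ("7", None, False)]
print(cards_to_check(abstract))  # ['A', '7']

# Concrete version: "if drinking alcohol, the person is over 18."
bar = [("beer", True, None), ("coke", False, None),
       ("age 25", None, True), ("age 16", None, False)]
print(cards_to_check(bar))  # ['beer', 'age 16']
```

Both versions are the same abstract problem, which is the point of the comment above: most people pick the right cards in the bar framing but not in the vowel/number framing.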
I'm planning on doing a presentation on cognitive biases and/or behavioral economics (Kahneman et al) in front of a group of university students (20-30 people). I want to start with a short experiment / demonstration (or two) that will demonstrate to the students that they are, in fact, subject to some bias or failure in decision making. I'm looking for suggestions on what experiment I can perform within 30 minutes (it can be longer if it's an interesting and engaging task, e.g. a game); the important thing is that the thing being demonstrated has to be relevant to most people's everyday lives. Any ideas?
I also want to mention that I can get assistants for the experiment if needed.
Edit: Has anyone at CFAR or at rationality minicamps done something similar? Who can I contact to inquire about this?
I don’t think this deserves its own top level discussion post and I suspect most of the downvotes are for this reason. Maybe use the open thread next time?
Some of them were general moral principles, but some of them were specific statements.
Trolley problems are also very specific, but people have great trouble with them. Maybe I should have said “non-familiar” rather than just “general”.
One interpretation is that many people don’t have strongly held or stable opinions on some moral questions and/or don’t care. Doesn’t sound very shocking to me.
Maybe morality is extremely context sensitive in many cases, thus polls on general moral questions are not all that useful.
When reading old LW posts and comments and seeing I’ve upvoted some comment, I find myself thinking “Wait, why have I upvoted this comment?”
So it still doesn’t show that Red is know-how in itself.
Talking about “red in itself” is a bit like talking about “the-number-1 in itself”. What does it mean? We can talk about the “redness sensation” that a person experiences, or “the experience of red”. From an anatomical point of view, experiencing red(ness) is a process that occurs in the brain. When you’re looking at something red (or imagining redness), certain neural pathways are constantly firing. No brain activity → no redness experience.
Let's compare this to factual knowledge. How are facts stored in the brain? From what we understand about the brain, they're likely encoded in neuronal/synaptic connections. You could in principle extract them by analyzing the brain. And where is the (knowledge of) red(ness) stored in the brain? Well, there is no 'redness' stored in the brain; what is stored are (again in synaptic connections) instructions that activate the color pathways of the visual cortex that produce the experience of red. See how the 'knowledge of color' is not quite like factual knowledge, but rather looks like an ability?
Well, you need some input to the brain, even if it's in a vat. Something has to either stimulate the retina or stimulate the relevant neurons further down the line, at least during some learning phase.
Or I guess you could assemble a brain-in-a-vat with memories built-in (e.g. the memory of seeing red). Thus the brain will have the architecture (and therefore the ability) to imagine red.
You can also know all relevant facts about physics but still not “know” how to ride a bicycle. “Knowing” what red looks like (or being able to imagine redness) requires your brain to have the ability to produce a certain neural pattern, i.e. execute a certain neural “program”. You can’t learn how to imagine red the same way you learn facts like 2+2=4 for the same reason you can’t learn how to ride a bike by learning physics. It’s a different type of “knowledge”, not sure if we should even call it that.
Edit (further explanation): To learn how to ride a bike you need to practice doing it, which implements a “neural program” that allows you to do it (via e.g. “muscle memory” and whatnot). Same for producing a redness sensation (imagining red), a.k.a “knowing what red looks like”.
that which can be destroyed by the utility function of humanity should be
Almost true by definition?
Doesn't circuit design (and therefore computer processor design) require fairly large computational resources (for mathematical modelling)? Thus faster hardware now can be used to create even faster hardware... faster.
I’m starting to wonder whether it might be useful to have a ‘Meta’ section, which is separate from Discussion (and Main) for meta threads of all kinds.
Shouldn’t you be applying this logic to your own motivations to be a rationalist as well? “Oh, so you’ve found this blog on the internet and now you know the real truth? Now you can think better than other people?” You can see how it can look from the outside. What would the implication for yourself be?