a chat with Towards_Keeperhood on what it takes for sentences/phrases/words to be meaningful
Towards_Keeperhood:
you could define “mother(x,y)” as “x gave birth to y”, and then “gave birth” as some more precise cluster of observations, which eventually need to be able to be identified from visual inputs
Kaarel:
if i should read this as talking about a translation of “x is the mother of y”, then imo this is a bad idea.
in particular, i think there is the following issue with this: saying which observations “x gave birth to y” corresponds to is itself something that intuitively requires appealing to a bunch of other understanding. it’s like: sure, your understanding can be used to create visual anticipations, but it’s not true that any single sentence alone could be translated into visual anticipations — to get a typical visual anticipation, you need to rely on some larger segment of your understanding. a standard example here is “the speed of light in vacuum is 3×10^8 m/s” creating visual anticipations in some experimental setups, but being able to derive those visual anticipations depends on a lot of further facts about how to create a vacuum and properties of mirrors and interferometers and so on (and this is just for one particular setup — if we really mean to make a universally quantified statement, then getting the observation sentences can easily end up requiring basically all of our understanding). and it seems silly to think that all this crazy stuff was already there in what you meant when you said “the speed of light in vacuum is 3×10^8 m/s”. one concrete reason why you don’t want this sentence to just mean some crazy AND over observation sentences or whatever is that you could be wrong about how some interferometer works and then you’d want it to correspond to different observation sentences
this is roughly https://en.wikipedia.org/wiki/Confirmation_holism as a counter to https://en.wikipedia.org/wiki/Verificationism
that said, i think there is also something wrong with some very strong version of holism: it’s not really like our understanding is this unitary thing that only outputs visual anticipations using all the parts together, either — the real correspondence is somewhat more granular than that
TK:
On reflection, I think my “mother” example was pretty sloppy and perhaps confusing. I agree that often quite a lot of our knowledge is needed to ground a statement in anticipations. And yeah actually it doesn’t always ground out in that, e.g. for parsing the meaning of counterfactuals. (See “Mixed Reference: The great reductionist project”.)
K:
i wouldn’t say a sentence is grounded in anticipations with a lot of our knowledge, because that makes it sound like in the above example, “the speed of light is 3×10^8 m/s” is somehow privileged compared to our understanding of mirrors and interferometers even though it’s just all used together to create anticipations; i’d instead maybe just say that a bunch of our knowledge together can create a visual anticipation
TK:
thx. i wanted to reply sth like “a true statement can either be tautological (e.g. math theorems) or empirical, and for it to be an empirical truth there needs to be some entanglement between your belief and reality, and entanglement happens through sensory anticipations. so i feel fine with saying that the sentence ‘the speed of light is 3×10^8 m/s’ still needs to be grounded in sensory anticipations”. but i notice that the way i would use “grounded” here is different from the way I did in my previous comment, so perhaps there are two different concepts that need to be disentangled.
K:
here’s one thing in this vicinity that i’m sympathetic to: we should have as a criterion on our words, concepts, sentences, thoughts, etc. that they play some role in determining our actions; if some mental element is somehow completely disconnected from our lives, then i’d be suspicious of it. (and things can be connected to action via creating visual anticipations, but also without doing that.)
that said, i think it can totally be good to be doing some thinking with no clear prior sense about how it could be connected to action (or prediction) — eg doing some crazy higher math can be good, imagining some crazy fictional worlds can be good, games various crazy artists and artistic communities are playing can be good, even crazy stuff religious groups are up to can be good. also, i think (thought-)actions in these crazy domains can themselves be actions one can reasonably be interested in supporting/determining, so this version of entanglement with action is really a very weak criterion
generally it is useful to be able to “run various crazy programs”, but given this, it seems obvious that not all variables in all useful programs are going to satisfy any such criterion of meaningfulness? like, they can in general just be some arbitrary crazy things (like, imagine some memory bit in my laptop or whatever) playing some arbitrary crazy role in some context, and this is fine
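as a toy sketch of this point (hypothetical code, not anything from the discussion): an intermediate variable in a useful program has no standalone translation into observation sentences, and that’s fine:

```python
# toy illustration (hypothetical code): the intermediate variable `acc` has no
# standalone "meaning" in terms of observations; it just plays a role inside a
# program that is useful as a whole.
def checksum(data: bytes) -> int:
    acc = 17
    for b in data:
        # mid-loop values of `acc` correspond to no observation sentence on their own
        acc = (acc * 31 + b) % 65521
    return acc

print(checksum(b"hello"))
```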
and similarly for language: we can have some words or sentences playing some useful role without satisfying any strict meaningfulness criterion (beyond maybe just having some relation to actions or anticipations which can be of basically arbitrary form)
a different point: in human thinking, the way “2+2=4” is related to visual anticipations is very similar to the way “the speed of light is 3×10^8 m/s” is related to visual anticipations
TK:
Thanks!
I agree that e.g. imagining fictional worlds like HPMoR can be useful.
I think I want to expand my notion of “tautological statements” to include statements like “In the HPMoR universe, X happens”. You can also pick any empirical truth “X” and turn it into a tautological one by saying “In our universe, X”. Though I agree it seems a bit weird.
Basically, mathematics tells you what’s true in all possible worlds, so from mathematics alone you never know in which world you may be in. So if you want to say something that’s true about your world specifically (but not across all possible worlds), you need some observations to pin down what world you’re in.
I think this distinction is what Eliezer means in his highly advanced epistemology sequence when he uses “logical pinpointing” and “physical pinpointing”.
You can also have a combination of the two. (I’d say as soon as some physical pinpointing is involved I’d call it an empirical fact.)
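A toy sketch of the pinpointing distinction (with made-up worlds and numbers, purely illustrative): a tautology is true in every candidate world and so pins down nothing, while an observation eliminates worlds:

```python
# Candidate "possible worlds", differing only in the value of one parameter c
# (made-up numbers, purely illustrative).
worlds = [{"c": 2.0e8}, {"c": 3.0e8}, {"c": 4.0e8}]

def tautology(w):
    return w["c"] == w["c"]  # true in every world: no physical pinpointing

def observation(w):
    return abs(w["c"] - 3.0e8) < 1.0e7  # true only in some worlds

# The tautology eliminates no worlds; the observation pins the world down.
assert [w for w in worlds if tautology(w)] == worlds
print([w for w in worlds if observation(w)])
```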
Commented about that. (I actually changed my model slightly): https://www.lesswrong.com/posts/bTsiPnFndZeqTnWpu/mixed-reference-the-great-reductionist-project?commentId=HuE78qSkZJ9MxBC8p
K:
the imo most important thing in my messages above is the argument against [any criterion of meaningfulness which is like what you’re trying to state] being reasonable
in brief, because it’s just useful to be allowed to have arbitrary “variables” in “one’s mental circuits”
just like there’s no such meaningfulness criterion on a bit in your laptop’s memory
if you want to see from the outside the way the bit is “connected to the world”, one thing you could do is to say that the bit is 0 in worlds which are such-and-such and 1 in worlds which are such-and-such, or, if you have a sense of what the laptop is supposed to be doing, you could say in which worlds the bit “should be 0” and in which worlds the bit “should be 1”, but it’s not like anything like this crazy god’s eye view picture is (or even could explicitly be) present inside the laptop
our sentences and terms don’t have to have meanings “grounded in visual anticipations”, just like the bit in the laptop doesn’t
except perhaps in the very weak sense that it should be possible for a sentence to be involved in determining actions (or anticipations) in some potentially arbitrarily remote way
the following is mostly a side point: one problem with seeing from the inside what your bits (words, sentences) are doing (especially in the context of pushing the frontier of science, math, philosophy, tech, or generally doing anything you don’t know how to do yet, but actually also just basically all the time) is that you need to be open to using your bits in new ways; the context in which you are using your bits usually isn’t clear to you
btw, this is a sort of minor point but i’m stating it because i’m hoping it might contribute to pushing you out of a broader imo incorrect view: even when one is stating formal mathematical statements, one should be allowed to state sentences with no regard for whether they are tautologies/contradictions (that is, provable/disprovable) or not — ie, one should be allowed to state undecidable sentences, right? eg you should be allowed to state a proof that has the structure “if P, then blabla, so Q; but if not-P, then other-blabla, but then also Q; therefore, Q”, without having to pay any attention to whether P itself is tautological/contradictory or undecidable
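for concreteness, here is this proof shape written down formally, as a sketch in Lean 4 (core only, no Mathlib assumed); the case split on P is classical case analysis:

```lean
-- sketch: the proof shape "if P, then Q; but if not-P, then also Q; therefore, Q"
-- `Classical.em P : P ∨ ¬P` is excluded middle, the non-constructive step
theorem q_either_way (P Q : Prop) (hp : P → Q) (hnp : ¬P → Q) : Q :=
  (Classical.em P).elim hp hnp
```

note that stating the theorem involves P as an arbitrary proposition, with no attention paid to whether P is provable, disprovable, or undecidable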
so, if what you want to do with your criterion of meaningfulness involves banning saying sentences which are not “meaningful”, then even in formal math, you should consider non-tautological/contradictory sentences meaningful. (if you don’t want to ban the “meaningless” sentences, then idk what we’re even supposed to be doing with this notion of meaningfulness.)
TK:
Thx. I definitely agree one should be able to state all mathematical statements (including undecidable ones), and that for proofs you shouldn’t need to pay attention to whether a statement is undecidable or not. (I’m having sorta constructivist tendencies though, where “if P, then blabla, so Q; but if not-P, then other-blabla, but then also Q; therefore, Q” wouldn’t be a valid proof because we don’t assume the law of excluded middle.)
Ok yeah thx I think the way I previously used “meaningfully” was pretty confused. I guess I don’t really want to rule out any sentences people use.
I think sth is not meaningful if there’s no connection between a belief and your main belief pool. So “a puffy is a flippo” is perhaps not meaningful to you because those concepts don’t relate to anything else you know? (But that’s a different kind of meaningful from the one involved in the errors people mostly make.)
K:
yea. tho then we could involve more sentences about puffies and flippos and start playing some game involving saying/thinking those sentences and then that could be fun/useful/whatever
TK:
maybe. idk.
so this version of entanglement with action is really a very weak criterion

Yeah, exactly, and hence the question: what are some counterexamples, ~concepts that clearly are not tied to action in any way? E.g., I could imagine metaphysical philosophizing to connect to action via contributing to a line of thinking that eventually produces a useful insight on how to do science or something. Is it about “being/remaining open to using it in new ways”?
I think I want to expand my notion of “tautological statements” to include statements like “In the HPMoR universe, X happens”. You can also pick any empirical truth “X” and turn it into a tautological one by saying “In our universe, X”. Though I agree it seems a bit weird.

I’m inclined to think that your generalized tautological statements are about something like “playing games according to ~rules in (~confined to) some mind-like system”. This is in contrast to (canonically) empirical statements that involve throwing a referential bridge across the boundary of the system.
[Thinking out loud.]
Intuitively, it does seem to me that if you start with a small set of elements isolated from the rest of your understanding, then they are meaningless, but then, as you grow this set of elements and add more relations/functions/rules/propositions with high implicative potential, this network becomes increasingly meaningful, even though it’s completely disconnected from the rest of understanding and our lives except for playing this domain/subnetwork-specific game.
Is it (/does it seem) meaningful just because I could throw a bridge between it and the rest of my understanding? Well, one could build a computer with this game installed only (+ ofc bare minimum to make it work: OS and stuff) and I would still be inclined to think it meaningful, although perhaps I would be imposing, and the meaningfulness would be co-created by the eye/mind of the beholder.
This leads to the question: What criteria do we want our (explicated) notion of meaningfulness to satisfy?
[For completeness, the concept of meaningfulness may need to be splintered or even eliminated (/factored out in a way that doesn’t leave anything clearly serving its role), though I think the latter rather unlikely.]