The thing about slop effects is that my updates (attempted to be described e.g. here https://www.lesswrong.com/s/gEvTvhr8hNRrdHC62 ) make huge fractions of LessWrong look like slop to me. Some of the increase in vagueposting is basically lazy probing for whether rationalists will get the problem if framed in different ways than the original longform.
Yeah, I think those were some of your last good posts / first bad posts.
Do you honestly think that rationalists will suddenly get your point if you say
I don’t think RL or other AI-centered agency constructions will ever become very agentic.
with no explanation or argument at all, or even a link to your sparse lognormals sequence?
Or what about
Ayn Rand’s book “The Fountainhead” is an accidental deconstruction of patriarchy that shows how it is fractally terrible. […] The details are in the book. I’m mainly writing the OP to inform clueless progressives who might’ve dismissed Ayn Rand for being a right-wing misogynist that despite this they might still find her book insightful.
This seems entirely unrelated to any of the points you made in sparse lognormals (that I can remember!), but I consider this too part of your recent vagueposting habit.
I really liked your past posts and comments, and I’m not saying this to be mean, but I think you’ve just gotten lazier (and more “cranky”) in your commenting & posting, and I do not believe you are genuinely “probing for whether rationalists will get the problem if framed in different ways than the original longform.”
If you wanted to actually do that, you would at least link to the relevant sections of the relevant posts, or better, re-explain the arguments of those sections in the context of the conversation.
For me, though, what would get me much more on board with your thoughts are actual examples of you using these ideas to model things nobody else can model (mathematically!) across as broad a spectrum of fields as you claim. That, or a much more compact & streamlined argument.
I think this is the crux. To me, after understanding these ideas, it’s retroactively obvious that they are modelling all sorts of phenomena. My best guess is that the reason you don’t see it is that you don’t see the phenomena that are failing to be modelled by conventional methods (or at least don’t understand how those phenomena relate to the bird’s-eye perspective), so you don’t realize what new thing is missing. And I can’t easily cure this kind of cluelessness with examples, because my theories aren’t necessary if you just consider a single very narrow and homogeneous phenomenon, as then you can just make a special-built theory for that.
This may well be true (though I think not), but what is your argument for not even linking to your original posts? Or for how often you don’t explain yourself, even on completely unrelated subjects? My contention is that you are not lazily trying on a variety of different reframings of your original arguments or conclusions to see what sticks, and are instead just lazy.
I don’t know of anyone who seems to have understood the original posts, so I kinda doubt people can understand the point of them. Plus often what I’m writing about is a couple of steps removed from the original posts.
Part of the probing is to see which of the claims I make will seem obviously true and which of them will just seem senseless.
Then everything you say will seem either trivial or absurd because you don’t give arguments! Please post arguments for your claims!
But that would probe the power of the arguments whereas really I’m trying to probe the obviousness of the claims.
Ok, I will first note that this is different from what you said previously. Previously, you said “probing for whether rationalists will get the problem if framed in different ways than the original longform”, but now you say “I’m trying to probe the obviousness of the claims.” It’s good to note when such switches occur.
Second, you should stop making lazy posts with no arguments, regardless of the reasons. You can get just as much, and probably much more, information by making good posts; there is no tradeoff here. In fact, if you try to explain why you think something, you will find that others will try to explain why they don’t much more often than if you don’t, and they will be pretty specific (compared to an aggregated up/down vote) about what they disagree with.
But my true objection is I just don’t like bad posts.
So it sounds like your general theory has no alpha over narrow theories. What, then, makes it any good? Is it just that it’s broad enough to badly model many systems? Then it sounds useful in every case where we can’t make any formal predictions yet, and you should give those examples!
This sounds like a bad excuse not to do the work.
It’s mainly good for deciding what phenomena to make narrow theories about.
Then give those examples!
Edit: and also back up those examples by actually making the particular model, and by demonstrating why such models are so useful through means decorrelated with your original argument.
Why?
This is the laziness I’m talking about! Do you really not understand why it would be to your theory-of-everything’s credit to have some, any, any at all, you know, actual use?
How suspicious is it that when I ask for explicit concrete examples, you explain that your theory is not really about particular examples, despite the fact that if your vague-posting is indeed applying your theory of everything to particular examples, we can derive the existence of circumstances you believe your theory can model well?
And, that excuse being that it’s good at deciding what to make good theories about, you cannot think of one reason why I’d like to know what theories you think would be smart to make using this framework.
That is to say that this is a very lazy reply.
I can think of reasons why you’d like to know what theories would be smart to make using this framework, e.g. so you can make those theories instead of bothering to learn the framework. However, that’s not a reason it would be good for me to share it with you, since I think that’d just distract you from the point of my theory.
I do not think I could put my response here better than Said did 7 years ago on a completely unrelated post, so I will just link that.
Thing is, just from the conclusions it won’t be obvious that the meta-level theory is better. The improvement can primarily be understood in the context of the virtues of the meta-level theory.
idk what to say, this is just very transparently an excuse for you to be lazy here, and clearly crank-talk/cope.
More specifically, my position is anti-reductionist, and rationalist-empiricist-reductionists dismiss anti-reductionists as cranks. As long as you are trying to model whether I am that and then dismiss me if you find I am, it is a waste of time to try to communicate my position to you.
I am not dismissing you because of your anti-reductionism! Where did I say that? Indeed, I have been known to praise some “anti-reductionist” theories—fields even!
I’m dismissing you because you can’t give me examples of where your theory has been concretely useful!
You praise someone who wants to do agent-based models, but agent-based models are a reductionistic approach to the field of complexity science, so this sure seems to prove my point. (I mean, approximately all of the non-reductionistic approaches to the field of complexity science are bad too.)
I don’t care who calls themselves what: complexity science calls itself anti-reductionist, and I don’t dismiss it. Therefore I can’t be dismissing people just because they call themselves anti-reductionist; I must be using their actual arguments to evaluate their positions.
I will also say that appealing to the community’s intrinsic bias, and claiming I’ve made arguments I haven’t or hold positions I don’t, is not doing much to make me think you less of a crank.
I’m not saying you’re dismissing me because I call myself anti-reductionist, I’m saying you’re dismissing me because I am an anti-reductionist.
I don’t think you’re using the actual arguments I presented in the LDSL series to evaluate my position.
I remember reading LDSL and not buying the arguments! At the time, I deeply respected you and your thinking, and thought “oh well, I’m not buying these arguments, but surely if they’re as useful as claimed, tailcalled will apply them to various circumstances and that will be egg on my face, and in that circumstance I should try to figure out why I was mistaken”. But then you didn’t, and you started vague-posting constantly, and now we’re here and you’re giving excuse after excuse for why it’s actually impossible for you to tell me any concrete application of your theory, and accusing me of anti-reductionist prejudice.
I admit, I do have an anti-reductionist prejudice, it’s called a prior, but it’s not absolute, and it’s not enough to stop listening to someone. I really, really, really don’t think I’m outright dismissing you because you’re anti-reductionist. I was totally willing to listen to you, even when you were making such arguments, and end up being wrong!
I even have the receipts to prove it! Until just under a month ago, I was still getting email & LessWrong notifications every time you made a post!
(they are unread, because I check LessWrong more often than my email)
I cannot stress enough, the reason why I’m dismissing you is because you stopped making arguments and started constantly vague-posting.
If you don’t have any puzzles within Economics/Sociology/Biology/Evolution/Psychology/AI/Ecology where a more holistic theory would be useful, then it’s not clear why I should talk to you.
I never said that; I am asking you for solutions to any puzzle of your choice! You’re just not giving me any!
Edit: I really honestly don’t know where you got that impression, and it kinda upsets me you seemingly just pulled that straight out of thin air.
Wouldn’t it be more impressive if I could point you to a solution to a puzzle you’ve been stuck on than if I present my own puzzle and give you the solution to that?
It would, but you didn’t ask for such a thing. Are you asking for such a thing now? If so, here is one in AI, which is on everyone’s minds: How do we interpret the inner-workings of neural networks.
I expect though, that you will say that your theory isn’t applicable here for whatever reason. Therefore it would be helpful if you gave me an example of what sort of puzzle your theory is applicable to.
“How do we interpret the inner-workings of neural networks.” is not a puzzle unless you get more concrete an application of it. For instance an input/output pair which you find surprising and want an interpretation for, or at least some general reason you want to interpret it.
Ok, then why do AI systems have so many adversarial examples? I have no formal model of this, though it plausibly makes some intuitive sense.
… can you pick some topic that you are good at instead of focusing on AI? That would probably make the examples more informative.
It sounds like, as I predicted, your theory doesn’t apply to the problems I presented, so how about you provide an example?
The LDSL series provides quite a few everyday examples, but for some reason you aren’t satisfied with those. Difficult examples require that you’re good at something, so I might not be able to find an example for you.
Let’s go through your sequence, shall we? And enumerate the so-called “concrete examples” you list.
[LDSL#0] Some epistemological conundrums
Here you ask a lot of questions, approximately each of the form “why do ‘people’ think <thing-that-some-people-think-but-certainly-not-all>”. To list a few,
Why are people so insistent about outliers?
Seems to have a good answer. Sometimes they’re informative!
Why isn’t factor analysis considered the main research tool?
Seems also to have a good answer: it is easy to fool yourself if you do it improperly.
How can probability theory model bag-like dynamics?
I would sure love a new closed-form way of modeling bag-like dynamics, as you describe them, if you have one! I don’t think you give one though, but surely if you mention it, you must have the answer somewhere!
Perception is logarithmic; doesn’t this by default solve a lot of problems?
Seems less a question than a claim? And I don’t think we need special math to solve this one.
None of these seem like concrete applications of your theory, but that’s fine. It was an intro post, you will surely explain all these later on, as worked examples at some point, right?
[LDSL#1] Performance optimization as a metaphor for life
I do remember liking this post! It was good. However, the conclusions here do not seem dependent on your overall conclusions.
[LDSL#2] Latent variable models, network models, and linear diffusion of sparse lognormals
I proposed that life cannot be understood through statistics, but rather requires more careful study of individual cases.
Wait, I don’t think your previous post was about that? I certainly use statistics when doing performance optimization! In particular, I profile my code and look at which function calls are taking the bulk of the time, then optimize or decrease the number of calls to those.
Hey look a concrete example!
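For concreteness, here is a minimal sketch of that profile-then-optimize workflow; the workload and helper function below are hypothetical, purely for illustration:

```python
import cProfile
import pstats

def slow_helper(n):
    # Deliberately wasteful so that it dominates the profile.
    return sum(i * i for i in range(n))

def workload():
    # Hypothetical workload: most of the time goes into repeated calls to slow_helper.
    return [slow_helper(20_000) for _ in range(200)]

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Rank functions by cumulative time to see which calls take the bulk of it,
# then optimize those functions or reduce how often they are called.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```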
Let’s take an epidemic as an example. There’s an endless number of germs of different species spreading around. Most of them don’t make much difference for us. But occasionally, one of them gains the capacity to spread more rapidly from person to person, which leads to an epidemic. Here, the core factor driving the spread of the disease is the multiplicative interaction between infected and uninfected people, and the key change that changes it from negligible to important is the change in the power of this interaction.
Once it has infected someone, it can have further downstream effects, in that it makes them sick and maybe even kills them. (And whether it kills them or not, this sickness is going to have further downstream effects in e.g. interrupting their work.) But these downstream effects are critically different from the epidemic itself, in that they cannot fuel the infection further. Rather, they are directly dependent on the magnitude of people infected.
… well, more like a motivating example. I’m sure at some point you build models and compare your model to those the epidemiologists have built… right?
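As a toy numerical illustration of the multiplicative dynamic the quoted passage describes (the parameters below are invented for illustration, not a fitted epidemiological model):

```python
# Toy illustration: infections grow through a multiplicative interaction between
# infected and uninfected people, while the downstream effect (sick-days) scales
# with the number infected but never feeds back into the spread.
N = 1_000_000            # population size (invented)
beta, gamma = 0.4, 0.1   # per-day transmission and recovery rates (invented)
S, I = N - 1.0, 1.0      # susceptible and infected counts
sick_days = 0.0          # downstream effect: never influences S or I

for day in range(120):
    new_infections = beta * I * S / N   # the multiplicative interaction term
    recoveries = gamma * I
    S -= new_infections
    I += new_infections - recoveries
    sick_days += I                      # purely downstream accounting

print(f"currently infected after 120 days: {I:,.0f}")
print(f"cumulative sick-days (downstream effect): {sick_days:,.0f}")
```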
[LDSL#3] Information-orientation is in tension with magnitude-orientation
This seems like a reasonable statistical argument, but of course, for our purposes, there are no real examples here, so let us move on.
[LDSL#4] Root cause analysis versus effect size estimation
Seems like a reasonable orientation as well, but by no means a theory of everything, and again there are no real examples here, so let’s move on once again.
[LDSL#5] Comparison and magnitude/diminishment
Your solution here to the problem you outline seems like a cop-out to me, and of course (other than the tank/dust example, which is by no means an example in the sense we’re talking about here), there are no examples.
[LDSL#6] When is quantification needed, and when is it hard?
Here you give the example of Elo, but you don’t really provide any alternatives, and you mostly mention that picking bases when taking logarithms may be hard, so this also doesn’t seem like an example.
Therefore, if it seemed like I didn’t read your sequence before (which I did! Just a while ago), I have certainly at least skimmed it now, and can say with relative confidence that no, you don’t in fact give concrete examples of circumstances where your theory performs better than the competition even once. At most you give some statistical arguments for why in some circumstances you may want to use various statistical tools. But this is by no means some theory of everything, or even really much of a steel-man for anti-reductionism.
You don’t even come back to the problems you originally listed! Where’s the promised theory of autism? Where’s the closed-form model of bag-like dynamics? Where’s the steel-man of psychoanalysis, or the take-down of local validity and coherence, or the explanation of why commonsense reasoning avoids the principle of explosion?
This is the behavior of a lazy crackpot, who doesn’t want to admit the fact that nobody is listening to them anymore because they’re just wrong. It is not the case that I’m just not good enough at anything to understand your oh-so-complex examples. You just don’t want to provide examples and would rather lie and say you’ve provided examples in the past, relying on your (false) assumption that I haven’t read what you’ve written, than actually list anything concrete.
This post has the table example. That’s probably the most important of all the examples.
I certainly use statistics when doing performance optimization! In particular, I profile my code and look at which function calls are taking the bulk of the time, then optimize or decrease the number of calls to those.
That’s accounting, not statistics.
… well, more like a motivating example. I’m sure at some point you build models and compare your model to those the epidemiologists have built… right?
AFAIK epidemiologists usually measure particular diseases and focus their models on those, whereas LDSL would more be across all species of germs.
There is basically no competition. You just keep on treating it like the narrow domain-specific models count as competition when they really don’t because they focus on something different than mine.
I would honestly be interested in any concrete model you build based on this. You don’t necessarily have to compare it against some other field’s existing model, though it does help for credibility’s sake. But I would like to at least be able to compare the model you make against data.
I’m also not sure this is true about epidemiologists, and if it is, I’d guess it’s true to the extent that they have like 4 different parameterizations of different types of diseases (likely having to do with various different sorts of vectors of spread), and then they fit one of those 4 different parameterizations to the measured (or inferred) characteristics of a particular disease.
The most central aspect of my model is to explain why it’s generally not relevant to fit quantitative models to data.
Each disease (and even different strains of the same disease and different environmental conditions for the same strain) has its own parameters, but they don’t fit a model that contains all the parameters of all diseases at once; they just focus on one disease at a time.
Before you said
“How do we interpret the inner-workings of neural networks.” is not a puzzle unless you get more concrete an application of it. For instance an input/output pair which you find surprising and want an interpretation for, or at least some general reason you want to interpret it.
Which seems to imply you (at least 3 hours ago) believed your theory could handle relatively well-formulated and narrow “input/output pair” problems. Yet now you say
You just keep on treating it like the narrow domain-specific models count as competition when they really don’t because they focus on something different than mine.
If I treat your theory this way, it is only because you did, 3 hours ago, when you believed I hadn’t read your post or wouldn’t even give you the time of day. You claimed “How do we interpret the inner-workings of neural networks.” was “not a puzzle unless you get [a?] more concrete application of it”, yet the examples you list in your first post are no more vague, and often quite a bit more vague than “how do you interpret neural networks?” or “why are adversarial examples so easy to find?” For example, the question “Why are people so insistent about outliers?” or “Why isn’t factor analysis considered the main research tool?”
There is basically no competition.
For… what exactly? For theories of everything? Oh I assure you, there is quite a bit of competition there. For statistical modeling toolkits? Ditto. What exactly do you think the unique niche you are trying to fill is? You must be arguing against someone, and indeed you often do argue against many.
The relevance of zooming in on particular input/output problems is part of my model.
“Why are adversarial examples so easy to find?” is a problem that is easily solvable without my model. You can’t solve it because you suck at AI, so instead you find some AI experts who are nearly as incompetent as you and follow along their discourse because they are working at easier problems that you have a chance of solving.
“Why are people so insistent about outliers?” is not vague at all! It’s a pretty specific phenomenon that one person mentions a general theory and then another person says it can’t be true because of their uncle or whatever. The phrasing in the heading might be vague because headings are brief, but I go into more detail about it in the post, even linking to a person who frequently struggles with that exact dynamic.
As an aside, you seem to be trying to probe me for inconsistencies and contradictions, presumably because you’ve written me off as a crank. But I don’t respect you and I’m not trying to come off as credible to you (really, I’m slightly trying to come off as non-credible to you, because your level of competence is too low for this theory to be relevant/good for you). And to some extent you know that your heuristics for identifying cranks don’t only flag people who are forever lost to crankdom, as evidenced by the fact that you haven’t just abandoned the conversation.
Theories of everything that explain why intelligence can’t model everything and you need other abilities.
I liked your old posts and your old research and your old ideas. I still have some hope you can reflect on the points you’ve made here, and your arguments against my probes, and feel a twinge of doubt, or motivation, pull on that a little, and end up with a worldview that makes predictions, lets you have & make genuine arguments, and gives you novel ideas.
If you had always been lazy, I wouldn’t be having this conversation, but once you were not.
A lot of my new writing is a result of the conclusions of, or in response to, my old research ideas.
Of course it is, I did not think otherwise, but my point stands.
No it doesn’t. I obviously understood my old posts (and still do—the posts make sense if I imagine ignoring LDSL). So I’m capable of understanding whether I’ve found something that reveals problems in them. It’s possible I’m communicating LDSL poorly, or that you are too ignorant to understand it, or that I’m overestimating how broadly it applies, but those are far more realistic than that I’ve become a pure crank. If you still prefer my old posts to my new posts, then I must know something relevant you don’t know.
“Why are adversarial examples so easy to find?” is a problem that is easily solvable without my model. You can’t solve it because you suck at AI, so instead you find some AI experts who are nearly as incompetent as you and follow along their discourse because they are working at easier problems that you have a chance of solving.
What is the solution then?
I do think I’m “good at” AI; I think many who are “good at” AI are also pretty confused here.
I don’t really care what you think.