“How do we interpret the inner-workings of neural networks?” is not a puzzle unless you get a more concrete application of it: for instance, an input/output pair which you find surprising and want an interpretation for, or at least some general reason you want to interpret it.
Which seems to imply that you (at least 3 hours ago) believed your theory could handle relatively well-formulated and narrow “input/output pair” problems. Yet now you say:
You just keep on treating the narrow domain-specific models as if they count as competition, when they really don’t, because they focus on something different from what mine does.
If I treat your theory this way, it is only because you did, 3 hours ago, when you believed I hadn’t read your post or wouldn’t even give you the time of day. You claimed “How do we interpret the inner-workings of neural networks?” was “not a puzzle unless you get a more concrete application of it”, yet the examples you list in your first post are no less vague, and often quite a bit more vague, than “how do you interpret neural networks?” or “why are adversarial examples so easy to find?” For example: “Why are people so insistent about outliers?” or “Why isn’t factor analysis considered the main research tool?”
There is basically no competition.
For… what exactly? For theories of everything? Oh, I assure you, there is quite a bit of competition there. For statistical modeling toolkits? Ditto. What exactly do you think is the unique niche you are trying to fill? You must be arguing against someone, and indeed you often do argue against many.
The relevance of zooming in on particular input/output problems is part of my model.
“Why are adversarial examples so easy to find?” is a problem that is easily solvable without my model. You can’t solve it because you suck at AI, so instead you find some AI experts who are nearly as incompetent as you and follow along with their discourse, because they are working on easier problems that you have a chance of solving.
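To make “easy to find” concrete: the textbook recipe is the fast gradient sign method (FGSM), a single gradient step on the input. Here is a minimal sketch, assuming a PyTorch classifier with inputs scaled to [0, 1]; the model, labels, and eps value are illustrative placeholders, not anything from the original posts:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.03):
    """Return an adversarially perturbed copy of input batch x."""
    # Track gradients with respect to the input, not the weights.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # One signed gradient step increases the loss; clamp keeps pixels valid.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

On most undefended vision models, a single step like this with a small eps is typically enough to flip the prediction, which is the sense in which such examples are “easy to find”.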
“Why are people so insistent about outliers?” is not vague at all! It’s a pretty specific phenomenon: one person mentions a general theory, and then another person says it can’t be true because of their uncle or whatever. The phrasing in the heading might be vague because headings are brief, but I go into more detail about it in the post, even linking to a person who frequently struggles with that exact dynamic.
As an aside, you seem to be trying to probe me for inconsistencies and contradictions, presumably because you’ve written me off as a crank. But I don’t respect you, and I’m not trying to come off as credible to you (really, I’m slightly trying to come off as non-credible to you, because your level of competence is too low for this theory to be relevant or good for you). And to some extent you know that your heuristics for identifying cranks are not going to flag only people who are forever lost to crankdom; the fact that you haven’t just abandoned the conversation shows as much.
Theories of everything that explain why intelligence can’t model everything, and why you need other abilities.
I liked your old posts, your old research, and your old ideas. I still have some hope that you can reflect on the points you’ve made here and your arguments against my probes, feel a twinge of doubt or motivation, pull on that a little, and end up with a worldview that makes predictions, lets you have and make genuine arguments, and gives you novel ideas.
If you had always been lazy, I wouldn’t be having this conversation; but once, you were not.
A lot of my new writing comes out of the conclusions of, or in response to, my old research ideas.
Of course it is, I did not think otherwise, but my point stands.
No, it doesn’t. I obviously understood my old posts (and still do; the posts make sense if I imagine ignoring LDSL). So I’m capable of judging whether I’ve found something that reveals problems in them. It’s possible I’m communicating LDSL poorly, or that you are too ignorant to understand it, or that I’m overestimating how broadly it applies, but those are all far more realistic than that I’ve become a pure crank. If you still prefer my old posts to my new posts, then I must know something relevant that you don’t.
What is the solution to the adversarial-examples question, then?