If there is anything you missed about an LLM’s ability to transform ideas, then everything you just said is bunk. Your concept of this is far too linear, but it’s a common misconception especially among certain varieties of ’tistics.
But if I could correct you, when I talk about naturally adept systems engineers, I’m talking about the ADHDers, particularly the cases severe enough to get excluded by inefficient communication and unnecessary flourish. You don’t have to believe me. You can rationalize it away with guesses about how much data you think I have. But the reality is, you didn’t look into it. The reality is, it’s a matter of survival, so you’re not going to be able to argue it away. You’re trying to convince a miner that the canary doesn’t die first.
An LLM does far more than “simplify” for me—it translates. I think you transmit information extremely inefficiently and waste TONS of cognitive resources with unnecessary flourish. I also think that’s why this community holds such strong beliefs about intellectual gatekeeping. It’s a terrible system if you think about it, because we’re at a time in history where we can’t afford to waste cognitive resources.
I’m going to assume you’ve heard of Richard Feynman. Probably kind of a jerk in person, but one of his famed skills was that he was a master of eli5.
Try being concise.
It’s harder than it looks. It takes more intelligence than you think, and it conveys the same information more efficiently. Who knows what else you could do with the cognitive resources you free up?
TBH, I’m not really interested in opinions or arguments about the placebo effect. I’m interested in data, and I’ve seen enough of that to invalidate what you just shared. I just can’t remember where I saw it, so you’re going to have to do your own searching. But that’s okay; it’ll be good for your algorithm.
If there was a way to prompt that implemented the human brain’s natural social instincts to enhance LLM outputs to transform information in unexpected ways, would you want to know?
If everything you thought you knew about the world was gravely wrong, would you want to know?
I do not think there is anything I have missed, because I have spent immense amounts of time interacting with LLMs and believe myself to know them better than do you. I have ADHD also, and can report firsthand that your claims are bunk there too. I explained myself in detail because you did not strike me as being able to infer my meaning from less information.
I don’t believe that you’ve seen data I would find convincing. I think you should read both posts I linked, because you are clearly overconfident in your beliefs.
If you don’t think there’s anything you missed, that means you won’t look for anything you missed.
That’s how echo chambers happen.
(Look how few sentences were needed to communicate that. Incredible.)
This entire community relies on tautological posturing to protect itself from uncomfortable criticisms… But communicating efficiently is the skill needed to slow AI development and rein in existential risks. Using many big words to call people stupid won’t cut it. If you can’t figure out how to do this, that means you DO need LLM assistance. But to get it, you’d have to stop asking it to simplify your concepts and think in terms of translation instead.
Try consensus.app to find the data I referenced. Because if you think having ADHD means you understand the limitations and advantages of everyone who has ADHD, you don’t know nearly enough about psychology or human bias. The placebo effect owns you, and it’s using your intellectual narcissism as a leash.
Make sure you phrase your inquiry to disprove your own priors. Active auto-skepticism is the only way to self-reflect appropriately, counteract bias, and arrive at truer conclusions.
I think “intellectual narcissism” describes you better than me, given how convinced you are that anyone who disagrees with you must have something wrong with them.
As I already told you, I know how LLMs work, and have interacted with them extensively. If you have evidence of your claims you are welcome to share it, but I currently suspect that you don’t.
Your difficulty parsing lengthy texts is unfortunate, but I don’t really have any reason to believe your importance to the field of AI safety is such that its members should be planning all their communications with you in mind.
Consensus.app is a search engine. If you had evidence to hand you would not be directing me to a search engine. (Even if you did, I’m skeptical it would convince me; your standards of evidence don’t seem to be the same as mine, so I’m not convinced we would interpret it in the same way).
Having ADHD makes me well-qualified to observe that it does not give you natural aptitude at systems engineering. If you’re good at systems engineering, that’s great, but it’s not a trait inherent to ADHD.
Evidence for the placebo effect is very bad. I shared two posts which explained at length why the evidence for it is not as good as popularly believed. The fact that you have not updated on them leads me to think negatively of your epistemics.
I agree that it’s good to be skeptical of your beliefs! I don’t think you’re doing that.