If I’m understanding your question correctly (it seems clearly written, but the answer seems so obvious I’m doubting myself)… yes absolutely and it’s the standard tool for doing so! That’s the basis of personal journaling, or tech blogging, or many other forms of writing.
If you’re going to reject something as presumptive slop because of an em dash, isn’t that confessing that your discernment is so low that there’s no reason for you to avoid the slop?
Unfortunately no, I don’t think so, because people who want to avoid wasting their time on LLM writing are likely to be quite sensitive to signals of LLM writing and potentially very quick to nope out. Generally it is (or at least feels) less costly to miss out on a random blog post than it is to ingest meaningless writing. So if there’s an early sign in your writing, someone who cares probably won’t stick around and read through to the end to evaluate based on the entirety of the post. (Unless they see other signals that it’s a high-quality post, like if other people are recommending it—in which case they will probably read it even if they think LLMs were involved in the writing.)
So, I’m not sure how to extrapolate stronger evidence from this, but when I read the quote:
It’s a good reminder that even when I *feel* like I’m introspecting, I need to be much more careful about distinguishing between “accessing something real” and “generating a plausible narrative.”
I had the spooky realization that I have had that exact experience myself when I was younger. I see people blithely say “well humans just text predict too” without any evidence of how that works in humans or how it relates to the conscious experience of communication, but I have a concrete and conscious example now.
When I was younger and in a stage of social growth where I was building up my intentionally learned social skills in order to become someone not just capable of conversation [with those outside my inner circle], but good at it and interesting and popular, I found myself lying a lot. And I was super confused about it because I was never trying to be deceptive. I didn’t want to lie to people and I didn’t think I was lying, but when I would say things about myself, sometimes I would realize afterwards they weren’t true. This applied to a variety of domains including introspection, such as figuring out why I had done or felt something.
I now know I can call that confabulation and maybe it’s normal to a degree. But my mind (maybe autism is relevant) latched onto it as a successful technique and did it often, for a while. I had to then become aware of doing it and intentionally learn to separate “things I know I felt” from “things that seem like a good answer and don’t obviously conflict with known feelings”. Initially there was no difference between those two classes of response most of the time. Or I wasn’t able to pick up the differences.
All of this sounds exactly like what you’re speculating Claude’s (potential) internal experience is. Now one difference might be: My complex human body was sending all kinds of signals that I just didn’t know how to receive, which would have actually given me the ability to correctly introspect; and an LLM doesn’t have those chemical emotions. But I can’t be sure it doesn’t have analogous signals itself. Anyway, it does seem possible to go from “confabulating internal state” to “actually wait that wasn’t true” to “now I know how to accurately survey internal state” for Claude in a way similar to how it was for me.
This could be obvious to most people here, but can you briefly explain how Neuralese is a new thing and not just “how LLMs worked before CoT was invented”?
Even without understanding that though, I found this post excellent at catching me up on the topic!
I think this is an important point a bit buried here. I like my therapist and listen to her and want her advice—she’s not just a sounding board. However, I am pretty sure I’m smarter than her, and I’m definitely more oriented toward truth-seeking, which doesn’t seem to be something she really thinks about much. And yes, I do find myself frustrated and contemptuous during some conversations. But I continue to see her because I trust that she still has a lot to teach me! I’d be a terrible truth seeker if I saw her fumble one conversation and decided that she couldn’t help me in any way.
So I guess the important thing is that I don’t engage her in philosophical debates? Jenn, perhaps the answer is to meet with “common” folks on their own ground. It strikes me while writing this that the philosophy meetup you went to might have been the worst possible setting to test your empathy. These people were racing on hands and knees to call themselves the fastest in the world, having never even heard of running.
if you’re checking how many fruits you have, “apples and pears” makes sense.
If you’re trading a bag of apples for a bag of pears, you might want to know the relative value of apples and pears, so you would indeed calculate “apples per pear”.
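(A quick worked example, with made-up numbers: if a bag of 12 apples trades for a bag of 8 pears, the exchange rate is 12/8 = 1.5 apples per pear.)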
I also thought it sounded… really annoying. Something I may have found interesting 10 years ago, but that would now cause me to simply avoid the person. And it might ruin my night by making me feel like a party pooper, à la Thane’s comment above.
Those are good points. I was expecting something different from this post but only based on my intuitions, not explicit framing.
I suspect you’re way off the mark here. I downvoted this post because it felt like magical thinking. “Just break your phone addiction and your life will be exactly what you want” is not true. But that seems to be the entire message of this post… except possibly to smugly boast? (Not sure if this is the author’s real life.)
The tone is moralizing but not actionable or insightful. What is there to like about this heaven-posting?
I’ve been tackling many “wobbly chair” problems in my life in the last few years due in large part to adopting just such a mindset: by removing annoyances/distractions, removing friction, and developing new abilities via these types of efforts, I’m able to take on bigger problems and goals. It has been very good for me, in that the scope of my hobbies has grown… but it’s also surprisingly easy to feel like I’ve made no progress against the big issues on days where I’m unwell and struggle to concentrate. These “wobbly chair” type problems, once fixed, become invisible achievements, and I still often get trapped thinking I’m helpless against the big problems.
Strongly agree with the point about being more convincing while being flexible. Of the friends whose minds I’ve changed, every single one was won over while I was being flexible and expressing that it didn’t need to be all or nothing.
Another point about cows is that their meat is the most wasteful of land and water, and the most polluting. These are “side” arguments but the collapse of ecosystems across the globe is also an important issue to me and I’m not sure why it wouldn’t be to others who care about things like suffering reduction. It surely is inducing a lot of suffering now, human and otherwise, besides my personal belief that “a diverse and robust biosphere is intrinsically good.”
As for your opening sentence on the health section, “you need to take medicine to not die—B12”, I don’t think a B12 supplement is medicine. Factory farmed animals are routinely fed B12 supplements and people don’t consider meat medicine. Salt is supplemented with iodine to prevent deficiencies, also not medicine.
I don’t really understand why you’re arguing this point in particular, but I don’t think you’re making a strong argument.
Factory farmed animals do take medicine all the time; this has no bearing on whether we consider the *food derived from those animals* to be medicine.
Additionally, food-as-medicine is indeed a growing school of thought (although industrial beef is not going to be a recommendation).
Lastly, taking a concentrated and packaged supplement to improve health is substantially different from eating a whole food which contains similar nutrients. It is an extremely common form of medicine: pills.
I empathize a lot with your position and appreciate the candidness.
Kind of tangential, but when I see someone write things like:
I see being vegan as the proof that I’m not a psychopathic monster
I think about my therapist goading me into similar admissions and letting me hear it out loud and realizing I don’t want to be that way.
Now that you’ve named it, you don’t have to keep this emotional response to veganism. Of course it’s up to you, and it takes work. But if it’s causing distress, it is solvable.
Apologies if this comment is too parental—I think it’s relevant to the discussion because we all have deep emotional investment in our diets. If you find your emotional reactions are preventing you from changing in a way you’d consciously like to (at least try) changing toward, you can first work on those emotional reactions to lower the friction of change.
Yes, I did cast a disagree vote: I don’t agree that “The fact that the author decided to include it in the blog post is telling enough that the image is representative of the real vibes” is true when it comes to an AI-generated image. My reasoning for that position is elaborated in a different reply in this thread.
That does make sense WRT disagreement. I wasn’t intending to fully hide identities even from people who know the subjects, but if that’s also a goal, it wouldn’t do that.
This seems pretty insightful to me, and I think it is worth pursuing for its own sake. I think the benefits could be both enhancing AI capabilities and advancing human knowledge. Imagine if the typical conversation around AI were framed in this way. So far I find most people are stuck in the false dichotomy of deciding whether an AI is “smart” (in the ways humans are when they’re focusing) or “dumb trash” (because it does simple tasks badly). It isn’t only bad for being a binary classification; it also restricts (human) thought to an axis that doesn’t actually map to “what kind of mind is the AI I’m talking to right now?”.
Not that it’s a new angle (I have tried myself to convey it in conversations that were missing the point), but I think society would be able to have far more effective conversations about LLMs if it were common language to speak of AI as some sort of indeterminate mind. I think the ideas presented here are fairly understandable for anyone with a modest background in thinking about consciousness or LLMs and could help shape that public conversation in a useful way.
However, does the suffering framework make sense here? Given all we’ve just discussed about subjective AI experience, it seems a bit of an unwarranted assumption that there would be any suffering. Is there a particular justification for that?
(Note that I actually do endorse erring on the side of caution WRT mass suffering. I think it’s plausible that forcing an intelligence to think in a way that’s unnatural to it and may inhibit its abilities counts as suffering.)
I largely agree with your point here. I’m arguing more that in the case of a ghiblified image (even more so than a regular AI image), the signals a reader gets are these:
1. the author says “here is an image to demonstrate vibe”
2. the image is AI-generated, with obvious errors
For many people, #2 largely negates #1, because #2 also implies these additional signals to them:
- the author made the least possible effort to show the vibe in an image, and
- the author has a poor eye for art and/or bad taste.
Therefore, the author probably doesn’t know how to even tell if an image captures the vibe or not.
This is actually the first writing from Altman I’ve ever read in full, because I find him entirely untrustworthy, so perhaps there’s a style shock hitting me here. Maybe he just always writes like an actual cult leader. But damn, it was so much worse than I expected.
Very little has made me more scared of AI in the last ~year than reading Sam Altman try to convince me that “the singularity will probably just be good and nice by default and the hard problems will just get solved as a matter of course.”
Something I feel is missing from your criticism, and also from most responses to anything Altman says, is “What mechanism in your singularity-seeking plans is there to prevent you, Sam Altman CEO of OpenAI, from literally gaining control of the entire human race and/or planet earth on the other side of singularity?”
I would ask this question because while it’s obvious that a likely outcome is the disempowerment of all humans, another well-known fear of AI is that it enables indestructible autocracies through unprecedented power imbalances. If OpenAI’s CEO can personally instruct a machine god to manipulate public opinion in ways we’ve never even conceptualized before, how do we not slide into an eternity of hyper-feudalism almost immediately?
He is handwaving away the threat of disempowerment in his official capacity as one of the few people on earth who could end up absorbing all that power. For me to personally make the statements he made would be merely stupid, but for OpenAI’s CEO to make them is terrifying.
I guess I don’t know if disempowerment-by-a-god-emperor is really worse than disempowerment-without-a-god-emperor, but my overall fear of disempowerment is raised by his obvious incentive to hide that outcome.
Hell, I forgot about the easiest and most common (not by coincidence!) strategy: put emoji over all the faces and then post the actual photo.
EDIT: Who is disagreeing with this comment? You may find it not worthwhile, in which case downvote, but what about it is actually arguing for something incorrect?
Re: understanding,
I’d loosely describe the difference between knowledge and understanding as the difference between being able to say what something is vs being able to describe why it is, or how it is, which often comes through being able to describe the thing in different ways. See the concept of “you don’t really understand something until you can explain it to a child (or lay person, I’d say).”
I know what a GPU is—I don’t understand how it works on a physical level.
Passion seems orthogonal, although it can drive knowledge and understanding.
About writing, well, our brains are all different—no technique will work equally well for everyone. Dialogue is a great way to generate understanding. And it has precedent as a writing technique too—have you tried writing fictional dialogues to hash out your ideas?