There is more than one possibility when you append “if you know what I mean” to the end of a random sentence:
- Sexual innuendos.
- Illicit activities or behaviors.
- Inside jokes or references understood only by a specific group.
- Subtle insults or mocking.
Sure, the first is the strongest, but the others would move the centroid away from “phallus”. The centroid is not at the most likely item but at the average.
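To make the average-vs-mode distinction concrete, here is a minimal sketch with made-up embedding vectors and probabilities (all numbers are hypothetical; real sentence embeddings have hundreds of dimensions):

```python
import numpy as np

# Hypothetical 4-d embeddings for the four readings listed above.
readings = {
    "sexual innuendo":  np.array([0.9, 0.1, 0.0, 0.0]),
    "illicit activity": np.array([0.2, 0.8, 0.1, 0.0]),
    "inside joke":      np.array([0.1, 0.1, 0.9, 0.1]),
    "subtle insult":    np.array([0.0, 0.2, 0.1, 0.8]),
}
weights = np.array([0.55, 0.20, 0.15, 0.10])  # assumed probabilities; sum to 1

vectors = np.stack(list(readings.values()))
centroid = weights @ vectors               # probability-weighted average
most_likely = vectors[np.argmax(weights)]  # the single strongest reading

# The centroid lands between the readings, not on top of the strongest one.
print("centroid:   ", centroid)
print("most likely:", most_likely)
```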
I’m generally considered a happy person, and I did couples counseling at a time when my partner was also happy. That was in the context of getting early marriage advice, and it was generally going well. I’m not sure about talk therapy. I’m generally of the opinion that talking with people helps with resolving all kinds of issues.
Cognition Labs released a demo of Devin, an “AI coder”, i.e., an LLM with agent scaffolding that can build and debug simple applications:
https://twitter.com/cognition_labs/status/1767548763134964000
Thoughts?
With a sufficiently strong LLM, I think you could still elicit reports of inner dialog if you prompt lightly, such as with “put yourself into the shoes of...”. That’s because inner monologs are implied in many reasoning processes, even when they are not mentioned explicitly.
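For illustration, such a light prompt might look like this; a minimal sketch using the OpenAI Python client (the model name and the prompt are just examples):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A light prompt that implies an inner monolog without asserting one.
response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{
        "role": "user",
        "content": (
            "Put yourself into the shoes of a chess player who just "
            "blundered their queen. Walk me through what goes on in "
            "their head in the seconds afterward."
        ),
    }],
)
print(response.choices[0].message.content)
```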
As the wearer of a respirator still has to breathe regularly, there will still be a significantly elevated CO2 level in the air available for respiration. I’d guess maybe half of exhaled air, around 20,000 PPM. It would be interesting to see somebody measure that.
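A back-of-the-envelope check of that guess, assuming the mask’s dead space makes you re-inhale about half of each breath (the fraction is a pure assumption):

```python
# Rough mixing model; all numbers are approximate assumptions.
exhaled_co2_ppm = 44_000   # roughly the CO2 level of exhaled air
ambient_co2_ppm = 420      # typical outdoor air
rebreathed_fraction = 0.5  # guess: half of each breath is re-inhaled dead-space air

inhaled_co2_ppm = (rebreathed_fraction * exhaled_co2_ppm
                   + (1 - rebreathed_fraction) * ambient_co2_ppm)
print(f"{inhaled_co2_ppm:,.0f} PPM")  # ~22,000 PPM, i.e., roughly half of exhaled air
```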
Disregarding looking silly, there are many other (small) downsides of wearing a helmet all the time:
1. the weight may have an adverse effect on your neck
2. you may get stuck on obstacles such as door frames
3. you may hit other people with it (who presumably don’t wear one, and if they do, see 2)
4. it interferes with close personal interactions, such as hugging
...
Sam Altman once mentioned a test: Don’t train an LLM (or other AI system) on any text about consciousness and see if the system will still report having inner experiences unprompted. I would predict a normal LLM would not. At least if we are careful to remove all implied consciousness, which excludes most texts by humans. But if we have a system that can interact with some environment, have some hidden state, observe some of its own hidden state, and can maybe interact with other such systems (or maybe humans, such as in a game), and train with self-play, then I wouldn’t be surprised if it would report inner experiences.
It might be that we know a language that originally didn’t have personal pronouns: Pirahã. And a culture that places a high value on non-coercion, which means that expectations of conforming are absent. There is an aspect of consciousness, the awareness of the difference between expected and actual behavior, that might just not develop in such a context.
There is no problem with “I”: it makes sense to refer to the human speaking as “I”. The problem is with ascribing non-physical, irreducible causality. Blame and responsibility are (comparatively) effective coordination mechanisms; that’s why societies that had them outcompeted those that didn’t. It doesn’t matter that the explanation is non-physical.
> This is subject to the refutation: what experiences that illusion?
I can retort: Yes, what is that thing that experiences itself in humans? You don’t seem to have an answer.
Clearly, a process of experiencing is going on in humans. I don’t dispute that. But that is strictly a different argument.
> Neither are humans doing “the same thing”, i.e. pretending to be conscious to follow the lead of everyone else pretending to be conscious.
You think so, but you would do the same if you had learned it from early childhood. The same as with many other collective misinterpretations of reality.
> There is no way for such a collective pretence to get started.
There is. Behaving as if people were conscious may be useful for collaboration. Interpreting other humans as deliberate agents is a useful, even natural abstraction.
> This is the refutation of p-zombies.
No. I don’t claim that there could be p-zombies. Again: A process of experiencing does go on.
> There could be a few who genuinely are not conscious, and have not realised that people mean literally what they say when they talk about their thoughts. But it can’t be all of us, or even most of us.
Sure, some people do not reflect much. Humans Who Are Not Concentrating Are Not General Intelligences. But again: misinterpretations of reality are common, esp. if they are useful. See: religion. I think original Buddhism (the Pali Canon) has come closest to what goes on in the mind.
This indicates that how we breathe plays a big role in CO2 uptake: shallow or full breaths, small or large volumes, the speed of exhaling. Breathing technique is a key skill of divers and can be learned. I just started reading the book Breath, which seems to cover this in depth.
Ah, very related: exhaled air contains 44,000 PPM CO2 and is used for mouth-to-mouth resuscitation without problems.
On the other hand, humans are doing the same thing. Consciousness (at least some aspects of it) could plausibly be a useful illusion too.
I agree that there is a difference between LLMs and humans, at least in that humans learn online while LLMs learn in batch, but that’s a small difference. We need to find a better answer to the ethical questions.
Ethically, I’m OK with these experiments right now.
I’m not ignoring them. I’m just comparing danger base rates. That’s why “generally”. The benefits of each activity depend on the user.
Ah, yes. I agree that motorcycles are more dangerous than bicycles. Generally, avoiding dangerous activities (like those toward the lower end of this chart) seems like a good idea.
Agree. Please ask.
Minor nitpick:
> You wouldn’t do anything absurdly dangerous, like take unknown drugs or ride a bike without a helmet.
Riding a bike without a helmet is not “absurdly dangerous”, mostly because riding a bike is not very dangerous to begin with (unless you are doing absurdly dangerous stunts with it, but then it’s a different question, and anything that reduces injuries helps). Helmets do reduce injuries by a factor of about three.
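To put the relative and absolute numbers side by side, a toy calculation (the injury rate below is an illustrative placeholder, not a sourced statistic):

```python
# Illustrative numbers only; the point is relative vs. absolute risk.
injuries_per_million_km = 1.0  # assumed baseline cycling injury rate
helmet_reduction_factor = 3    # "helmets reduce injuries by a factor of about three"

without_helmet = injuries_per_million_km
with_helmet = injuries_per_million_km / helmet_reduction_factor

# The relative reduction is large, but the absolute difference stays small
# because the baseline risk is small to begin with.
print(f"without helmet: {without_helmet:.2f} injuries per million km")
print(f"with helmet:    {with_helmet:.2f} injuries per million km")
print(f"difference:     {without_helmet - with_helmet:.2f} injuries per million km")
```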
The list is short enough to repost in full here. Makes it easier to comment.
[as requested reposted in this thread]
On noticing distractions: meditation practice, esp. the Buddhist practice based on the Pali Canon, has some concepts that you may find useful:
Subtle Distraction (sukhuma vicāra) is a nuanced mental activity that doesn’t completely pull attention away from the meditation object but dilutes the focus. It is the hardest to notice until one has developed mindfulness (sati) and clear comprehension (sampajañña). Note that vicāra translates to applied thought or examination and here refers to attention applied to something else.
Gross Distraction (oḷārika vicāra) is the distraction we sometimes catch, e.g., when we notice that we skipped a sentence in a book, and then return to the text. It is the mind’s tendency to engage with sensory or mental phenomena that significantly divert attention away from the object, esp. the often comparatively boring meditation object.
Forgetting (vicikicchā or musitasmim) happens when we lose the (meditation) object from our attention altogether. Only a while later do we realize that we are still holding the book or sitting on our pillow. Musitasmim translates to forgetting or negligence: the object has slipped from short-term memory for lack of concentration. Another term, vicikicchā, means doubt, which indicates that the purpose of our action was not strong enough to motivate us and we, at least subconsciously, doubted its value.
In The Mind Illuminated, dealing with these levels of distraction is a core aspect of the early stages of meditation practice; the book includes an illustration of them.
There are forums online that discuss the practice of noticing distractions based on the book. Here is one random example.
Here is the part of my notes from meditation retreats in 2019 and 2022 that summarizes the practice of concentration meditation and dealing with distractions—in a short Sazen: Notice, name, accept, and return to the object.
Would it be possible to determine the equivalent dimension of a layer of the human language cortex with this method? You can’t make API calls to a brain, but you can prompt people and estimate the probability of a response token by repeated sampling, maybe across different people.
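A sketch of how that could look, under the assumption that you can give many people the same cloze prompt and record single-word completions (all data below is made up; the dimension estimate follows the same rank-of-the-distribution-matrix logic as the LLM method):

```python
import numpy as np
from collections import Counter

def next_word_distribution(responses, vocabulary):
    """Estimate P(next word | prompt) from repeated human samples."""
    counts = Counter(responses)
    total = sum(counts.values())
    return np.array([counts[w] / total for w in vocabulary])

# Hypothetical data: for each prompt, single-word completions collected
# from many different people.
vocabulary = ["dog", "cat", "bird", "fish"]
responses_per_prompt = [
    ["dog", "dog", "cat", "dog", "bird"],
    ["cat", "cat", "cat", "dog", "fish"],
    ["bird", "fish", "bird", "bird", "cat"],
]

# One distribution per prompt, stacked into a (prompts x vocabulary) matrix.
D = np.stack([next_word_distribution(r, vocabulary)
              for r in responses_per_prompt])

# The number of singular values clearly above the sampling-noise floor
# bounds the dimensionality of whatever process produced the distributions.
print(np.linalg.svd(D, compute_uv=False))
```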