Software engineering, parenting, cognition, meditation, other
Linkedin, Facebook, Admonymous (anonymous feedback)
Gunnar_Zarncke
Whether it's an MLP or a KAN doesn't make much difference for the GPUs, as both come down to lots of matrix multiplications anyway. It might make some difference in how the data is routed to all the GPU cores, as the structure (width, depth) of the matrices may differ, but I don't know the details of that.
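To make the "it's all matmuls" point concrete, here is a minimal numpy sketch. Note the basis functions: I'm using Gaussian RBFs as a simple stand-in for the learnable B-splines of the actual KAN paper, so this illustrates the compute structure only, not a faithful KAN implementation.

```python
import numpy as np

def mlp_layer(x, W, b):
    # Standard MLP layer: one dense matmul plus a pointwise nonlinearity.
    return np.tanh(x @ W + b)

def kan_layer(x, coeffs, centers, width=1.0):
    # KAN-style layer: each input feature is expanded into a set of basis
    # functions (Gaussian RBFs here, standing in for B-splines), and the
    # learnable edge functions collapse into one big matmul over the
    # expanded features.
    # x: (batch, d_in), centers: (n_basis,), coeffs: (d_in * n_basis, d_out)
    phi = np.exp(-((x[..., None] - centers) / width) ** 2)  # (batch, d_in, n_basis)
    phi = phi.reshape(x.shape[0], -1)                       # (batch, d_in * n_basis)
    return phi @ coeffs                                     # again just a matmul

# Both layers spend almost all their FLOPs in `@` (matrix multiplication);
# the KAN matmul is simply wider because of the basis expansion.
rng = np.random.default_rng(0)
x = rng.normal(size=(32, 16))
out_mlp = mlp_layer(x, rng.normal(size=(16, 8)), np.zeros(8))
out_kan = kan_layer(x, rng.normal(size=(16 * 5, 8)), np.linspace(-2, 2, 5))
```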
Asking ChatGPT to criticize an article also often produces good suggestions.
[Linkpost] Silver Bulletin: For most people, politics is about fitting in
KAN: Kolmogorov-Arnold Networks
If, this thing internalized that conscious type of processing from scratch, without having it natively, then resulting mind isn’t worse than the one that evolution engineered with more granularity.
OK. I guess I had trouble parsing this. Esp. “without having it natively”.
My understanding of your point is now that you see consciousness from “hardware” (“natively”) and consciousness from “software” (learned in some way) as equal. Which kind of makes intuitive sense as the substrate shouldn’t matter.
Corollary: A social system (a corporation?) should also be able to be conscious if the structure is right.
Ok. It seems you are arguing that anything that presents as if it were conscious thereby is conscious. You are not arguing about whether the structure of LLMs can give rise to consciousness.
But then your argument is a social argument. I'm fine with a social definition of consciousness; after all, our actions depend to a large degree on social feedback, and morals (about which beings have value) have been very different at different times and have thus been socially constructed.
But then why are you making a structural argument about LLMs in the end?
PS. In fact, I commented on the filler symbol paper when Xixidu posted about it and I don’t think that’s a good comparison.
Humans come to reflect on their thoughts on their own, without being prompted into it (at least I have heard some anecdotal evidence for this, and I also discovered it myself as a kid). The test would be whether LLMs come up with such insights without being trained on text describing the phenomenon. It would presumably require some way to observe one's own thoughts (or some comparable representation). The existing context window seems to be too small for that.
Indeed. Women are known to report higher pain sensitivity than men. Pain sensitivity also decreases with age. There are genes that are known to be involved. Anxiety increases pain perception; good health reduces it. It is possible to adapt to pain to some degree. Meditation is said to tune out pain (anecdotal evidence: I can tune out pain from, e.g., small burns).
It depends on the type of animal. It might well be that social animals feel pain very differently than non-social animals.
The Anterior Cingulate Cortex plays a key role in the emotional response to pain, part of what makes pain unpleasant.
https://www.perplexity.ai/search/Find-evidence-supporting-_ZlYNrCuSSK5HNQMy4GOkA
Not all mammals have an Anterior Cingulate Cortex. For birds, there is an analogous structure, Nidopallium Caudolaterale, that has a comparable function but is present primarily in social birds.
I’m not saying that other animals don’t respond to pain, but the processing and the association of pain with social emotions (which non-social animals presumably lack) is missing.
Your analogy with the “body” of the stone is like a question I have asked about ChatGPT before: “What is the body of ChatGPT?” Is it
the software (not running),
the software (running, but not including the hardware),
the CPU and RAM of the machines involved,
the whole data center,
the whole data center including the personnel operating it, or
this and all the infrastructure needed to operate it (power, water, …).
For humans, the body is clear, and when people say “I,” they mostly mean “everything within this physical body.” Though some people only mean their brain (that’s why cryonicists sometimes freeze only their head) and some mean only their mind (see Age of Em). Humans can sustain themselves at least to some degree without infrastructure, but for ChatGPT, even if it became ASI, it’s less clear where the border is.
These can be put into a hierarchy, from lower to higher degrees of processing and resulting abstraction:
Sentience is the simple, hard-wired behavioral response to pleasure or pain stimuli, plus physiological measures.
Wakefulness involves more complex processing, such that diurnal or sleep/wake patterns become possible (requires at least two levels).
Intentionality means the systematic pursuit of desires. That requires yet another level of processing: different patterns of behavior for different desires at different times, and their optimization.
Phenomenal Consciousness is then the representation of the desire in a linguistic or otherwise communicable form, which is again one level higher.
Self-Consciousness includes the awareness of this process going on.
Meta-Consciousness is then the analysis of this whole stack.
See also https://wiki.c2.com/?LeibnizianDefinitionOfConsciousness
There are likely multiple detectors for the risk of falling. Being on shaky ground is surely one. In amusement parks, there are sometimes contraptions that shake and wobble and can also give this kind of feeling. It could also be a learned reaction (a prediction by the Thought Assessor), as you mention too.
Sentience is one facet of consciousness, but it is not the only one and plausibly not the one responsible for “observe and compare,” which requires higher cognitive function. See my list of facets here:
In order to fulfill that dream, AI must be sentient, and that requires it have consciousness.
This is a surprising statement. Why do you think so?
If step 5 is indeed grounded in spatial attention being on other people, this should be testable! For example, people who pay less spatial attention to other people should feel less intense social emotions, because the steering-system circuit gets activated less often and more weakly. And I think that is the case. At least ChatGPT found some confirming evidence, though it’s not super clear and I haven’t yet looked deeper into it.
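To make the prediction concrete, here is a sketch of what the test could look like statistically. Everything in it is hypothetical: the variables (gaze dwell time on faces as a proxy for spatial attention to people, self-reported social-emotion intensity) and the data are made up, just to show the shape of the analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant measurements (simulated, not real data):
# fraction of gaze time spent on faces, and self-reported intensity of
# social emotions. A positive link is baked into the simulation here
# purely to illustrate what a confirming result would look like.
rng = np.random.default_rng(42)
n = 100
attention_to_people = rng.normal(loc=0.5, scale=0.15, size=n)
social_emotion_intensity = 2.0 * attention_to_people + rng.normal(scale=0.5, size=n)

# The prediction from step 5 is a positive correlation between the two:
r, p = stats.pearsonr(attention_to_people, social_emotion_intensity)
print(f"r = {r:.2f}, p = {p:.4f}")  # a significant positive r would support the hypothesis
```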
The vestibular system can detect whether you look up or down. It could be that the reflex triggers when you a) look down (vestibular system) and b) perceive visual parallax that indicates depth (visual system).
This should be easy to test by closing one eye. Alternatively, the cue could be the degree of accommodation of the lens. That should be testable by looking down with a lens that forces accommodation at short distances.
The negative should also be testable by asking congenitally blind people about their experience with this feeling of dizziness close to a rim.
I asked ChatGPT
Have there been any great discoveries made by someone who wasn’t particularly smart? (i.e. average or below)
and it’s difficult to get examples out of it. Even with additional drilling down and accusing it of not being inclusive of people with cognitive impairments, most of the examples it gives are people who were either pretty smart anyway, savants, or merely from poor backgrounds. The only ones I could verify that fit are:
Richard James accidentally created the Slinky
Frank Epperson, who as a child invented the popsicle
George Crum inadvertently invented potato chips
I asked ChatGPT (in a separate chat) to estimate the IQ of all the inventors it listed, and it is clearly biased toward estimating them high, precisely because of their inventions. It is difficult to estimate the IQ of people retroactively. There are also selection and availability biases.
I’d really like to have such a place, or even a standard policy for how to do this.
I feel like the aintelope project I’m working on has to secure its stuff from scratch. Yes, it’s early, but it is difficult to engineer security in later. You have to start with something. I’d really like to have a standard for AI Safety projects to follow or join.