There are fake/fictional presidents in the training data.
JavierCC
[Question] Help finding a blog whose name I don’t remember
Which labs are those? OpenAI, Anthropic, maybe DeepMind? What else?
The point you brought up seemed to rest heavily on Hinton’s claims, so his opinions on timelines and AI progress seem quite important here.
Do you have any recent source on his claims about AI progress?
So in your model how much of the progress to AGI can be made just by adding more compute + more data + working memory + algorithms that ‘just’ keep up with the scaling?
Specifically, do you think that self-reflective thought already emerges from adding those?
Can you cite any source that provides evidence for that conclusion?
The process of evolution optimised the structures of the brain themselves across generations; training is equivalent only to the development of the individual. And the structures of the brain seem not to be determined by development alone. That’s one reason why I said “apparent complexity”. From Yudkowsky:
“Metacognitive” is the optimization that builds the brain—in the case of a human, natural selection; in the case of an AI, either human programmers or, after some point, the AI itself.
Yeah, but I would need more specificity than just an example of a brain with a different design.
[Question] How do short-timeliners reason about the differences between brains and AI?
But you’ve generalised your position on perspective beyond conscious beings. My understanding is that perspective is not reducible to non-perspective facts in the theory because the perspective is contingent, but nothing there explicitly refers to consciousness.
You can adopt, mutatis mutandis, a different perspective in the description of a problem and arrive at the right conclusion. There’s no appeal to a phenomenal perspective there.
The epistemic limitations of minds that map to the idea of a perspective-centric epistemology and metaphysics come from facts about brains.
Your claims about the limitations on knowing about consciousness and free will, based on the primitivity of perspective, seem fairly arbitrary to me.
The perspective that we are taking is a primitive, but I don’t understand why you connect that with consciousness, given that the perspective is completely independent of any claims about it being conscious. I don’t see how to link the two non-arbitrarily; the mechanisms of consciousness exist regardless of the perspective taken. The epistemic limitations come from facts about brains, not from an underlying notion of perspective.
And in the case of free will, there’s no reason why we cannot have a third-person account of what we mean by free will. There’s no problematic loop.
People turn these things into agents easily already, and they already contain goal-driven subagent processes.
Sorry, what is this referring to exactly?
The video in the link is not available.
I don’t like the word “illusionism” here because people just get caught on the obvious semantic ‘contradiction’ and always complain about it.
The arguments based on perceptual illusions are generally meant to show that our perception is highly constructed by the brain; it’s not something ‘simple’. The point of illusionism is just that we are confused about what the phenomenological properties of qualia really are qua qualia, because of wrong ideas that come from introspection.
I’ve been checking Joscha Bach’s blog these days, and I found this blog post on the topic, which I think goes into interesting depth on the question:
I’m curious, who is the man that says that it’s fine for AIs to replace humanity because there will be more interesting forms of consciousness?
To what extent are humans themselves evidence of GI alignment, though? A human can acquire values that disagree with those of the humans who taught them those values, just by having new experiences or knowledge, to the point of even desiring the complete opposite of what their peers want (like human progress vs. human extinction). Doesn’t that mean that humans are not robustly aligned?
Who knows, maybe it was your right hemisphere.
Shout-outs to them, if so. Almost definitely the first time someone has directly referred to them, that’s got to be very exciting.
Even if you are not literally their right hemisphere (not that you would know, of course), if you are there and have access to high-level knowledge of the world: hi, good job all these years!
Do you think you have experienced a dissociative crisis at any point in your life? I mean the sensations of derealisation/depersonalisation, not other symptoms, and it doesn’t need to have been ‘strong’ at all.
I ask because those sensations are not in any obvious way about processing sensory data, and because of the feeling of detachment from reality that comes with them. So I was curious if you could identify anything like that.
But conscious states are strongly determined by brain states, as far as we can check. The argument people use against fully identifying the two comes down to deriving the metaphysical nature of qualia from their phenomenological properties. It seems to me that it is epistemically problematic to argue against objective claims with intuitions about something that we cannot even contrast with anything; we just have our intuition about phenomenology, with no conceivable way to track the processes behind the phenomenon from that intuition. This is why people imagine qualia to be individual entities and then think they can remove them ceteris paribus, or that they can’t be tracked by Laplace’s demon.
Consciousness doesn’t need to be fundamentally distinct from non-consciousness. Rocks can’t monitor their own states at all, but computers can; that doesn’t mean a fundamentally new property was added when the rock was turned into a computer. If we stop trying to derive metaphysics from phenomenology, the same account can be applied to consciousness. Then whatever processes track what we feel consciousness to be will be trackable by Laplace’s demon.
I don’t have any tips for this, but as a note, this and the idea that the self is not ‘real’ (that there’s no permanent Cartesian homunculus ‘experiencing’) caused me a lot of dread at the age of 13-14. Sometimes I stop to think about it and I’m amazed at how little it bothers me nowadays.