I couldn’t swallow Eliezer’s argument; I tried to read Guzey but couldn’t stay awake; Hanson’s argument made me feel ill; and I’m not qualified to judge Caplan.
Mitchell_Porter
Also astronomers: anything heavier than helium is a “metal”.
In Engines of Creation (“Will physics again be upended?”), @Eric Drexler pointed out that prior to quantum mechanics, physics had no calculable explanations for the properties of atomic matter. “Physics was obviously and grossly incomplete… It was a gap not in the sixth place of decimals but in the first.”
That gap was filled, and it’s an open question whether the truth about the remaining phenomena can be known by experiment on Earth. I believe in trying to know, and it’s very possible that some breakthrough in e.g. the foundations of string theory or the hard problem of consciousness will have decisive implications for the interpretation of quantum mechanics.
If there’s an empirical breakthrough that could do it, my best guess is some quantum-gravitational explanation for the details of dark matter phenomenology. But until that happens, I think it’s legitimate to think deeply about “standard model plus gravitons” and ask what it implies for ontology.
In applied quantum physics, you have concrete situations (the Stern-Gerlach experiment is a famous one), theory gives you the probabilities of outcomes, and repeating the experiment many times gives you frequencies that converge on the probabilities.
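To make the probabilities-to-frequencies point concrete, here is a toy simulation (my own illustrative setup, not tied to any particular dataset): for a spin-1/2 particle prepared at angle θ to the measurement axis, theory predicts P(up) = cos²(θ/2), and repeated simulated measurements converge on that number.

```python
import math
import random

def p_up(theta):
    """Theoretical probability of measuring spin-up for a spin-1/2
    particle prepared at angle theta to the measurement axis."""
    return math.cos(theta / 2) ** 2

def simulate(theta, n, seed=0):
    """Repeat the measurement n times and return the observed frequency."""
    rng = random.Random(seed)
    ups = sum(rng.random() < p_up(theta) for _ in range(n))
    return ups / n

theta = math.pi / 3                  # 60 degrees: predicted P(up) = 0.75
print(p_up(theta))                   # theoretical probability
print(simulate(theta, 100_000))      # observed frequency, close to 0.75
```

With 100,000 trials the observed frequency sits within a fraction of a percent of the theoretical value, which is the sense in which the theory is empirically testable.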
Can you, or Chris, or anyone, explain, in terms of some concrete situation, what you’re talking about?
Congratulations to Anthropic for getting an LLM to act as a Turing machine—though that particular achievement shouldn’t be surprising. Of greater practical interest is how efficiently it can act as a Turing machine, and how efficiently we should want it to act. After all, it’s far more efficient to implement your Turing machine as a few lines of specialized code.
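For comparison, here is what “a few lines of specialized code” looks like: a minimal Turing-machine stepper plus a rule table. The example machine (my own toy, nothing to do with Anthropic’s setup) appends a 1 to a unary string, i.e. computes n → n+1.

```python
from collections import defaultdict

def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine given as a dict: (state, read) -> (write, move, next)."""
    tape = defaultdict(lambda: blank, enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = (tape[i] for i in range(min(tape), max(tape) + 1))
    return "".join(cells).strip(blank)

# Toy rule table: unary increment (append one more "1").
increment = {
    ("start", "1"): ("1", "R", "start"),  # skip over the existing 1s
    ("start", "_"): ("1", "R", "done"),   # write one more 1 at the end
    ("done",  "_"): ("_", "R", "halt"),   # then halt
}
print(run_tm(increment, "111"))  # → "1111"
```

The entire exact mechanism fits in a dozen lines; the efficiency question is how many tokens of LLM computation it takes to emulate each of these steps.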
On the other hand, the ability to be a (universal) Turing machine could, in principle, be the foundation of the ability to reliably perform complex rigorous calculation and cognition—the kind of tasks where there is an exact right answer, or exact constraints on what is a valid next step, and so the ability to pattern-match plausibly is not enough. And that is what people always say is missing from LLMs.
I also note the claim that “given only existing tapes, it learns the rules and computes new sequences correctly”. Arguably this ability is even more important than the ability to follow rules exactly, since this ability is about discovering unknown exact rules, i.e., the LLM inventing new exact models and theories. But there are bounds on the ability to extrapolate sequences correctly (e.g. complexity bounds), so it would be interesting to know how closely Claude approaches those bounds.
Standard model coupled to gravitons is already kind of a unified theory. There are phenomena at the edges (neutrino mass, dark matter, dark energy) which don’t have a consensus explanation, as well as unresolved theoretical issues (Higgs finetuning, quantum gravity at high energies), but a well-defined “theory of almost everything” does already exist for accessible energies.
OK, maybe I understand. If I put it in my own words: you think “consciousness” is just a word denoting a somewhat arbitrary conjunction of cognitive abilities, rather than a distinctive actual thing which people are right or wrong about in varying degrees, and that the hard problem of consciousness results from reifying this conjunction. And you suspect that LeCun, in his own thinking, denies e.g. that LLMs can reason because he has added unnecessary extra conditions to his personal definition of “reasoning”.
Regarding LeCun: It strikes me that his best-known argument about the capabilities of LLMs rests on a mathematical claim, that in pure autoregression, the probability of error necessarily grows. He directly acknowledges that if you add chain of thought, it can ameliorate the problem… In his JEPA paper, he discusses what reasoning is, just a little bit. In Kahneman’s language, he calls it a system-2 process, and characterizes it as “simulation plus optimization”.
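His mathematical claim can be sketched with toy numbers (ε here is an assumed per-token derailment probability of my own choosing, not a figure from LeCun): if each autoregressive step independently derails with probability ε, the chance that an n-token answer stays entirely on track is (1 − ε)^n, which decays exponentially in n.

```python
def p_correct(eps, n):
    """Probability an n-token autoregressive output stays on track,
    assuming each token independently derails with probability eps.
    (A deliberately simplified model of the error-compounding argument.)"""
    return (1 - eps) ** n

for n in (10, 100, 1000):
    print(n, p_correct(0.01, n))
# Even a 1% per-token error rate leaves only ~37% of 100-token answers
# fully correct, and almost no 1000-token answers.
```

The independence assumption is exactly what chain-of-thought (and any error-correcting mechanism) attacks, which is why LeCun’s acknowledgment of that amelioration matters.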
Regarding your path to eliminativism: I am reminded of my discussion with Carl Feynman last year. I assume you both have subjective experience that is made of qualia from top to bottom, but also have habits of thought that keep you from seeing this as ontologically problematic. In his case, the sense of a problem just doesn’t arise and he has to speculate as to why other people feel it; in your case, you felt the problem, until you decided that an AI civilization might spontaneously develop a spurious concept of phenomenal consciousness.
As for me, I see the problem and I don’t feel a need to un-see it. Physical theory doesn’t contain (e.g.) phenomenal color; reality does; therefore we need a broader theory. The truth is likely to sound strange, e.g. there’s a lattice of natural qubits in the cortex, the Cartesian theater is how the corresponding Hilbert space feels from the inside, and decohered (classical) computation is unconscious and functional only.
So long as generative AI is just a cognitive prosthesis for humans, I think the situation is similar to social media, or television, or print, or writing; something is lost, something is found. The new medium has its affordances, its limitations, its technicalities, it does create a new layer of idiocracy; but people who want to learn can learn, and people who master the novelty, and become power users of the new medium, can do things that no one in history was previously able to do. In my opinion, humanity’s biggest AI problem is still the risk of being completely replaced, not of being dumbed down.
I would like to defer any debate over your conclusion for a moment, because that debate is not new. But this is:
I think one of the main differences in worldview between LeCun and me is that he is deeply confused about notions like what is true “understanding,” what is “situational awareness,” and what is “reasoning,” and this might be a catastrophic error.
This is the first time I’ve heard anyone say that LeCun’s rosy views of AI safety stem from his philosophy of mind! Can you say more?
Completely wrong conclusion—but can you also explain how this is supposed to relate to Yann LeCun’s views on AI safety?
AI futurists … We are looking for a fourth speaker
You should have an actual AI explain why it doesn’t want to merge with humans.
Would you say that you yourself have achieved some knowledge of what is true and what is good, despite irreducibility, incompleteness, and cognitive bias? And that was achieved with your own merely human intelligence. The point of AI alignment is not to create something perfect, it is to tilt the superhuman intelligence that is coming, in the direction of good things rather than bad things. If humans can make some progress in the direction of truth and virtue, then super-humans can make further progress.
Many people outside of academic philosophy have written up some kind of philosophical system or theory of everything (e.g. see vixra and philpapers). And many of those works would, I think, sustain at least this amount of analysis.
So the meta-question is, what makes such a work worth reading? Many such works boil down to a list of the author’s opinions on a smorgasbord of topics, with none of the individual opinions actually being original.
Does Langan have any ideas that have not appeared before?
“i ain’t reading all that
with probability p i’m happy for u tho
and with probability 1-p sorry that happened”
What things decrease blood flow to the brain?
I found an answer to the main question that bothered me, which is the relevance of a cognitive “flicker frequency” to suffering. The idea is that this determines the rate of subjective time relative to physical time (i.e. the number of potential experiences per second); and that is relevant to magnitude of suffering, because it can mean the difference between 10 moments of pain per second and 100 moments of pain per second.
As for the larger issues here:
I agree that ideally one would not have farming or ecosystems in which large-scale suffering is a standard part of the process, and that a Jain-like attitude, which extends this perspective even to insects, makes sense.
Our understanding of pain and pleasure feels very poor to me. For example, can sensations be inherently painful, or does pain also require a capacity for wanting the sensation to stop? If the latter is the case, then avoidant behavior triggered by a damaging stimulus does not actually prove the existence of pain in an organism; it can just be a reflex installed by Darwinian selection. Actual pain might only exist when the reflexive behavior has evolved to become consciously regulated.
black soldier flies… feel pain around 1.3% as [intensely] as us
At your blog, I asked if anyone could find the argument for this proposition. In your reply, you mention the linked report (and then you banned me, which is why I am repeating my question here). I can indeed find the number 0.013 on the linked page, and there are links to other documents and pages. But they refer to concepts like “welfare range” and “critical flicker-fusion frequency”.
I suppose what I would like to see is (1) where the number 0.013 comes from (2) how it comes to be interpreted as relative intensity of pain rather than something else.
Singularituri te salutant
You can imagine making a superintelligence whose mission is to prevent superintelligences from reshaping the world, but there are pitfalls, e.g. you don’t want it deeming humanity itself to be a distributed intelligence that needs to be stopped.
In the end, I think we need lightweight ways to achieve CEV (or something equivalent). The idea is there in the literature; a superintelligence can read and act upon what it reads; the challenge is to equip it with the right prior dispositions.
I offer no consensus, but my own opinions:
0-5 years.
There will be a first ASI that “rules the world” because its algorithm or architecture is so superior. If there are further ASIs, that will be because the first ASI wants there to be.
Contingent.
For an ASI you need the equivalent of CEV: values complete enough to govern an entire transhuman civilization.
Offense wins.
It is possible, but would require all the great powers to be convinced, and every month it is less achievable, owing to proliferation. The open sourcing of Llama-3 400b, if it happens, could be a point of no return.
These opinions, except the first and the last, predate the LLM era, and were formed from discussions on Less Wrong and its precursors. Since ChatGPT, the public sphere has been flooded with many other points of view, e.g. that AGI is still far off, that AGI will naturally remain subservient, or that market discipline is the best way to align AGI. I can entertain these scenarios, but they still do not seem as likely as: AI will surpass us, it will take over, and this will not be friendly to humanity by default.