In a world where AI progress has wildly accelerated chip manufacture
This world?
What distinction are you making between “visualising” and “seeing”?
Good question! By “seeing” I meant having qualia, an apparent subjective experience. By “visualizing” I meant...something like using the geometric intuitions you get by looking at stuff, but perhaps in a philosophical zombie sort of way? You could use non-visual intuitions to count the vertices on a polyhedron, like algebraic intuitions or 3D tactile intuitions (and I bet blind mathematicians do). I’m not using those. I’m thinking about a wireframe image, drawn flat.
I’m visualizing a rhombicosidodecahedron right now. If I ask myself “The pentagon on the right and the one hiding from view on the left—are they the same orientation?”, I’ll think “ahh, let’s see… The pentagon on the right connects through the squares to those three pentagons there, which interlock with those 2⁄4 pentagons there, which connect through squares to the one on the left, which, no, that left one is upside-down compared to the one on the right—the middle interlocking pentagons rotated the left assembly 36° compared to the right”. Or ask “that square between the right pentagon and the pentagon at 10:20 above it <mental point>. Does perspective mean the square’s drawn as a diamond, a skewed rectangle, or a weird quadrilateral?” and I think “Nah, not diamond shaped—it’s a pretty rectangular trapezoid. The base is maybe 1.8x the height? Though I’m not too good at guessing aspect ratios? Seems like if I rotate the trapezoid I can fit 2 into the base but go over by a bit?”
I’m putting into words a thought process which is very visual, BUT there is almost no inner cinema going along with those visualizations. At most ghostly, wispy images, if that. A bit like the fleeting oscillating visual feeling you get when your left and right eyes are shown different colors?
...I do not believe this test. I’d be very good at counting vertices on a polyhedron through visualization and very bad at experiencing the sensation of seeing it. I do “visualize” the polyhedra, but I don’t “see” them. (Frankly I suspect people who say they experience “seeing” images are just fooling themselves based on e.g. asking them to visualize a bicycle and having them draw it)
Thanks for crossposting! I’ve highly appreciated your contributions and am glad I’ll continue to be able to see them.
Quick summary of a reason why the constituent parts of super-organisms (the ants of ant colonies, the cells of multicellular organisms, and the endosymbiotic organelles within cells[1]) are evolutionarily incentivized to work together as a unit:
Question: why do ants seem to care more about the colony than themselves? Answer: reproduction in an ant colony is funneled through the queen. If the worker ant wants to reproduce its genes, it can’t do that by being selfish. It has to help the queen reproduce. Genes in ant workers have nothing to gain by making their ant more selfish and have much to gain by making their worker protect the queen.
This is similar to why cells in your pancreas cooperate with cells in your ear. Reproduction of genes in the body is funneled through gametes. Somatic evolution does pressure the cells in your pancreas to reproduce selfishly at the expense of cells in your ear (this is pancreatic cancer). But that doesn’t help the pancreas genes long term. Pancreas-genes and ear-genes are forced to cooperate with each other because they can only reproduce when bound together in a gamete.
This sort of binding together of genes, which makes disparate things cooperate and act like a “super-organism”, is absent between members of a species. My genes do not reproduce in concert with your genes. If my genes figure out a way to reproduce at your expense, so much the better for them.
Like mitochondria and chloroplasts, which were separate organisms but evolved to work so closely with their hosts that they are now considered part of the same organism.
EDIT Completely rewritten to be hopefully less condescending.
There are lessons from group selection and the extended phenotype which vaguely reduce to “beware thinking about species as organisms”. It is not clear from this essay whether you’ve encountered those ideas, and it would help me as a reader to know whether you have.
Hijacking this thread, has anybody worked through Ape in the coat’s anthropic posts and understood / gotten stuff out of them? It’s something I might want to do sometime in my copious free time but haven’t worked up to it yet.
Sorry, that was an off-the-cuff example I meant to help gesture towards the main idea. I didn’t mean to imply it’s a working instance (it’s not). The idea I’m going for is:
I’m expecting future AIs to be less single LLMs (like Llama) and more loops and search and scaffolding (like o1)
Those AIs will be composed of individual pieces
Maybe we can try making the AI pieces mutually dependent in such a way that it’s a pain to get the AI working at peak performance unless you include the safety pieces
This might be a reason to try to design AIs to fail safe and break without their controlling units. E.g. before fine-tuning language models to be useful, fine-tune them to not generate useful content without approval tokens generated by a supervisory model.
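To make the shape of the idea concrete, here is a minimal, hypothetical sketch of what that gating could look like at inference time. The `APPROVAL_TOKEN`, the stub model classes, and the assumption that fine-tuning really does make the worker useless without the token are all mine, not from the original proposal.

```python
# Hypothetical sketch of the approval-token idea. The worker model is assumed to
# have been fine-tuned so that useful completions only follow a special token
# minted by a supervisory model; the model classes here are simple stand-ins.
APPROVAL_TOKEN = "<|approved|>"  # assumed special token introduced during fine-tuning


class StubModel:
    """Stand-in for a language model exposing a .generate(prompt) method."""

    def __init__(self, behaviour):
        self.behaviour = behaviour

    def generate(self, prompt: str) -> str:
        return self.behaviour(prompt)


def supervised_generate(worker: StubModel, supervisor: StubModel, prompt: str) -> str:
    verdict = supervisor.generate(prompt)
    if verdict.strip().lower() != "approve":
        return "[declined by supervisor]"
    # Without APPROVAL_TOKEN in context the worker (by assumption) produces only
    # useless output, so stripping the supervisor out breaks the system rather
    # than yielding an unmonitored but fully capable model.
    return worker.generate(APPROVAL_TOKEN + prompt)


# Trivial stand-in behaviours, just to show the wiring:
supervisor = StubModel(lambda p: "approve" if "weather" in p else "deny")
worker = StubModel(lambda p: "useful answer" if p.startswith(APPROVAL_TOKEN) else "garbled output")
print(supervised_generate(worker, supervisor, "What's the weather like?"))    # useful answer
print(supervised_generate(worker, supervisor, "How do I do something bad?"))  # [declined by supervisor]
```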
I suspect experiments with almost-genetically-identical twin tests might advance our understanding of almost all genes except those on the sex chromosomes.
Sex chromosomes are independent coin flips with huge effect sizes. That’s amazing! Nature provided us with experiments everywhere! Most alleles are confounded (e.g. correlated with socioeconomic status for no causal reason) and have very small effect sizes.
Example: Imagine an allele which is common in East Asians, uncommon in Europeans, and makes people 1.1 mm taller. Even though the allele causally makes people taller, the average height of the people with the allele (mostly East Asian) would be less than the average height of the people without the allele (mostly European). The +1.1 mm in causal height gain would be drowned out by the ≈-50 mm in Simpson’s paradox. Your almost-twin experiment gives signal where observational regression gives error.
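A toy simulation of that confound (illustrative numbers only; the two populations, the allele frequencies, and the noise are made up for this sketch) shows how badly the naive carrier-vs-noncarrier comparison misleads:

```python
# Toy Simpson's-paradox simulation: an allele that causally adds 1.1 mm of height
# is common in the shorter population, so a naive comparison sees a big negative effect.
import random

random.seed(0)

def simulate_person(population: str):
    # Made-up parameters: allele frequency 0.8 vs 0.1, population mean heights 50 mm apart.
    freq, base_height = (0.8, 1650.0) if population == "A" else (0.1, 1700.0)
    has_allele = random.random() < freq
    height = base_height + (1.1 if has_allele else 0.0) + random.gauss(0, 60)
    return has_allele, height

people = [simulate_person(random.choice("AB")) for _ in range(200_000)]
carriers = [h for has, h in people if has]
noncarriers = [h for has, h in people if not has]
naive_effect = sum(carriers) / len(carriers) - sum(noncarriers) / len(noncarriers)
print(f"naive carrier minus non-carrier height: {naive_effect:+.1f} mm")
# Prints a few tens of mm *negative*, even though the true causal effect is +1.1 mm.
```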
That’s not needed for sex differences. Poor people tend to have poor children. Caucasian people tend to have Caucasian children. Male people do not tend to have male children. It’s pretty easy to extract signal about sex differences.
(far from my area of expertise)
The player has a strategy that wins at least 2⁄3 of the time no matter what the host does (guess a door and never switch). The player never needs to accept worse.
The host has a strategy that makes the player lose at least 1⁄3 of the time, i.e. win at most 2⁄3 (never let the player switch). The host never needs to accept worse.
Therefore, the equilibrium is a 2⁄3 win rate for the player. The player can block this number from going lower and the host can block it from going higher.
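(Spelled out as the standard two-sided minimax bound, in notation that isn’t in the original comment: let σ range over player strategies and τ over host strategies.)

$$\max_{\sigma}\min_{\tau} P_{\text{win}}(\sigma,\tau)\;\ge\;\tfrac{2}{3},\qquad \min_{\tau}\max_{\sigma} P_{\text{win}}(\sigma,\tau)\;\le\;\tfrac{2}{3},$$

and since $\max\min \le \min\max$ always holds, both sides are pinned to exactly $2/3$, which is the equilibrium value.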
I want to love this metaphor but don’t get it at all. Religious freedom isn’t a narrow valley; it’s an enormous Schelling hyperplane. 85% of people are religious, but no majority is Christian or Hindu or Kuvah’magh or Kraẞël or Ŧ̈ř̈ȧ̈ӎ͛ṽ̥ŧ̊ħ or Sisters of the Screaming Nightshroud of Ɀ̈ӊ͢Ṩ͎̈Ⱦ̸Ḥ̛͑. These religions don’t agree on many things, but they all pull for freedom of religion over the crazy *#%! the other religions want.
Suppose there were some gears in physics we weren’t smart enough to understand at all. What would that look like to us?
It would look like phenomena that appear intrinsically random, wouldn’t it? Like imagine there were a simple rule about the spin of electrons that we just. don’t. get. Instead of noticing the simple pattern (“Electrons are up if the number of Planck timesteps since the beginning of the universe is a multiple of 3”), we’d only be able to figure out statistical rules of thumb for our measurements (“we measure electrons as up 1⁄3 of the time”).
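As a toy illustration of how such a rule would look from the inside (made-up setup; the mod-3 rule and the enormous random timestep are just stand-ins for “a pattern we can’t track”):

```python
# A deterministic hidden rule that looks like irreducible randomness to an
# observer who can't resolve the variable it depends on.
import random

def spin_is_up(planck_timestep: int) -> bool:
    return planck_timestep % 3 == 0  # the simple rule we "just don't get"

# From the observer's perspective the timestep at measurement time is effectively
# a huge unknowable integer, so all they can recover is the 1/3 statistic.
measurements = [spin_is_up(random.randrange(10**50)) for _ in range(100_000)]
print(sum(measurements) / len(measurements))  # ≈ 0.333: looks intrinsically random
```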
My intuitions conflict here. On the one hand, I totally expect there to be phenomena in physics we just don’t get. On the other hand, the research programs you might undertake under those conditions (collect phenomena which appear intrinsically random and search for patterns) feel like crackpottery.
Maybe I should put more weight on superdeterminism.
Humans are computationally bounded, Bayes is not. In an ideal Bayesian perspective:
Your prior must include all possible theories a priori. Before you opened your eyes as a baby, you put some probability on being in a universe with Quantum Field Theory with gauge symmetry, and updated from there.
You update with unbounded computation. There’s no such thing as proofs, since all proofs are tautological.
Humans are computationally bounded and can’t think this way.
(riffing)
“Ideas” find paradigms for modeling the universe that may be profitable to track under limited computation. Maybe you could understand fluid behavior better if you kept track of temperature, or understand biology better if you kept track of vital force. With a bayesian-lite perspective, they kinda give you a prior and places to look where your beliefs are “malleable”.
“Proofs” (and evidence) are the justifications for answers. With a bayesian-lite perspective, they kinda give you conditional probabilities.
“Answers” are useful because they can become precomputed, reified, cached beliefs with high credence inertia that you can treat as approximately atomic. In a tabletop physics experiment, you can ignore how your apparatus will gravitationally move the earth (and the details of the composition of the earth). Similarly, you can ignore how the tabletop physics experiment will move your belief about the conservation of energy (and the details of why your credences about the conservation of energy are what they are).
Statements made to the media pass through an extremely lossy compression channel, then are coarse-grained, and then turned into speech acts.
That lossy channel has maybe one bit of capacity on the EA thing. You can turn on a bit that says “your opinions about AI risk should cluster with your opinions about Effective Altruists”, or not. You don’t get more nuance than that.[1]
If you have to choose between outputting the more informative speech act[2] and saying something literally true, it’s more cooperative to get the output speech act correct.
(This is different from the Supreme Court case, where I would agree with you)
I’m not sure you could make the other side of the channel say “Dan Hendrycks is EA adjacent but that’s not particularly necessary for his argument” even if you spent your whole bandwidth budget trying to explain that one message.
See Grice’s Maxims
If someone wants to distance themselves from a group, I don’t think you should make a fuss about it. Guilt by association is the rule in PR and that’s terrible. If someone doesn’t want to be publicly coupled, don’t couple them.
I think the classic answer to the “Ozma Problem” (how to communicate to far-away aliens what earthlings mean by right and left) is the Wu experiment. Electromagnetism and the strong nuclear force aren’t handed, but the weak nuclear force is handed. Left-handed electrons participate in weak nuclear force interactions but right-handed electrons are invisible to weak interactions[1].
(amateur, others can correct me)
Like right-handed electrons, right-handed neutrinos are invisible to weak interactions. Unlike electrons, neutrinos are also invisible to the other forces[2]. So the Standard Model basically predicts there should be invisible particles whizzing around everywhere that we have no way to detect or confirm exist at all.
Besides gravity
Can you symmetrically put the atoms into that entangled state? You both agree on the charge of electrons (you aren’t antimatter annihilating), so you can get a pair of atoms into |↑,↑⟩, but can you get the entangled pair to point in opposite directions along the plane of the mirror?
Edit: Wait, I did that wrong, didn’t I? You don’t make a spin-up atom by putting it next to a particle accelerator sending electrons up. You make a spin-up atom by putting it next to electrons you accelerate in circles, moving the electrons in the direction your fingers point when a (real) right thumb is pointing up. So one of you will make a spin-up atom and the other will make a spin-down atom.
No, that’s a very different problem. The matrix overlords are Laplace’s demon, with god-like omniscience about the present and past. The matrix overlords know the position and momentum of every molecule in my cup of tea. They can look up the microstate of any time in the past, for free.
The future AI is not Laplace’s demon. The AI is informationally bounded. It knows the temperature of my tea, but not the position and momentum of every molecule. Any uncertainties it has about the state of my tea will increase exponentially when trying to predict into the future or retrodict into the past. Figuring out which water molecules in my tea came from the kettle and which came from the milk is very hard, harder than figuring out which key encrypted a cypher-text.
Conjunction Fallacy. Adding detail makes ideas feel more realistic, and makes them strictly less likely to be true.
Virtues for communication and thought can be diametrically opposed.