disclaimer
This might be the least disclamatory disclaimer I’ve ever read.
I’d even call it a claimer.
Anthropics seems very important here; most laws of physics probably don’t form people, especially people who make cameras, then AGI, and then give it only a few images which don’t look very optimized, or like they’re of a heavily optimized world.
A limit on speed can be deduced: if intelligence sufficient to make AGI is possible, coordination has probably already taken over the universe and remade it to something’s liking, unless expansion is slow for some reason. The AI has probably been designed quite inefficiently; not what you’d expect from intelligent design.
I could see how an AI might deduce that “objects” exist, and that they exist in three dimensions, from two images in which the apple has slightly rotated.
I’m pretty sure this one’s deducible from one image; the apple has lots of shadows and refraction. The indentations have lighting the other way.
It could find that the light source is very far above and a few degrees in width, and therefore very large, along with some lesser light from the upper 180°. The apple is falling; universal laws are very common; the sun is falling. The water on the apple shows refraction; this explains the sky (this probably all takes place in a fluid; air resistance, wind).
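A minimal sketch of the light-source inference above, with invented measurements: the blur width of a shadow edge, divided by the occluding edge’s distance from the surface, gives the source’s angular size.

```python
import math

# Hypothetical measurements, invented for illustration.
penumbra_width = 0.0009  # blur width of a grass blade's shadow edge (m)
edge_distance = 0.10     # distance from the occluding blade to the ground (m)

# Small-angle approximation: angular size = penumbra width / distance.
angular_size = penumbra_width / edge_distance  # radians
print(math.degrees(angular_size))              # ~0.5 degrees: sun-sized
```

A source that’s very far away but still a visible fraction of a degree wide has to be very large.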
The apple is falling, and the grass seems affected by gravity too; why isn’t the grass falling the same way? It is.
The grass is pointing up, but all level with other grass; probably the upper part of the ground is affected by gravity, so it flattens.
The camera is aligned almost exactly with the direction the apple is falling.
In three frames, maybe it could see blades of grass bounce off each other? It could at least see elasticity. I don’t know how much of the laws of motion it could find from this, but probably not none. The angular movement of the apple also seems important.
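A minimal sketch of how much kinematics three frames pin down, with invented positions and frame rate: three equally spaced samples determine a velocity and a constant acceleration.

```python
dt = 1.0 / 30.0            # assumed interval between frames (s)
y = [2.000, 1.995, 1.979]  # invented apple heights in the three frames (m)

v_mid = (y[2] - y[0]) / (2 * dt)      # central-difference velocity at frame 1
a = (y[2] - 2 * y[1] + y[0]) / dt**2  # second difference: constant acceleration

print(f"velocity ~ {v_mid:.3f} m/s, acceleration ~ {a:.1f} m/s^2")
```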
Light is very fast; by the laws of motion, light is very light (not massive), because it goes fast but doesn’t visibly move things.
A superintelligence could probably get farther than this; very large, bright, far up object is probably evidence of attraction as well.
Simulation doesn’t make this much harder; the models for apple and grass came from somewhere. Occam’s Razor is weakened, not strengthened, because simulators have strong computational constraints, probably not many orders of magnitude beyond what the AI was given to think with.
Thank you.
To get more comfortable with this formalism, we will translate three important voting criteria.
You translated four criteria.
That was over two years ago.
taught
Should be ‘taut’.
Adult IQ scores do too, I think.
You might need a very strong superintelligence, or one with a lot of time. But I think the correct hypothesis has extremely strong evidence compared to others, and isn’t that complicated. If it has enough thought to locate the hypothesis, it has enough to find that it’s better than almost any other.
Newtonian mechanics, or something a bit closer to the truth, would rise very near the top of the list. It’s possible even the most likely possibilities wouldn’t be given much probability, but it would at least be somewhat modal. [Is there a continuous analogue of the mode? I don’t know what softmax is.]
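For the bracketed question, a sketch with invented scores: a continuous distribution’s analogue of the mode is the argmax of its density, and softmax is a smooth approximation of argmax whose temperature controls how sharply it concentrates on the maximum.

```python
import numpy as np

def softmax(x, temperature=1.0):
    """Exponentiate and normalise; as temperature -> 0,
    the weight concentrates on the maximum (the mode)."""
    z = np.asarray(x, dtype=float) / temperature
    z -= z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

scores = [1.0, 2.0, 3.0]       # invented
print(softmax(scores))         # smooth weighting: [0.09 0.24 0.67]
print(softmax(scores, 0.05))   # nearly one-hot at the maximum
```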
Thank you for the question. I understand better, now.
Physics also tends toward very uninteresting things. This is for similar reasons, right?
we all know people who insist that they are ugly and stupid and unlikeable even though they don’t seem any worse off than anyone else.
Newcomb’s Problem contains a capital N, and I feel sadness.
I’ve never heard the US civil war described this way.
Thank you.
Thank you. I was probably wrong.
In most of your examples, there’s no common knowledge, and information is only transmitted one way. This does not allow for Aumann agreement: one side makes one update, then stops.
If someone tells me their assigned probability for something, that moves my probability very close to theirs, if I think they’ve seen nearly strictly better evidence about it than I have. I think this explains most of your examples, without referencing Aumann.
I think I don’t understand what you mean. What’s Aumann agreement? How’s it a useful concept?
I thought the surprising thing about Aumann agreement was that ideal agents with shared priors will come to agree even if they can’t intentionally exchange information, and can see only the other’s assigned probability. [I checked Wikipedia; with common knowledge of each other’s probabilistic belief about something, ideal agents with shared priors have the same belief. There’s something about dialogues, but Aumann didn’t prove that. I was wrong.]
Your post seems mostly about exchange of information. It doesn’t matter in which order you find your evidence, so ideal agents with shared priors who can exchange everything they’ve seen will always come to agree.
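A toy sketch of that order-independence, assuming the two observations are conditionally independent given the hypothesis (the numbers are invented): in odds form, each piece of evidence multiplies in as a likelihood ratio, and multiplication commutes.

```python
from math import prod

prior_odds = 1.0                # shared prior: 50/50 on hypothesis H
likelihood_ratios = [3.0, 0.5]  # invented strengths of two observations

# posterior odds = prior odds * product of likelihood ratios,
# so the order in which the evidence arrives cannot matter.
for order in (likelihood_ratios, likelihood_ratios[::-1]):
    odds = prior_odds * prod(order)
    print(order, "-> P(H) =", odds / (1 + odds))
```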
I don’t think this requires understanding Aumann’s theorem.
Is this wrong, or otherwise unimportant?
Thank you.
Scott Alexander wrote some rationalish music a decade ago.
youtube.com/qraikoth
CronoDAS has uploaded a song, though it’s not particularly rationalist.
youtube.com/CronoDAS
Here, I’d plot the difference from gravitation at sea level.
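A minimal sketch of that plot, assuming a simple inverse-square falloff and an arbitrary altitude range:

```python
import numpy as np
import matplotlib.pyplot as plt

g0 = 9.80665  # standard gravity at sea level (m/s^2)
R = 6.371e6   # mean Earth radius (m)

h = np.linspace(0, 100_000, 500)      # altitude above sea level (m)
delta_g = g0 * (R / (R + h))**2 - g0  # difference from the sea-level value

plt.plot(h / 1000, delta_g)
plt.xlabel("altitude above sea level (km)")
plt.ylabel("g(h) - g(0)  (m/s^2)")
plt.show()
```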
=
Should be ‘≠’.
Scott Alexander wrote some music a decade ago.
youtube.com/qraikoth
“Mary’s Room” and “Somewhere Prior To The Rainbow” are most likely to make you cry again.
“Mathematical Pirate Shanty”, if you can cry laughing.