Cognition → Convergence → Corroboration
Now they’ve written the post on this.
https://www.lesswrong.com/posts/fEvCxNte6FKSRNFvN/3c-s-a-recipe-for-mathing-concepts
Hyperidealized art wouldn’t be banned. There’d be much less of it, but not none.
It’d also be produced by much better artists.
I think you’d probably end up consuming hyperidealized art, too.
You’d notice that, among what you consumed, you preferred the more idealized art; then you’d talk to a psychologist or something, and they’d tell you you’d probably be fine with the cognitohazardous stuff.
Why has my comment been given so much karma?
To get more comfortable with this formalism, we will translate three important voting criteria.
You translated four criteria.
Scott Alexander wrote some rationalish music a decade ago.
youtube.com/qraikoth
CronoDAS has uploaded a song, though it’s not especially rationalist.
youtube.com/CronoDAS
That was over two years ago.
Scott Alexander wrote some music a decade ago.
“Mary’s Room” and “Somewhere Prior To The Rainbow” are most likely to make you cry again.
“Mathematical Pirate Shanty”, if you can cry laughing.
Here, I’d plot the difference from gravitational acceleration at sea level.
I’ve never heard the US civil war described this way.
Thank you.
=
Should be ‘≠’.
taught
Should be ‘taut’.
I’ve learned the maths before.
I think maybe I have no idea what kinetic energy is.
kinetic energy scales with the square of the speed
Why is this?
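For reference, the square falls out of the standard work-energy derivation (not specific to this thread): the work needed to accelerate a mass $m$ from rest to speed $v$ is

```latex
W = \int F \,\mathrm{d}x
  = \int m \frac{\mathrm{d}v}{\mathrm{d}t}\, v \,\mathrm{d}t
  = m \int_{0}^{v} v' \,\mathrm{d}v'
  = \tfrac{1}{2} m v^{2},
```

so each increment of speed costs work proportional to the speed already attained, which is where the $v^2$ comes from.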
Ideally you’d try to have a separate bakery with reversed gender-roles.
I probably can’t go to the October meetup; something else happens to fall on the same day. How do I un-RSVP on Meetup?
Unrelatedly, I still think I have a good chance of making it next time.
Thank you. I was probably wrong.
In most of the examples, there’s no common knowledge, and information is transmitted only one way. That doesn’t allow for Aumann agreement: one side makes one update, then stops.
If someone tells me their assigned probability for something, my probability moves very close to theirs, provided I think they’ve seen nearly strictly better evidence about it than I have. I think this explains most of your examples without referencing Aumann.
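That one-way update can be sketched concretely (a toy setup of my own, not from the post): if the listener believes the speaker’s evidence strictly contains their own, hearing the speaker’s posterior moves the listener all the way to it in a single step.

```python
# Toy sketch (my own setup, not from the post): two agents share a uniform
# Beta(1, 1) prior over a coin's bias. The speaker has seen a strict superset
# of the listener's flips, so when the speaker announces their posterior, the
# listener's best move is to adopt it outright: one update, then nothing more
# to say, and no Aumann-style back-and-forth.

def posterior_mean(heads, tails, a=1, b=1):
    """Posterior mean of the coin's bias under a Beta(a, b) prior."""
    return (a + heads) / (a + heads + b + tails)

flips = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]   # 1 = heads, 0 = tails

speaker_flips = flips        # the speaker saw all ten flips
listener_flips = flips[:3]   # the listener saw only the first three

speaker_p = posterior_mean(sum(speaker_flips),
                           len(speaker_flips) - sum(speaker_flips))
listener_p = posterior_mean(sum(listener_flips),
                            len(listener_flips) - sum(listener_flips))

# One-way transmission: the listener hears speaker_p and, believing the
# speaker's evidence strictly contains their own, jumps straight to it.
listener_p_after = speaker_p
print(listener_p, "->", listener_p_after)   # 0.6 -> 0.666...
```

Since the listener’s flips add nothing the speaker hasn’t seen, the exchange ends after that single update, with no dialogue needed.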
I think I don’t understand what you mean. What’s Aumann agreement? How’s it a useful concept?
I thought the surprising thing about Aumann agreement was that ideal agents with shared priors will come to agree even if they can’t intentionally exchange information, and can see only the other’s assigned probability. [I checked Wikipedia; with common knowledge of each other’s probabilistic belief about something, ideal agents with shared priors have the same belief. There’s something about dialogues, but Aumann didn’t prove that. I was wrong.]
Your post seems mostly about exchange of information. It doesn’t matter in which order you find your evidence, so ideal agents with shared priors who can exchange everything they’ve seen will always come to agree.
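A quick sanity check of that order-independence claim (my own sketch, not from the post): for a Bayesian, the posterior over a coin’s bias depends only on which flips were seen, not the order they arrived in.

```python
from itertools import permutations

# Toy check (my own sketch, not from the post): update a discrete prior over
# a coin's bias on the same flips in every possible order; the posterior is
# identical each time, so agents who can pool all their evidence will agree
# regardless of who learned what first.

HYPOTHESES = [0.25, 0.5, 0.75]   # candidate biases for the coin

def update(belief, flip):
    """One Bayes update: weight each hypothesis by its likelihood, renormalize."""
    unnorm = [p * (h if flip else 1 - h) for p, h in zip(belief, HYPOTHESES)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

def posterior(flips):
    belief = [1 / len(HYPOTHESES)] * len(HYPOTHESES)   # uniform prior
    for f in flips:
        belief = update(belief, f)
    return belief

flips = (1, 1, 0, 1)
posteriors = {tuple(round(x, 10) for x in posterior(order))
              for order in permutations(flips)}
print(len(posteriors))   # 1: every ordering yields the same posterior
```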
I don’t think this requires understanding Aumann’s theorem.
Is this wrong, or otherwise unimportant?
Thank you for responding.
It’s possible for your team to lose five points, thereby giving the other team five points.
If the other team loses five points, then you gain five points.
Why is it not possible for the other team to lose five points without anything else happening? Where does the asymmetry come from?
It’s
−25 −20 −5 0 20 25.
Why isn’t it
−25 −20 −5 0 5 20 25?
(−25) you lose points and the other team gains points
(−20) the other team gains points
(−5) you lose points and the other team gets nothing
(0) nobody gets anything
(20) you gain points
(25) the other team loses points and you gain points
Why no (+5)?
Thank you.