Nature uses only the longest thread to weave her patterns, so that each small piece of her fabric reveals the organization of the entire tapestry.
Feynman
I’ll try to be there, I think.
I’m bad at this.
Oh well here goes.
Hi there! I’m Erik. I’m 20 years old.
I am a pure math major at the University of Waterloo. I am halfway through my third year here.
That being said, I am largely an autodidact, which I gather is pretty common around these parts. Up until age 13 or so I was primarily interested in physics. In the course of trying to learn physics, I inevitably had to learn some math. So I did, and I never looked back. I can actually pinpoint the exact moment, all those years ago, when I became sure that I would spend the rest of my life doing math. But I won’t bore you with such an excessively personal story.
My mathematical interests are fairly broad. My single greatest fear is that I will probably have to specialize at some point, to truly focus on one subject area. To think that I could ever actively decide not to want to learn all the things! I plan to delay this for as long as possible.
I tend to lean towards what I consider to be a pragmatic form of ultrafinitism. Other mathematicians tend to punch me when I talk about that though. A favourite pet problem of mine is to try to work out how to recover things like real analysis without having to talk about infinity. That’s a pretty tame example, but try doing this for all the math you know and it gets pretty interesting!
I also have a few interests outside of math and physics.
I like anime; a few of my recent favourites include Redline, Mahou Shoujo Madoka Magica, and Nichijou, all from this past year.
I like video games. My usual approach here is to play a few games very deeply. My all-time favourite game is Super Smash Bros. Melee, which still has an amazing competitive scene today. I am also a big fan of, and occasional participant in, TASing. I used to speedrun Super Metroid a lot, and I spent a while working on a TAS of it back in ’07. That proved to be too tedious for me though, so I mostly just watch the runs these days.
I listen to a pretty broad range of music as well. I’ve tried learning to play both piano and guitar, but never got past the “embarrassingly bad” stage.
In terms of rationalist origin story… Uhh, not much of interest to say here, really. My parents aren’t religious, so I never had that influence. And I’ve been surrounded by and versed in physics and the sciences more generally for literally as long as I can remember. I have an old habit of periodically taking a piece of knowledge that I catch myself taking for granted and forcing myself to work out exactly why I know that thing. An easy example: How do you know how far away the sun is? Or a little trickier: How do you know that everything is made out of atoms, and how do you know how small they are? I think I formed this habit because it saved me from ever having to remember very much; I figured out pretty early on that keeping my belief web as connected as possible would save me a lot of effort. I think this is also related to my fear of specialization.
I had a brief period when I was very vocal about atheism. I got tired of that pretty quickly though. For the most part the community just seemed pretty boring: Yep. We still don’t believe in God. GO TEAM.
LW stands out as something special though. It’s not just a lot of people who also don’t believe in silly nonsense. It’s not just about bringing everyone up to some baseline of sanity. It’s about striving for an as-yet-unimagined level of rationality. That’s just awesome and I want to be a part of it.
The paradoxical decomposition of F2 only gives a decomposition for a dense subset of the sphere, because you have to throw away the (countably many) fixed points of all the rotations involved to make the correspondence between F2 and the orbits of various points. To go the rest of the way, you need to use something other than rotations about the origin, i.e. something more than just the action of F2. But it’s certainly fair to say that Banach-Tarski works because of the structure of F2.
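For concreteness, here is a minimal sketch of the decomposition of F2 that I mean, writing $S(x)$ for the set of reduced words in $F_2$ beginning with the letter $x$:

$$F_2 = \{e\} \sqcup S(a) \sqcup S(a^{-1}) \sqcup S(b) \sqcup S(b^{-1}), \qquad F_2 = S(a) \sqcup a\,S(a^{-1}) = S(b) \sqcup b\,S(b^{-1}),$$

so four pieces of $F_2$ (plus the identity) reassemble, using only left-translation by group elements, into two full copies of $F_2$; transporting this along orbits is what drives the sphere construction.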
The way I always did it was to use rotations about some fixed line that doesn’t pass through the origin.
Ahh yes, you’re right.
I am signed up.
At best, this will produce cleverly efficient solutions to your sample problems.
I’m probably going to watch this this weekend. Looks pretty fun.
I am putting a printout of this chart on my desk until I have it internalized. No more fumbling around trying to do numerical updates in conversation in real time.
Because it’s so obvious that it doesn’t require further examination. (Of course this is wrong and it does, but he hasn’t figured that out yet.)
I’m not saying the apparent object-level claim (i.e. that intelligence implies benevolence) is wrong. Just that it does in fact require further examination, whereas here it looks like an invisible background assumption.
Did my phrasing not make it clear that this is what I meant, or did you interpret me as I intended and still think it sounds condescending?
No need to apologize. It’s clear in hindsight that I made a poor choice of words.
The standard answer is that there is such a strong “first mover advantage” for self-improving AIs that it only matters which comes first: If an FAI comes first, it would be enough to stop the creation of uFAIs (and also vice versa). This is addressed at some length in Eliezer’s paper Artificial Intelligence as a Positive and Negative Factor in Global Risk.
I don’t find this answer totally satisfying. It seems like an awfully detailed prediction to make in the absence of a technical theory of AGI.
Ahh, that makes a lot more sense.
Two suggestions, sort of on opposite ends of the spectrum.
First: Practice doing “contest style” math problems. This builds your general math skills, gets you used to thinking creatively, and helps you gain confidence in exploring your good ideas to their limit, while also encouraging you to quickly relinquish lousy approaches.
Second: Exercise. A lot. Whether or not you’re already in good shape, you will almost inevitably find it hard to keep a healthy exercise routine when starting in college. So start building some good habits right away.
If I am allowed to use only exponentially more computing power than you (a far cry from a halting oracle), then I can produce outputs that you cannot distinguish from those of a halting oracle.
Consider the following program: Take some program P as input, and search over all proofs of length at most N, in some formal system that can describe the behaviour of arbitrary programs (e.g. first-order PA), for a proof that P either does or does not halt. If you find a proof one way or the other, return that answer. Otherwise, return HALT.
This will return the correct answer for all programs that halt in less than (some constant multiple of) N steps, since actually running a program until it halts constitutes a proof of halting. But it also gives the correct answer in a lot of other cases: for example, there is a very short proof that “while true: print 1” never halts.
Now, if I am allowed exponentially more computing power than you, then I can run this program with N equal to the number of computations that you are allowed to expend. In particular, for any program you query me on, I will either answer correctly or give a false answer that you won’t be able to call me out on.
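Here is a minimal sketch of what I mean in Python, assuming a hypothetical helper find_proof that stands in for the brute-force proof search (the stub is where all the exponential work would live):

```python
def find_proof(statement, max_length):
    """Hypothetical helper: search every PA-proof of length at most
    max_length for a proof of `statement`. A real implementation would
    enumerate candidate proofs and run them through a PA proof checker;
    this stub exists only to make the sketch executable."""
    return None  # stand-in for the exponentially expensive search


def fake_halting_oracle(P, N):
    """Answer halting queries about the program source P, with the
    proof-length budget N chosen to dwarf the questioner's compute budget."""
    if find_proof(f"program {P!r} halts", N):
        return "HALT"
    if find_proof(f"program {P!r} does not halt", N):
        return "NOT-HALT"
    # No proof either way within the budget, so bluff: any program the
    # questioner can actually analyze within their (much smaller) budget
    # would have yielded a short proof above.
    return "HALT"
```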
The Kolmogorov complexity of an uncomputable sequence is infinite, so Solomonoff induction assigns it a probability of zero, but there’s always a computable number with less than epsilon error, so would this ever actually matter?
Can you re-phrase this please? I don’t understand what you are asking.
Did you mean to write “for all programs that halt in less than (some constant multiple of) N steps”? Because what you wrote doesn’t make sense.
Yes. Edited.
What if I give you a program that enumerates all proofs under PA and halts if it ever finds a proof of a contradiction? There is no proof under PA that this program doesn’t halt (such a proof would amount to PA proving its own consistency, which Gödel’s second incompleteness theorem rules out), so your fake oracle will return HALT, and then I will have reasonable grounds to believe that your oracle is fake.
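Concretely, a sketch of that program (enumerate_pa_proofs is a hypothetical generator yielding every valid PA-proof, shortest first; a real version needs an actual proof checker):

```python
def enumerate_pa_proofs():
    """Hypothetical: yield every valid PA-proof in order of length.
    Stubbed out here just so the sketch runs."""
    return iter(())


def pa_consistency_checker():
    """Halts if and only if PA is inconsistent."""
    for proof in enumerate_pa_proofs():
        if proof.conclusion == "0 = 1":  # found a proof of a contradiction
            return  # so halt
    # with the real generator, this loop runs forever when PA is consistent
```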
That’s cool. Can you do something similar if I change my program to output NOT-HALT when it doesn’t find a proof?
Okay, I concede. I recognize when I’ve been diagonalized.
Is this ongoing? I also live in Waterloo and would potentially be interested.