Hmmm, am I doing something wrong?
What I see when I click the three dots on a page:
I think I have enough karma, but I can’t figure out where the nomination button is. Could someone share a screenshot?
I had this happen to me as well. Firefox 70 on Ubuntu 18.04.
Also, anecdotally, there have been lots of Indian applicants (and attendees) at ESPR throughout the years. Seems like people there also think rationality is cool (lots of the people I interviewed had read HPMOR, there are LW meetups there, etc. etc.)
Thought as I worked through the exercise:
Is there something I’m missing? It seems like TurnTrout’s already given us all the pieces, so we can say: “Something has high impact to someone if it either affects something they value (the personal side) or affects their ability to do things more broadly (the objective side).”
Something is a big deal if it affects our ability to take future actions? (That seems to be the crux of something being objectively bad.)
Is the point here to unify it into one sort of coherent notion?
Okay, so let’s back up for a second and try to do all of this from scratch...When I think about what “impact” feels like to me, I imagine something big, like the world exploding.
But it doesn’t necessarily have to be a big change. A world where everyone has one less finger doesn’t seem like a big change, but it does seem high impact. Or a world where the button that launches nukes is pressed rather than not pressed. Maybe we need to look further into the future? (Do we need discounting? Maybe if nukes get launched in the far future, it’s not that bad?)
I think it’s important to think relative to the agent in question when reasoning about impact. You also want to look at what changed. Small changes aren’t necessarily low impact, but I think large changes will correspond to high impact.
It seems like “A change has high impact if the agent’s valuation of the after state is very different from their valuation of the current state” is the best I have after 15 minutes...
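That 15-minute definition can be sketched as code. This is just a toy illustration of the idea; the value function and world states here are entirely made up:

```python
def impact(value, state_before, state_after):
    """Impact of a change: the absolute difference between the agent's
    valuation of the world before and after the change."""
    return abs(value(state_after) - value(state_before))

# Hypothetical agent that only values how many fingers people have
value = lambda world: world["fingers_per_person"]

before = {"fingers_per_person": 10}
after = {"fingers_per_person": 9}

print(impact(value, before, after))  # small physical change, nonzero impact
```

One thing this makes obvious is that the measure is agent-relative: swap in a different `value` function and the same state change gets a different impact score.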
Michael Sipser’s Introduction to the Theory of Computation goes over the recursion theorem and Rice’s theorem, IIRC. The proofs are given as well as associated exercises. The textbook walks you through, from DFAs to Turing Machines, so it’s pretty self-contained, if you’re looking at a source other than Computability and Logic to understand them.
One thing here that seems important to note is what each medium does to your attention and what sort of cognitive work it facilitates:
To borrow a few items from your list:
Videogames: literally a Skinner box that gives you reinforcement to keep doing the thing.
Web surfing / news feeds / blogs / movies: makes you a passive consumer of the content.
Direct messaging: requires you to spend time thinking about your response.
Writing software / making videos / drawing comics: puts you in a position to think about the message you want to convey; teaching others requires you to bridge inferential gaps and examine your own models.
Spaced repetition: literally designed to make you remember stuff.
Thanks for trying these out, Ben!
If you’re ever interested in learning more close-up magic, I have lots more thoughts on good resources for learning / strong opinions on what makes a good magic effect. I haven’t written about them for the LW audience, but maybe more of this hybrid stuff will manifest later on.
The toy example you gave seems like it would make for a fun simulation à la Nicky Case’s stuff: you could try it with multiple groups, different types of evidence (supporting either side in varying amounts), and different coordination mechanisms.
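A minimal sketch of what such a simulation loop might look like (in Python rather than JS, and with every specific here, the likelihood ratios, group count, and round count, made up for illustration rather than taken from the post):

```python
import random

# Likelihood ratios: evidence can favor either side by a varying amount
# (ratios below 1 push credence down, above 1 push it up).
EVIDENCE_STRENGTHS = [0.5, 0.8, 1.25, 2.0]

def bayes_update(credence, likelihood_ratio):
    """Update a credence on a piece of evidence with the given likelihood ratio."""
    odds = credence / (1 - credence) * likelihood_ratio
    return odds / (1 + odds)

def simulate(n_groups=3, n_rounds=30, seed=0):
    """Each group starts at 50% credence and updates on its own
    random stream of evidence. Purely a toy model."""
    rng = random.Random(seed)
    credences = [0.5] * n_groups
    for _ in range(n_rounds):
        credences = [bayes_update(c, rng.choice(EVIDENCE_STRENGTHS))
                     for c in credences]
    return credences
```

A coordination mechanism could then be layered on top, e.g. having groups share or average their credences between rounds, and you could compare outcomes across mechanisms.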
I’ll look into something this weekend. If anyone else likes doing JS development, ping me, and we can figure something out!
There’s jQuery UI, which maybe counts?
Ben Pace has a new post up on LessWrong asking about good exercises for rationality / general LW-adjacent stuff. I think this is a good thing to put up a bounty for, and I started thinking about what makes a good exercise. Exercises are good because they help you further develop the material; they give you opportunities to put the relevant skill to use.
There are differing levels of what you can be trying to assess:
Identifying the correct idea from a group of different ones
Summarizing the correct idea
Transferring the idea to someone else
Actually demonstrating whatever skill it is (if it’s something you can do)
Actually using the skill to deduce something else (if it’s a model thing)
I think there’s a good set of stuff to dive into here about the distinction between optimizing for pedagogy versus effectiveness. In the starkest case, you want to teach people using less potent versions of something, at least at first. Think not just training wheels on a bike, but successively more advanced models for physics or arithmetic. There’s a gradual shift happening.
More than that, I wonder if the two angles are largely orthogonal.
Anyway, back to the original idea at hand. When you give people exercises, there’s a sense of broad vs. narrow that seems important, but I’m still teasing it out. In one sense, you can think of tests with multiple-choice vs. open-ended answers. But it’s not like multiple-choice questions have to suck: you could give people very plausible-sounding answers that require a lot of work to determine which one is correct. Similarly, open-ended questions can allow for bullshitting.
It’s not exactly the format, but what sort of work it induces.
At the very least, it’s about pushing for more generative content. But beyond that, it gets into pedagogy questions:
How can you give questions which increase in difficulty?
What does difficulty correspond to? If something is “hard to figure out”, what is that quality referring to?
If you give open-ended questions, how can you assess the answers you get?
How much of this is covered already by the teaching literature?
I recently wrote about three things you can try with cards to see what your internal calibration feels like. They have some question prompts, but the gist of it is something to do, rather than something with a direct answer.
I see! Thanks for the breakdown of where the pain points are when it comes to performance. Really appreciate the openness about where things could have gone better / what’s happening right now!
Oh, wow! I didn’t realize that could have been tripping things up. Thank you for the formatting help!
The code block editor wasn’t very friendly and ate up all of my tabs. I’m working on better formatting, and this’ll probably end up being a post on my own blog later on, which will hopefully also have things like syntax highlighting.
For sure! To be honest, I got a little lost reading your 3-part series here, so I think I’ll revisit it later on.
I’m newer to deep learning, so I think my goals are similar to yours (e.g. writing it up so I have a better understanding of what’s going on), but I’m still hashing out the more introductory stuff.
I’ll definitely link it here after I finish!