Ben Pace has a new post up on LessWrong that’s asking about good exercises for rationality / general LW-adjacent stuff. I think this is a good thing to put up a bounty for, and I started thinking about what makes a good exercise. Exercises are good because they help you further develop the material; they give you opportunities to put the relevant skill to use.
There are differing levels of what you can be trying to assess:
Identifying the correct idea from a group of different ones
Summarizing the correct idea
Transferring the idea to someone else
Actually demonstrating whatever skill it is (if it’s something you can do)
Actually using the skill to deduce something else (if it’s a model thing)
I think there’s a good set of stuff to dive into here about the distinction between optimizing for pedagogy versus effectiveness. In the most stark case, you want to teach people using less potent versions of something, at least at first. Think not just training wheels on a bike, but successively more advanced models for physics or arithmetic. There’s a gradual shift happening.
More than that, I wonder if the two angles are largely orthogonal.
Anyway, back to the original idea at hand. When you give people exercises, there’s a sense of broad vs narrow that seems important, but I’m still teasing it out. In one sense, you can think of tests that do multiple choice vs open-ended answers. But it’s not like multiple-choice questions have to suck. You could give people very plausible-sounding answers which require them to do a lot of work to determine which one is correct. Similarly, open-ended questions could allow for bullshitting.
It’s not exactly the format that matters, but what sort of work it induces.
At the very least, it’s about pushing for more Generative content. But beyond that, it gets into pedagogy questions:
How can you give questions which increase in difficulty?
What does difficulty correspond to? If something is “hard to figure out”, what is that quality referring to?
If you give open-ended questions, how can you assess the answers you get?
How much of this is covered already by the teaching literature?
Short aside on training wheels: balance bikes are a thing I only learned about this past year, as my niece was using one to learn how to ride a bike. The claim is that steering and balance are the hard parts of learning to ride a bike, not pedaling and braking. This seems “Duh!” obvious in retrospect.
So it seems like training wheels try to get rid of having to balance at all, have you focus on things that aren’t super essential, and then force you to make a big scary jump. Balance bikes start you with balance in a situation where that’s all there is to focus on, and once you have that, adding pedals is easy (my niece was able to ride her actual bike on the first shot).
I’m enjoying the irony that training wheels, the literal go-to metaphor for assisted learning phases, are a bad example of it. I wonder what else might be similar.
Even now, I still don’t think I like the LW redesign, mostly for speed and aesthetics reasons. I know work is underway to speed up the site, so I guess that’s in progress. The grey-on-grey text, though, feels like it has way too low contrast; there’s something else aesthetically going on where, because everything is the same shade of light grey, nothing feels like it has “weight”, and the focus isn’t fully on the content either because nothing seems to pop out.
For me, all of the nifty new features like sequences, meetup pages, and shortform feeds feel like they’re missing the point. If the site feels slow and doesn’t seem to have the visual affordances, I don’t feel compelled to participate as fully, regardless of what else I can do on the site.
I’m glad greaterwrong exists because it addresses both of these issues, but I’m curious if these are turn-offs for other people.
So I’m not 100% sure which parts of the site are most problematic from your perspective, but I’m curious if this feels like it moves the needle in the right direction:
(relatedly: what percentage of posts have you read, and correspondingly, on the current site frontpage, what percentage of posts appear as a grayed-out “already read”?)
Yeah! The screenshot you shared helps. I think most of the frontpage stuff ends up greyed out for me because I click on most things.
Which GreaterWrong styling do you use?
Does it feel more like you’d want the LW styling smashed-and-rebuilt-from-scratch, or are there particular incremental changes that would accomplish disproportionate value?
I use the GW styling that mimics the old LW. I think the things that matter most to me are darker borders and higher contrast.
I’m going to spend some of the winter holidays working on Abu-Mostafa et al’s Learning From Data’s problem set. I think this should be fun, and I’ll also look into learning Observable for some interactive notebooks for the coding problems.
A few of us have been using Observable for some Foretold stuff. I think it’s pretty good; almost definitely the best of that type of thing (a JS notebook), but be prepared for some possible bugs. If you have issues, feel free to reach out. Also, the founders sometimes give feedback on things.
https://observablehq.com/@oagr https://observablehq.com/@jjj/
The module debugger has been essential for me. https://observablehq.com/@tmcw/module-require-debugger
Thanks for the info, Ozzie!
I checked out Observable some more. I think it might actually be a little heavier than what I want. I’m unsure if I’ll do the coding exercises beforehand (and just post the results + code), or if I’ll go through the work of setting up an interactive notebook so readers can follow along.
I looked into self-hosting it, because it seems the default option is creating a notebook hosted on their site. My understanding is that there’s a way to embed notebooks into my own site (or that the runtime environment is open-sourced?)
All the notebooks we made were just hosted by them; we didn’t need to self-host.
I believe you can embed notebooks on your own site; I know there’s a way to do so by downloading the code, though I’m not sure about an iframe-style setting. I imagine they’ve thought about such situations a lot; it seems important for them.
Finally finished up polishing the old posts in my series on instrumental rationality. I didn’t cross-post it to LW because much of the material is cannibalized, but the link is here: https://mlu.red/. The posts are meant to be read sequentially, but I haven’t added “next post” functionality yet.
Is the top one meant to be read first or last?
Fading Novelty is the first post, so it’s supposed to be read from top to bottom.
Experience as Compounding:
Sometimes I ask myself: “A bunch of cool stuff seems to be happening in the present. So why can’t I move faster and let these things in? Why do I feel stuck by past things?”
Well, experience compounds. One reason childhood events can be so influential isn’t just that they happened when you were at a formative time and developing your models. In addition, the fact that you pick them up early means they’ve had the privilege of being part of your thought processes for longer. They’re more well-worn tools.
Then, there’s also the default answer that each additional year of your life is, relative to the number of years you’ve lived, a smaller fraction. EX: From year 6 to 7, you’ve gained an extra ~15% of your total lifespan in new experiences, whereas from 26 to 27, you’ve gained closer to 4%.
But I’d like every year to be weighted more equally. I feel like cool stuff is passing me by right now, and I’m just slow on the uptake. I’m not taking it in!
Yes, you can get set in your older ways of thinking, and you will have seen more with each successive year. But experientially speaking I’d like to get my brain to also pay more attention to the recent stuff.
I guess one hacky way to do this would be to spend more time ruminating on the present (which is also harder because if you’ve lived for 30 years, then by the same proportionality argument, there’s just less stuff to think about if you restrict yourself to years 29-30).
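The proportionality argument above is easy to make concrete. A minimal sketch (the ages are just illustrative examples):

```python
def recent_year_fraction(age_years: int) -> float:
    # The most recent year of life, as a fraction of all years lived so far.
    return 1.0 / age_years

# Each additional year is a shrinking slice of total experience.
for age in (7, 27, 30):
    print(f"age {age}: the last year is {recent_year_fraction(age):.0%} of lived experience")
```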
I’m confused because there is also:
Experience as a Sliding Window:
There’s some sort of cutoff point where I might be able to recall things, but it no longer feels “recent” or directly connected to my identity.
The feeling of recency is quite interesting to me because it seems to imply that important things are going to fade over time. And if you want to preserve certain parts of your identity, there’s some sort of “upkeep” you’ll need to pay, i.e. having more of those sorts of experiences consistently so they stay in recent memory.
Anyway, that’s if you equate identity with memory, and that’s definitely an oversimplification. But, whatever.
As new things filter in, older things drop out. I’m unsure how to square this with the theory of compounding experience. Presumably if something has effects, then even after it falls out of the window, the things it influenced can continue to resound, domino-style, but that feels quite contrived. The obvious answer, of course, is that there are several factors at play.
One common theme that I return to, time and time again, is that of addictiveness. More specifically, what makes something habit-forming in a bad way? I’ve previously talked about this in the context of Attractors. Lately, my thing to hate on is mobile games, or the thing that they represent. Which, yes, is a little late to the game. And I don’t even play games on my mobile phone, so it seems a little out of place.
But I digress. The point here is to talk about the Skinner Box. Or, the application of the same concept to human things. Gamification and notification spam both fall into this category. But maybe not games. But maybe some games. Definitely mobile games. The point here is that there’s this category I want to get some clarity on, and it’s about these things which seem habit-forming and suck you in.
So, what’s clearly a Skinner Box? I think that clicker games are totally Skinner Boxes. Also Clash of Clans and Farmville (i.e. everything by Zynga / Zynga-clones). But the line is often hazy; Candy Box was innovative and exciting in certain ways. There was a game a while back about alpacas eating one another that seemed surprisingly deep for an idle game. It’s one thing to put a sophisticated veneer on a game, but it still seems fine to critique the underlying mechanics.
What does make a Skinner Box?
Lack of a challenge
Despite having progression, idle and clicker games don’t really have anything that forces the player to do anything strategic. They just...click things, and they get reinforcement.
Instant gratification
Mobile games often leverage this desire by time-locking content, prompting you to pay in order to get something now. The other thing to pay attention to here is whether the feedback loop is tight.
Incentives to keep going?
Intermittent rewards / reward schedules
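The intermittent-rewards point is the classic operant-conditioning result: variable-ratio schedules are the most compulsive. A toy simulation of the two schedules (the ratio and probability here are made up for illustration):

```python
import random

def rewards(clicks: int, schedule, rng) -> int:
    # Count how many of `clicks` presses pay out under a given schedule.
    return sum(1 for i in range(clicks) if schedule(i, rng))

def fixed_ratio(i, rng, n=5):
    # Pay out on exactly every n-th click: predictable, easy to satiate on.
    return (i + 1) % n == 0

def variable_ratio(i, rng, p=0.2):
    # Pay out each click with probability p: same average rate,
    # but unpredictable -- the schedule slot machines use.
    return rng.random() < p

rng = random.Random(0)
print("fixed:   ", rewards(100, fixed_ratio, rng))
print("variable:", rewards(100, variable_ratio, rng))
```

Both schedules pay out about once per five clicks on average; the difference that matters for habit formation is entirely in the unpredictability.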
What doesn’t make a Skinner Box?
Skill and growth
The more something is like an instrument or a sport, the less it seems like a Skinner Box. Although the many casual LoL players seem to indicate that even something with a high skill cap can still be addictive.
Meaning
The more a game invokes artistic purpose, narrative, or some other agenda, the more forgiving we seem to be about the actual mechanics involved.
Instrumentality
When we’re hungry, we eat and eat and eat. And no one bats an eye. The same thing with sleep. Stuff that’s useful isn’t often seen as dangerous.
Sometimes there is more than one way to play a game. For example, I spent a few weeks playing Farmville. I had a spreadsheet with production options, so I could easily choose the best ones. I wrote an AutoIt script that clicked my fields, so instead of clicking 100 times to harvest my fields, I only clicked a button to start the script and left it running for a minute or two. I focused on those types of production that I could automate using the script. So I believe I enjoyed the game on a higher level than usual.
But it still took a lot of time to run the script regularly. And at some moment I ran out of options: the requirements to reach the next level were increasing exponentially, my production capacities linearly. Somewhere around level 100, even using the best available options, it would take a few days of doing exactly the same thing over and over again to reach level 101, and then even more days of the same thing to reach level 102, and it would only keep getting worse. The game was designed so that it was impossible to get more than 2 XP per 1 mouse click; and even with my script it meant at most 200 XP per running the script. My strategy brought me lots of gold and other in-game currency, but it was impossible to trade any of them for XP in a way that didn’t require at least 1 mouse click per 2 XP. And XP was the only way to get higher level and potentially unlock new items. So at this moment the game became pointless.
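The commenter’s dead end can be put in numbers: with level requirements growing exponentially and XP per click hard-capped, clicks-per-level explode. A sketch with invented constants (the base XP and growth rate are illustrative, not Farmville’s actual values; only the 2-XP-per-click cap comes from the comment):

```python
import math

def clicks_for_level(level: int, base_xp: float = 100.0,
                     growth: float = 1.1, xp_per_click: float = 2.0) -> int:
    # XP required grows exponentially with level; each click
    # yields at most `xp_per_click`, so clicks are the real cost.
    required_xp = base_xp * growth ** level
    return math.ceil(required_xp / xp_per_click)

for level in (10, 50, 100):
    print(f"level {level}: {clicks_for_level(level):,} clicks")
```

Since in-game gold couldn’t be traded for XP, no amount of production strategy changes this curve, which is presumably why the game felt pointless past a certain level.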
What I don’t like about most online games is that they are open-ended. There is no incentive for the game author to ever tell you “YOU WON, GAME OVER”. The only ending is that at some moment you become bored and quit; it can happen sooner or later, but it’s the only way the game can end.
It feels like there’s been a push towards getting people to start creating their own content. Platforms like YouTube + the Internet make it a lot easier for people to start.
Growing an audience, though, seems hard because there’s not often a lot of free attention. Most of the competition between different content is zero-sum. People only have so much free time, so minutes they spend engaging with your stuff are minutes they don’t spend engaging with other people’s stuff.
There’s a cynical viewpoint here which is something like “If you don’t think you’re creating Good Content, don’t broadcast it! We have enough low-quality stuff as it is, out there.”
I think people often want to create, though. It’s one of the default responses people have if you ask them “Say you could live comfortably without needing to work. What would you do then?” (“Well, I’d write. Or I’d learn to play an instrument...”)
Often, though, implementation takes far more time than coming up with the initial idea. There is an asymmetry across many fields where the actual ideation is done by only a small group of people. This then requires maybe 10X as many people to actually put into practice. (EX: the people who design the look/feel of a piece of software at a company vs those who build it.)
Thus, if you want people to join your project (which is of course great because you came up with it), you’ll need to convince other people to go with you. On the flip side, I think there’s a skill worth practicing where you let go of idea ownership. Stuff is going to get done, and you’re going to be doing it; whoever came up with the idea might be less important than whether or not you want the stuff to happen.
But maybe the desire for individual ideation points to something important. A really large number of people seem to want to partake in creative endeavors.
Years ago, I participated in making open-source computer games. My experience suggests that if you want people to join your project, you should show them that you are able to do the project alone, if necessary.
You should make, as soon as possible, a version 0.01 of the game, containing a playable first level with ugly graphics. That demonstrates that given enough time, you would be able to complete the game (you could complete the code, and you could hire someone to do graphics and music). Thus people who decide about joining you have a signal that their contribution in this project will be meaningful: that likely one day their work will be a part of a complete product. (Another way to achieve the same outcome is to already have another project complete, and make that known. But you can’t do this with your first project, obviously.)
For the contributor, it is a sad experience if they spend time and energy on your project, and it never gets completed. People who got burned like this will be looking for red flags. “All talk and no code” is pretty bad, even if you have detailed plans and tons of beautiful pictures or 3D models. (I knew a group of people who spent the first year doing detailed 3D models containing thousands of polygons, without writing a single line of code. The models were truly beautiful. But after the year, when they started writing code, they found out that actually the hardware existing at that moment was unable to draw one such model in real time… and they planned to have hundreds. The project was never finished. You want to find out this kind of bad news as soon as possible.)
I think this generalizes outside of computer games like this: convince people that the project would eventually get completed even without their help, preferably by doing a smaller version of what you want to do. If you want to write a book, write the first chapter. If you want to draw a comic, draw a few characters and then an example page or two of the story. If you want to organize a summer camp, organize a small party...
And I completely agree that sometimes (actually, quite often) the right thing to do is to join someone else’s project. But then you need to examine the red flags.
But maybe the desire for individual ideation points to something important. A really large number of people seem to want to partake in creative endeavors.
Creative work is a signal of many things. Most directly, the skills you are using, but also self-discipline and long-term thinking (that you spent your time learning the skill, instead of e.g. reading social networks in your free time), social skills (if the project requires cooperation and support of other people), and also wealth (the less time and energy you waste doing your daily job, the more you can spend on your hobby). Of course people would be happy to radiate these signals.
But I am afraid it is a zero-sum game. I mean, from the perspective of the creative people competing for the audience. If as a side effect, many cool things are produced, that is a positive externality.
Here’s something that feels like another instance of the deontologist vs. consequentialist abstraction, except that the particulars of the situation are what stick out to me: when I choose between doing something sane and doing something endorsed by an official rule, I’ll more often than I’d like opt to do the endorsed thing, even when it’s obviously worse for me.
Some examples, of varying quality:
Not jaywalking, even when it’s in a neighborhood or otherwise not-crowded place.
Asking for permission to do obvious things instead of just doing them
Focusing on the literal words that someone initially said, rather than their intent, even if they later recant.
Letting harmful policies happen instead of appealing them.
I’m reminded of that study which showed that people following an evacuation robot stayed in a room even when there was a fire, and even when the robot had previously been observed to be faulty. There’s something about rules that overrides appeals to sanity. I’m a little worried that I bias towards this side instead of just doing the thing that works out better.
There are of course benefits to choosing the official option. The biggest one is that if someone questions your judgment later on, you can appeal to the established rules. That gives you a lot of social backing to lean on.
I think there’s also a weird masochistic aspect of craving pity, of wanting to be in a situation that seems bad by nature, so I can absolve myself of responsibility. Something about how this may once have been a way to secure more resources, through a pity play?
Malcolm Ocean gets it. There’s a terrible thing that happens when you try to encapsulate your essay with a title. Somehow, the label takes on a life of its own, and you sometimes forget the content inside the essay.
This happens to my own essays where I think “Oh, huh, this essay is called ‘Learning from Past Experiences’”. Sounds kinda boring.
And in fact it was not boring and it was good.
I’m thinking of maybe transitioning to just numbers + summaries instead.
For example, a format like: Essay 10 [Fading novelty, ways to address it, and a brief digression into typography.]
I’ve been thinking about interpretable models. If we have some system making decisions for us, it seems good if we can ask it “Why did you suggest action X?” and get back something intelligible.
So I read up on what sorts of things other people have come up with. Something that seemed cool was the idea of tree regularization. Decision trees are sort of the standard for interpretable models because they typically make splits along features. You essentially train a regularizer (itself a neural net) that proxies average decision-path length (i.e. the complexity of a decision tree comparable to the actual model you’re training). Then, when you’re done, you can train a new decision tree which mimics the final neural net (the one you trained with the regularizer).
The author pointed out that, in the process of doing so, you can see what features the model thinks are relevant. Sometimes they don’t make sense, but the whole point is that you can at least tell that they don’t make sense (from a human perspective) because the model is less opaque. You know more than just “well, it’s a linear combination of the inputs, followed by some nonlinear transformations, repeated a bunch of times”.
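A minimal sketch of the mimic-tree step using scikit-learn: train an opaque network, then fit a shallow decision tree to the network’s predictions so the splits can be read off. This illustrates only the final distillation step, not the tree-complexity regularizer used during training, and the dataset and hyperparameters are invented for the example:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The opaque model we want to explain.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

# Train a small tree on the *network's* predictions, not the true labels.
mimic = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, net.predict(X))

# The tree's splits show which features the network appears to rely on.
print(export_text(mimic))
print("fidelity to the net:", mimic.score(X, net.predict(X)))
```

The “fidelity” score is the fraction of inputs on which the tree agrees with the network; a high-fidelity shallow tree is exactly the kind of less opaque surrogate the paragraph above describes.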
But if the features don’t seem to make sense, I’d still like to know why they were selected. If the system tells us “I suggested decision X because of factors A, B, and C” and C seems really surprising to us, I’d like to know what value it’s providing to the prediction.
I’m not sure what sort of justification we could expect from the model, though. Something like “Well, there was this regularity that I observed in all of the data you gave me, concerning factor C,” seems like what’s happening behind the scenes. Maybe that’s a sign for us to investigate more in the world, and the responsibility shouldn’t be on the system. But, still, food for thought.
Ben Pace has a new post up on LessWrong that’s asking about good exercises for rationality / general LW-adjacent stuff. I think this is a good thing to put up a bounty for, and I started thinking about what makes a good exercise. Exercises are good because they help you further the develop the material; they give you opportunities to put whatever relevant skill to use.
There are differing levels of what you can be trying to assess:
Identifying the correct idea from a group of different ones
Summarizing the correct idea
Transferring the idea to someone else
Actually demonstrating whatever skill it is (if it’s something you can do)
Actually using the skill to deduce something else (if it’s a model thing)
I think there’s a good set of stuff to dive into here about the distinction between optimizing for pedagogy versus effectiveness. In the most stark case, you want to teach people using less potent versions of something, at least at first. Think not just training wheels on a bike, but successively more advanced models for physics or arithmetic. There’s a gradual shift happening.
More than that, I wonder if the two angles are greatly orthogonal.
Anyway, back to the original idea at hand. When you give people exercises, there’s a sense of broad vs narrow that seems important, but I’m still teasing it out. In one sense, you can think of tests that do multiple choice vs open-ended answers. But it’s not like multiple-choice questions have to suck. You could give people very plausible-sounding answers which require them to do a lot of work to determine which one is correct. Similarly, open-ended questions could allow for bullshitting.
It’s not exactly the format, but what sort of work it induces.
At the very least, it’s about pushing for more Generative content. But beyond that, it gets into pedagogy questions:
How can you give questions which increase in difficulty?
What does difficulty correspond to? If something is “hard to figure out”, what is that quality referring to?
If you give open-ended questions, how can you assess the answers you get?
How much of this is covered already by the teaching literature?
Short aside on training wheels: balance bikes are a thing that I only learned about this past year as my niece was using one to learn how to ride a bike. Their claim is that steering and balance are the hard parts of learning to ride a bike, not pedaling and breaking. This seems “Duh!” obvious in retrospect.
So it seems like it training wheels try to get rid of having to balance at all, and have you focus on things that aren’t super essential, and then force you to make a big scary jump. Balance bikes start you with balance in a situation where that’s all there is to focus on, and once you have that, adding pedals is easy (my niece was able to ride her actual bike on the first shot).
I’m enjoying the irony that training wheels, the literal go to metaphor concerning assisted learning phases, are a bad example of it. I wonder what else might be similar.
Even now, I still don’t think I like the LW redesign, mostly because of speed and aesthetics reasons. I know stuff is in place to speed up the site, so I guess that’s a work in progress. The grey on grey for text, though, feels like it’s way too long contrast; there’s something else aesthetically going on where because everything is the same shade of light grey, nothing feels like it has “weight”, and the focus isn’t fully on the content either because of how it all doesn’t seem to pop out.
For me, all of the nifty new features like sequences, meetup pages, and shortform feeds feel like they’re missing the point. If the site feels slow and doesn’t seem to have the visual affordances, I don’t feel compelled to participate as fully, regardless of what else I can do on the site.
I’m glad greaterwrong exists because it addresses both of these issues, but I’m curious if these are turn-offs for other people.
So I’m not 100% sure which part of the site are most problematic from your perspective, but I’m curious if this feels like it moves the needle in the right direction:
(relatedly: what percentage of posts have you read, and correspondingly, in the current site frontpage, what percentage of posts appear as a grayed-out “already read”)
Yeah! The screenshot you shared helps. I think most of the frontpage stuff ends up greyed out for me because I click on most things.
Which GreaterWrong styling do you use?
Does it feel more like you’d want the LW-styling smashed-and-rebuilt-from-scratch, or are there particular incremental changes that would accomplish disproportionate value?
I use the GW styling that mimics the old LW. I think that certain things which seem to matter a lot to me are darker borders and higher contrast.
I’m going to spend some of the winter holidays working on Abu-Mostafa et al’s Learning From Data’s problem set. I think this should be fun, and I’ll also look into learning Observable for some interactive notebooks for the coding problems.
A few of us have been using Observable for some Foretold stuff. I think it’s pretty good; almost definitely the best of that type of thing (a js notebook), but be prepared for some possible bugs. If you have issues, feel free to reach out. Also, the founders sometimes give feedback on things.
https://observablehq.com/@oagr https://observablehq.com/@jjj/
The module debugger has been essential for me. https://observablehq.com/@tmcw/module-require-debugger
Thanks for the info, Ozzie!
I checked out Observable some more. I think it might actually be a little heavier then what I want. Unsure if I’ll do the coding exercises beforehand (and just post the results + code), or if I’ll go through the work of setting up an interactive notebook so readers can follow along.
I looked into self-hosting it because it seems the default option is creating a notebook hosted on their site. My understanding is that there’s a way to embed notebooks onto my own sites (or the runtime environment is open-sourced?)
All the notebooks we made were just hosted by them; we didn’t need to self-host.
I believe you can embed notebooks onto your own site; I know there’s a way to do so via downloading the code, not sure about an iframe-style setting. I imagine they’ve thought about such situations a lot, it seems important for them.
Finally finished up polishing old posts in my series on instrumental rationality. Didn’t cross-post it to LW because much of the stuff is cannibalized, but the link is here https://mlu.red/. Posts are meant to be read sequentially, but I haven’t added “next post” functionality yet.
Is the top one meant to be read first or last?
Fading Novelty is the first post, so it’s supposed to be read from top to bottom.
Experience As Compounding:
Sometimes I ask myself: “A bunch of cool stuff seems to be happening in the present. So why can’t I move faster and let these things in? Why do I feel stuck by past things?”
Well, experience compounds. One reason childhood events can be so influential isn’t just that they happened when you were at a formative time and developing your models. In addition, the fact that you pick them up early means they’ve had the privilege of being part of your thought processes for longer. They’re more well-worn tools.
Then, there’s also the default answer that each additional year of your life is, relative to the amount of years you’ve lived, a lesser amount. EX: From year 6 to 7, you’ve gained an extra ~15% of your total lifespan in new experiences. Whereas from 26 to 27, you’ve gained closer to 4% of your total lifespan in new experiences.
But, I’d like every year to be measured more equally with one another. I feel like cool stuff is passing by me right now, and I’m just slow on the uptake. I’m not taking it in!
Yes, you can get set in your older ways of thinking, and you will have seen more with each successive year. But experientially speaking I’d like to get my brain to also pay more attention to the recent stuff.
I guess one hacky way to do this would be to spend more time ruminating on the present (which is also harder because if you’ve lived for 30 years, then by the same proportionality argument, there’s just less stuff to think about if you restrict yourself to years 29-30).
I’m confused because there is also:
Experience as a Sliding Window:
There’s some sort of cutoff point where I might be able to recall things, but it no longer feels “recent” or directly connected to my identity.
The feeling of recency is quite interesting to me because it seems to imply that important things are going to fade over time. And if you want to preserve certain parts of your identity, there’s some sort of “upkeep” you’ll need to pay, i.e. having more of those sort of experiences consistently so they stay in recent memory.
Anyway, that’s if you equate identity with memory, and that’s definitely an oversimplification. But, whatever.
As new things filter in, older things drop out. I’m unsure how to square this with the theory of compounding experience. Presumably if something has effects, even if it falls out of the window, then things it influenced can continue to resound, ala domino effect, but that feels quite contrived. The obvious answer, of course, is that there are several factors at play.
One common theme that I return to, time and time again, is that of addictiveness. More specifically, what makes something habit-forming in a bad way? I’ve previously talked about this in the context of Attractors. Lately, my thing to hate on is mobile games, or the thing that they represent. Which, yes, is a little late to the game. And I don’t even play games on my mobile phone, so it seems a little out of place.
But I digress. The point here is to talk about the Skinner Box. Or, the application of the same concept to human things. Gamification and notification spam both fall into this category. But maybe not games. But maybe some games. Definitely mobile games. The point here is that there’s this category I want to get some clarity on, and it’s about these things which seem habit-forming and suck you in.
So, what’s clearly a Skinner Box? I think that clicker games are totally Skinner Boxes. Also Clash of Clans, Farmville (i.e. everything Zynga / Zynga-clones). But this line is often hazy; Candy Box was innovative and exciting in certain ways. There was a game a while back about alpacas eating one another that seemed surprisingly deep for an idle game. It’s one thing to put on a sophisticated veneer on a game, but it still seems fine to critique the underlying mechanics.
What does make a Skinner Box?
Lack of a challenge
Despite having progression, idle and clicker games don’t really have anything that forces the player to do anything strategic. They just...click things, and they get reinforcement.
Instant gratification
Mobile games often leverage this desire by time-locking content, prompting you to pay in order to get something now. The other thing to pay attention to here is whether the feedback loop is tight.
Incentives to keep going?
Intermittent rewards / reward schedules
What doesn’t make a Skinner Box?
Skill and growth
The more something is like an instrument or a sport, the less it seems like a Skinner Box. Although the many casual LoL players seem to indicate that even something which has a high skill cap can still be addictive.
Meaning
The more something invokes artistic purpose, narrative, or some other agenda, the more forgiving we seem to be about the actual mechanics involved.
Instrumentality
When we’re hungry, we eat and eat and eat. And no one bats an eye. The same thing with sleep. Stuff that’s useful often isn’t seen as dangerous.
Sometimes there is more than one way to play a game. For example, I spent a few weeks playing Farmville. I had a spreadsheet with production options, so I could easily choose the best ones. I wrote an AutoIt script that clicked my fields, so instead of clicking 100 times to harvest my fields, I only clicked a button to start the script and left it running for a minute or two. I focused on those types of production that I could automate using the script. So I believe I enjoyed the game on a higher level than usual.
But it still took a lot of time to run the script regularly. And at some point I ran out of options: the requirements to reach the next level were increasing exponentially, while my production capacity increased linearly. Somewhere around level 100, even using the best available options, it would take a few days of doing exactly the same thing over and over again to reach level 101, and then even more days of the same thing to reach level 102, and it would only keep getting worse. The game was designed so that it was impossible to get more than 2 XP per 1 mouse click; and even with my script that meant at most 200 XP per run of the script. My strategy brought me lots of gold and other in-game currency, but it was impossible to trade any of them for XP in a way that didn’t require at least 1 mouse click per 2 XP. And XP was the only way to reach a higher level and potentially unlock new items. So at that point the game became pointless.
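The arithmetic here can be made concrete with a quick sketch. Only the 2-XP-per-click cap comes from the anecdote above; the starting XP cost and the growth factor are made-up illustrative numbers:

```python
# Illustrative numbers only: the 2-XP-per-click cap is from the anecdote;
# the starting cost and the growth factor are hypothetical.
XP_PER_CLICK = 2
GROWTH = 1.5            # hypothetical: each level costs 1.5x the previous
xp_needed = 100_000     # hypothetical XP cost to go from level 100 to 101

for level in range(100, 104):
    clicks = xp_needed // XP_PER_CLICK
    print(f"level {level} -> {level + 1}: at least {clicks:,} clicks")
    xp_needed = int(xp_needed * GROWTH)
```

Because the per-click payoff is capped while the cost curve is exponential, the click count per level grows without bound, script or no script.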
What I don’t like about most online games is that they are open-ended. There is no incentive for the game author to ever tell you “YOU WON, GAME OVER”. The only ending is that at some moment you become bored and quit; it can happen sooner or later, but it’s the only way the game can end.
It feels like there’s been a push towards getting people to start creating their own content. Platforms like YouTube + the Internet make it a lot easier for people to start.
Growing an audience, though, seems hard because there’s not often a lot of free attention. Most of the competition is zero-sum between different content. People only have so much free time, so minutes they spend engaging with your stuff are minutes they don’t spend engaging with other people’s stuff.
There’s a cynical viewpoint here which is something like “If you don’t think you’re creating Good Content, don’t broadcast it! We have enough low-quality stuff as it is, out there.”
I think people often want to create, though. It’s one of the default responses people have if you ask them “Say you could live comfortably without needing to work. What would you do then?” (“Well, I’d write. Or I’d learn to play an instrument...”)
Often, though, implementation takes far more time than coming up with the initial idea. There’s an asymmetry across many fields where the actual ideation is done by a small group of people, while putting it into practice requires maybe 10X as many. (EX: the people who design the look/feel of a piece of software at a company vs those who build it.)
Thus, if you want people to join your project (which is of course great because you came up with it), you’ll need to convince other people to go with you. On the flip side, I think there’s a skill worth practicing where you let go of idea ownership. Stuff is going to get done, and you’re going to be doing it; whoever came up with the idea might be less important than whether or not you want the stuff to happen.
But maybe the desire for individual ideation points to something important. A really large number of people seem to want to partake in creative endeavors.
Years ago, I participated in making open-source computer games. My experience suggests that if you want people to join your project, you should show them that you are able to do the project alone, if necessary.
You should make, as soon as possible, a version 0.01 of the game, containing a playable first level with ugly graphics. That demonstrates that given enough time, you would be able to complete the game (you could complete the code, and you could hire someone to do graphics and music). Thus people who decide about joining you have a signal that their contribution to this project will be meaningful: that likely one day their work will be a part of a complete product. (Another way to achieve the same outcome is to already have another project complete, and make that known. But you can’t do this with your first project, obviously.)
For the contributor, it is a sad experience if they spend time and energy on your project, and it never gets completed. People who got burned like this will be looking for red flags. “All talk and no code” is pretty bad, even if you have detailed plans and tons of beautiful pictures or 3D models. (I knew a group of people who spent the first year doing detailed 3D models containing thousands of polygons, without writing a single line of code. The models were truly beautiful. But after the year, when they started writing code, they found out that actually the hardware existing at that moment was unable to draw one such model in real time… and they planned to have hundreds. The project was never finished. You want to find out this kind of bad news as soon as possible.)
I think this generalizes outside of computer games like this: convince people that the project would eventually get completed even without their help, preferably by doing a smaller version of what you want to do. If you want to write a book, write the first chapter. If you want to draw a comic, draw a few characters and then an example page or two of the story. If you want to organize a summer camp, organize a small party...
And I completely agree that sometimes (actually, quite often) the right thing to do is to join someone else’s project. But then you need to examine the red flags.
Creative work is a signal of many things. Most directly, the skills you are using, but also self-discipline and long-term thinking (that you spent your time learning the skill, instead of e.g. reading social networks in your free time), social skills (if the project requires cooperation and support of other people), and also wealth (the less time and energy you waste doing your daily job, the more you can spend on your hobby). Of course people would be happy to radiate these signals.
But I am afraid it is a zero-sum game. I mean, from the perspective of the creative people competing for the audience. If as a side effect, many cool things are produced, that is a positive externality.
Your advice about demonstrating that you are capable alone is really interesting. Thanks for the extended examples!
Here’s something that feels like another instance of the deontologist vs consequentialist abstraction, except that the particulars of the situation are what stick out to me: when I choose between doing something sane and doing something endorsed by an official rule, I’ll more often than I’d like opt to do the endorsed thing, even when it’s obviously worse for me.
Some examples, of varying quality:
Not jaywalking, even when it’s in a neighborhood or otherwise not-crowded place.
Asking for permission to do obvious things instead of just doing them.
Focusing on the literal words that someone initially said, rather than their intent, or if they later recant.
Letting harmful policies happen instead of appealing against them.
I’m reminded of that study which showed that people following an evacuation robot stayed in a room even when there was a fire, even after the robot had previously been observed to be faulty. There’s something about rules that overrides appeals to sanity. I’m a little worried that I bias towards this side rather than just doing the thing that works out better.
There are of course benefits to choosing the official option. The biggest one is that if someone questions your judgment later on, you can appeal to the established rules. That gives you a lot of social backing to lean on.
I think there’s also a weird masochistic aspect of craving pity, of wanting to be in a situation that seems bad through no fault of my own, so I can absolve myself of responsibility. Something about how this may once have been a way to secure more resources, via a pity play?
Malcolm Ocean gets it. There’s a terrible thing that happens when you try to encapsulate your essay with a title. Somehow, the label takes on a life of its own, and you sometimes forget the content inside the essay.
This happens to my own essays where I think “Oh, huh, this essay is called ‘Learning from Past Experiences’”. Sounds kinda boring.
And in fact it was not boring and it was good.
I’m thinking of maybe transitioning to just numbers + summaries instead.
For example, a format like: Essay 10 [Fading novelty, ways to address it, and a brief digression into typography.]
I’ve been thinking about interpretable models. If we have some system making decisions for us, it seems good if we can ask it “Why did you suggest action X?” and get back something intelligible.
So I read up on what sorts of things other people have come up with. Something that seemed cool was this idea of tree regularization. The idea is that decision trees are sort of the standard for interpretable models because they make splits along individual features. You essentially train a regularizer (itself a neural net) which proxies average path length (i.e. the complexity of a decision tree comparable to the actual model you’re training). Then, when you’re done, you can train a new decision tree which mimics the final neural net (the one you trained with the regularizer).
The author pointed out that, in the process of doing so, you can see what features the model thinks are relevant. Sometimes they don’t make sense, but the whole point is that you can at least tell that they don’t make sense (from a human perspective) because the model is less opaque. You know more than just “well, it’s a linear combination of the inputs, followed by some nonlinear transformations, repeated a bunch of times”.
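The mimic step can be sketched on toy data. This is not the full tree-regularization training loop from the paper, just the final surrogate-fitting idea: train a decision tree on the network’s own predictions so the tree approximates its decision surface. All data, sizes, and feature names here are made up:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: the label depends on two of the four features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0.0).astype(int)

# The "black box" we want to interpret.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X, y)

# Fit the surrogate tree on the *network's* predictions, not the true
# labels, so the tree mimics the net's decision surface.
mimic = DecisionTreeClassifier(max_depth=3, random_state=0)
mimic.fit(X, net.predict(X))

# Inspect which features the tree (and so, roughly, the net) splits on.
print(export_text(mimic, feature_names=["A", "B", "C", "D"]))
```

Reading the printed splits is exactly the “less opaque” inspection described above: if the tree keeps splitting on a surprising feature, that’s the cue to ask why the model finds it useful.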
But if the features don’t seem to make sense, I’d still like to know why they were selected. If the system tells us “I suggested decision X because of factors A, B, and C” and C seems really surprising to us, I’d like to know what value it’s providing to the prediction.
I’m not sure what sort of justification we could expect from the model, though. Something like “Well, there was this regularity that I observed in all of the data you gave me, concerning factor C,” seems like what’s happening behind the scenes. Maybe that’s a sign for us to investigate more in the world, and the responsibility shouldn’t be on the system. But, still, food for thought.