No need for him to contact me, better if he came and left a comment here (I’d also be happy to email him and relay the comment)
(sorry for taking over a month to reply)
I think we should consider looking at indigenous stories. I’ve heard that many of them encode genuinely useful knowledge about ecosystems, though this can be hard to discern from the outside: if you don’t live in the old ways in the old ecosystems, you won’t understand the myths. And colonisation is often so brutal that the connection is lastingly obscured; the ecosystem knowledge is lost, and by the time the survivors are ready to return to their roots, the myths are dead. They don’t mean what they used to, and they don’t serve a purpose in the new world. Some peoples recognise this and try to start remaking their myths (per my recommendation), but it’s hard to tell how well the new myths reflect the old ones.
Something I worry about is that a lot of things that qualified as entertainment to the ancients aren’t recognisable as entertainment to me. I could potentially be very confused by that. Was it really funny or insightful given the right cultural background, or were the audience just starved for novelty and willing to accept the bare minimum amount of wit? Was it a tacit metaphor for something or were they just amused by the idea of a literal talking fox?
How do you calculate logical correlation? Do we know anything about how this would work under UDT? Does UDT not really discuss it, or is it bad at it?
I feel about partial correlation the way I used to feel about the categorical imperative in general: I don’t think our formalisations discuss it well at all. However, I know that the CDT way is wrong, and I need a name for whatever the better way is supposed to be. What would you recommend? “Newcomblike reasoning”?
That may be true, but it is not a product of the general public not knowing UDT. A large number of people don’t think or act in a CDT way either, and a lot of people that don’t care for decision theory follow the categorical imperative.
I agree with avturchin, it’s an appropriate thought to be having. UDT-like reasoning is actually fairly common in populations that have not been tainted with CDT rationality (i.e., normal people); it’s usually written off by CDT rationalists as moralising or collectivism. This line of thinking doesn’t require exact equivalence: the fact that there are many other people telling many other communities to prep is enough that all of those communities should consider the aggregate effects of that reasoning process. They are all capable of saying “what if everyone else did this as well? Wouldn’t it be bad? Should we really do it?”
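To make the “what if everyone else did this as well?” step concrete, here’s a toy payoff model. All of the numbers (the private benefit, the congestion cost, the count of communities) are invented for illustration; the point is the structure of the two reasoning styles, not the values.

```python
# Toy contrast between CDT-style reasoning (hold everyone else's choice fixed)
# and UDT-style reasoning (choose the output of a decision procedure shared by
# all correlated agents). All numbers are made up for illustration.

N = 1000  # communities whose reasoning is correlated (they all read the same advice)

def payoff(i_prep: bool, n_prepping_total: int) -> float:
    """Payoff to one community: a private benefit from prepping, minus a
    shared shortage/congestion cost that grows with how many communities prep."""
    private = 5.0 if i_prep else 0.0
    congestion = 6.0 * n_prepping_total / N
    return private - congestion

# CDT-style: treat the other communities' choices as fixed and compare my
# two options. Prepping then looks strictly better no matter what others do.
others = N // 2  # any fixed number of other prepping communities
cdt_prefers_prep = payoff(True, others + 1) > payoff(False, others)

# UDT-style: I am choosing the policy that all N correlated communities run,
# so the real comparison is "everyone preps" vs "no one preps".
udt_all_prep = payoff(True, N)    # 5 - 6 = -1
udt_none_prep = payoff(False, 0)  # 0

print(cdt_prefers_prep)              # prep dominates from the fixed-others view
print(udt_all_prep < udt_none_prep)  # but the shared policy "prep" is worse
```

With these made-up numbers, prepping dominates when you hold everyone else fixed, yet the shared policy “everyone preps” comes out worse than “no one preps”; the aggregate effect only shows up once you treat your choice as correlated with the other communities’.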
Decrease the likelihood of others developing and/or sharing the information
Promote ideas that make the information hazard seem ridiculous or uninteresting. An example that may or may not be happening: the US government enabling stories of extraterrestrial origin to hide the possibility that they have unreasonably advanced aerospace technology, materially, by encasing it in dumb glowy saucer stuff that doesn’t make any sense. (A probably fictional example is good here, because if someone were smart enough and motivated enough to hide something like this, I probably wouldn’t want to tell people about it. If this turns out not to be fictional, US govt, I’m very sorry; we haven’t thought enough about this to understand why you’d want to hide it.)
If the information hazard concerned is going to be around for a long time, you might want to consider constructing an ideological structure that systematically hides it, under which the only people who get anywhere near questioning enough of their assumptions to find the information hazard also tend to be responsible enough to take it, and where its spread is universally limited. Cease speaking the words that make it articulable. It should be noted that this won’t look like a conspiracy from the inside. There will not be a single refutation of the idea under this ideology, because no one would think to write one. It will just seem naturally difficult for most people living under it to notice how the idea might ever be important.
I’m not sure what use “biggest fan” would have as a term, if it meant that. We would rarely ever want to look at or talk about the biggest fans of almost anything. To like something more than anyone else, you have to be weird. Per The Winner’s Curse, to get to the top, they’ll usually need to have made a mistake somewhere in their estimation of it, to like it a bit more than anyone should.
Perhaps “fandom” should come to mean “understanding”. You do have to like something quite a bit to come to understand it very well (many will claim to understand a thing they dislike better than the people who like it do; they are generally recognisably wrong).
Can I infer via nominative determinism that Scott Anderson is a friend of the rationalist community?
Unlike the thing the Litany of Gendlin addresses, anger is sometimes warranted. I think this calls for a different approach.
Anger is for punishing violations of moral codes. Did the subject of my anger really know my code?
We live in a big world. There are many different moral codes trying to coexist. I don’t know every code. Some of them don’t have names or signifiers. Was the subject of my anger following their own code?
If different codes conflict, that calls for a very sophisticated response.
Why aren’t there Knowers of Character who Investigate all Incidents Thoroughly Enough for The Rest of The Community to Defer To, already? Isn’t that a natural role that many people would like to play?
Is it just that the community hasn’t explicitly formed consensus that the people who’re already very close to being in that role can be trusted, and forming that consensus takes a little bit of work?
I’d guess there weren’t as many nutcases in the average ancestral climate as there are in modern news/rumor mills. We underestimate how often it’s going to turn out that there wasn’t really a reason they did those things.
I’ve heard of Zendo and I’ve been looking for someone to play Eleusis with for a while heh (maybe I’ll be able to get the local EA group to do it one of these days).
though insofar as they’re optimized for training rationality, they won’t be as fun as games optimized purely for being fun
Fun isn’t a generic substance. Fun is subjective. A person’s sense of fun is informed by something. If you’ve internalised the rationalist ethos, if your gut trusts your mind, if you know deeply that rationality is useful and that training it is important, a game that trains rationality is going to be a lot of fun for you.
This is something I see often during playtesting. The people who’re quickest to give up on the game tend to be the people who don’t think experimentation and hypothesising have any place in their life.
I am worried about transfer failure. I guess I need to include discussion of the themes of the game and how they apply to real world situations. Stories about wrong theories, right theories, the power of theorising, the importance of looking closely at cases that break our theories.
I need to… make sure that people can find the symmetry between the game and parts of their lives.
If you have an Android phone, sure. I’ll DM you a link to the APK. I should note, it’s pretty brutal right now, and I haven’t yet found a way to introduce enough primitives to the player to make really strict tests, so it’s possible to guess your way all the way to the end. Consider the objective to be figuring out the laws, rather than solving the puzzles.
The next question is: why aren’t people buying the offsetting? I seem to remember hearing that it was once an option in most ticket purchase processes, but it must have been an unpopular choice, because the option has disappeared. Now offsetting is going to be legally mandated, but apparently the legal mandate does not require enough offsetting to be done (past discussion: https://www.lesswrong.com/posts/XRTiojqqJ3wrFFZAf/can-we-really-prevent-all-warming-for-less-than-10busd-with#EbEWLtgcLQXzHjzCb )
This is probably the least important question (the answer is that some people are nuts) but also the one that I most want to see answered for some reason.
I’ve been developing a game. Systemically, it’s about developing accurate theories: the experience of generating theories, probing specimens, firing off experiments, figuring out where the theories go wrong, and refining the theories into fully general laws of nature that are reliable enough to produce perfect solutions to complex problem statements. This might make it sound complicated, but it does all of that with relatively few components. Here’s a screenshot of the debug build of the game over a portion of the visual design scratchpad (ignore the bird thing, I was just doodling): https://makopool.com/fcfar.png
The rule/specimen/problem statement is the thing on the left; the experiments/solutions that the player has tried are on the right. You can sort of see in the scratchpad that I’m planning to change how the rule is laid out, to make it more central and to make the tree structure as clear as possible (although there’s currently an animation where it sort of jiggles the branches in a way that I think makes the structure clear, it doesn’t look as good this way).
It might turn out to be something like a teaching tool. It illuminates a part of cognition that I think we’re all very interested in: not just comprehension, it also tests/trains (I would love to know which) directed creative problem-solving. It seems to reliably teach how frequently and inevitably our right-seeming theories will turn out to be wrong.
Playtesting it has been… kind of profound. I’ll see a playtester develop a wrong theory and I’ll see directly that there’s no other way it could have gone. They could not have simply chosen to reserve judgement and not be wrong. They came up with a theory that made sense given the data they’d seen, and they had to be wrong. It is now impossible for me to fall for it when I’m presented with assertions like “It’s our best theory and it’s only wrong 16% of the time”. To coin an idiom: you could easily hide the curvature of the earth behind an error rate that high. I know this because I’ve watched all of my smartest friends try their best to get the truth and end up with something else instead.
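As a hypothetical illustration of how much can hide behind a 16% error rate (the rule and the numbers here are invented for this sketch, not taken from the game): a theory can match the true law on 84% of the cases you test while being completely blind to one of the law’s clauses.

```python
# Invented example: a theory that is "only wrong 16% of the time" yet misses
# an entire structural feature of the true law.

def true_law(x: int) -> bool:
    # The real rule has an extra clause the theory never discovers.
    return x % 3 == 0 or x >= 76

def theory(x: int) -> bool:
    # The "best theory": right most of the time, structurally wrong.
    return x % 3 == 0

cases = range(100)
accuracy = sum(theory(x) == true_law(x) for x in cases) / len(cases)
print(accuracy)  # prints 0.84 -- yet the theory never sees the ">= 76" regime
```

The 16 disagreements are all concentrated in the regime the theory doesn’t know exists, which is exactly why an aggregate error rate can conceal a qualitative mistake.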
The game will have to teach people to listen closely to anomalous cases and explore their borders until they find the final simple truth. People who aren’t familiar with that kind of thinking tend to give up on the game very quickly. People who are familiar with that kind of thinking tend to find it very rewarding. It would be utterly impotent for me to only try to reach the group who already know most of what the game has to show them. It would be easy to do that. I really really hope I have the patience to struggle and figure out how to reach the group who does not yet understand why the game is fun, instead. It could fail to happen. I’ve burned out before.
My question: what do you think of that, what do you think of The Witness, and would you have any suggestions as to how I could figure out whether the game has the intended effects as a teaching tool?
No. Measure decrease is bad enough to more than outweigh the utility of the winning timelines. I can imagine some very specific variants that are essentially a technology for assigning specialist workloads to different timelines, but I don’t have enough physics to detail it, myself.
Sure. The question there is whether we should expect there to be any powerful agents with utility functions that care about that.