I used to be a professional games programmer and designer and I’m very interested in fun. There are a couple of good books on the subject: A Theory of Fun and Rules of Play. As a designer I spent many months analysing sales figures for both computer games and conventional toys. The patterns within them are quite interesting: for example, children’s toys pass from amorphous learning tools (bright objects and blobby humanoids), through mimicking parents (accurate baby dolls), to mimicking older children (sexualised dolls and makeup). My ultimate conclusion was that fun takes many forms whose source can ultimately be reduced to what motivates us. In effect, fun things are mental hacks of our intrinsic motivations. I gave a couple of talks on my take on what these motivations are. I’d be happy to repeat this material here (or upload and link to the videos if people prefer).
JohnDavidBustard
Reasonably Fun
Let’s not forget arguably the most important reason:
Because it makes us feel good.
We can feel superior to others because we can do something that few other people can. We can collect instances where our approach is beneficial and use them to validate our self-worth. And we can form a community that validates our strengths and ignores our weaknesses. All perfectly reasonable motivations (provided our satisfaction is a reasonable goal).
In my own field (Computer Vision), there are those who pursue it rationally (with rigorous mathematical analysis) and those who pursue it heuristically (creating a variety of systems and testing them on small samples). These approaches seem to mirror the determined search for truth and the pragmatic “go with what feels like it works” attitude. Without rigorously analysing them (although this may be possible), both seem to deliver benefit, with no clear winner in terms of producing techniques that are practically applied or used as the basis for further work. I think it is interesting to apply this meta-analysis to reason itself, i.e. can we scientifically determine whether approaching problems reasonably confers an advantage? Is there an optimal balance?
I like your post because it makes me feel bad.
What I mean by that is that it gets at something really important that I don’t like. The problem is that I get more pleasure from debates than almost anything else. I search for people who don’t react in the intensely negative way you describe, and I find it hard to empathise with those who do. I don’t do this because I think one method is ‘right’ and the other ‘wrong’; I just don’t enjoy trying to conform to others’ expectations, and prefer to find others who can behave in the same way. I think that for most people, deep down, community is more important than ideology (or indeed achieving anything), but a community where you cannot be yourself is one in which you always feel uncomfortable, whether it is intellectually confrontational or indirect. Does anyone know of any other environments like Less Wrong where an intellectually direct way of communicating won’t get you flamed to death?
[Question] Any taxonomies of conscious experience?
Hi all, I’m John Bustard. A friend suggested this site to me and I’ve just started getting into it. I’m a PhD student in computer vision, with a basic need for intellectual discussions (nice food and good debates are pretty close to heaven for me). I’m also very keen on improving my knowledge of statistical learning, which I feel is the key to understanding truth (the formalisation of understanding). I’m a fan of the singularity, with a preference for brain scanning and simulation as the triggering event. Above all, however, I’m attracted by the sense of community this site represents. I feel a great empathy with those whose posts reflect a dissatisfaction and frustration with the world around them. I have recently started being a bit more public about my own views, primarily in the hope of finding others who feel similarly. My posts on my own site tend to be more personal and much less rigorous, partly so that I can talk about ideas that are hard to be rigorous about, but also as an honest analysis of my own feelings. Please feel free to criticise them there. I’ll be much more thorough with the posts I make here. I hope I can contribute something interesting, and I look forward to reading your impressive catalogue of articles.
I would suggest both, and I would add that I don’t think this inherently diminishes the value of pursuing truth. I am increasingly of the belief that in order to be content it is necessary to pick one’s community and embrace its values. What I love about this community is its willingness to question itself as much as the views of others. I think it’s useful to acknowledge what we really enjoy and to be wary of explanations that attribute objective value to enjoyable activities. Doing so risks erasing self-doubt and can lead to the adoption of strong moral values that distort our lives to such an extent that they ultimately make us miserable.
One frustration I find with mathematics is that it is rarely presented like other ideas. For example, few books seem to explain why something is being explained prior to the explanation. They don’t start with a problem, outline its solution, provide the solution and then summarise the process at the end. They present one ‘interesting’ proof after another, requiring a lot of faith and patience from the reader. Likewise, they rarely include grounded examples within the proofs so that the underlying meaning of the terms can be maintained. It is as if the field is constructed in the form of puzzles rather than as a sincere attempt to communicate ideas as clearly as possible. Another analogy would be programming without the comments.
A book like Numerical Recipes, or possibly Jaynes’s book on probability, is the closest I’ve found so far. Has anyone encountered similar books?
Yes, I take your point. There isn’t a lot of material on fun, and game design analysis is often very genre-specific. I like Rules of Play, not so much because it provides great insight into why games are fun, but more as a first step towards being a bit more rigorous about what game mechanics actually are. There is definitely a lot further to go, and there is a tendency to ignore the cultural and psychological motivations (e.g. why being a gangster and free-roaming mechanics work well together) in favour of analysing abstract games. However, it is fascinating to imagine a minimal game; in fact, some of the most successful game titles have stripped their interactions down to the most basic motivating mechanics (Farmville or Diablo, for example). To provide a concrete example, I worked on a game (MediEvil Resurrection) where the player controlled a crossbow in a minigame. By adjusting the speed and acceleration of the mapping between joystick and bow, the sensation of controlling it passed through distinct stages. As the parameters approached the sweet spot, my mind (and that of other testers) experienced a transition from feeling I was controlling the bow indirectly to feeling like I was holding the bow. Deviating slightly around this value adjusted its perceived weight, but there was a concrete point at which this sensation was lost. Although Rules of Play does not cover this kind of material, it did feel to me like an attempt to examine games in a more general way, so that these kinds of elements could be extracted from their genre-specific contexts and understood in isolation.
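To make the crossbow example more concrete, here is a minimal sketch of the kind of joystick-to-bow mapping I mean. The mapping form and all parameter values are my own invention for illustration; the game’s actual tuning is not public.

```python
# Hypothetical joystick-to-bow mapping: the stick sets a target angular
# velocity ("speed"), and "accel" caps how fast the bow's velocity may
# change each frame. All numbers are invented for illustration.

def step_bow(bow_angle, bow_velocity, stick, speed=3.0, accel=12.0, dt=1 / 60):
    """Advance the bow one frame for a stick deflection in [-1, 1]."""
    target_velocity = stick * speed
    dv = target_velocity - bow_velocity
    max_dv = accel * dt                      # per-frame velocity change cap
    dv = max(-max_dv, min(max_dv, dv))       # clamp to the acceleration limit
    bow_velocity += dv
    return bow_angle + bow_velocity * dt, bow_velocity

# One second of full stick deflection at 60 frames per second.
angle, vel = 0.0, 0.0
for _ in range(60):
    angle, vel = step_bow(angle, vel, stick=1.0)
```

Sweeping `speed` and `accel` changes the perceived weight of the bow; the transition from “steering a cursor” to “holding the bow” happens somewhere in that two-dimensional parameter space.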
This may be entering dangerous territory, but to what extent does the psychology of a suicide bomber differ from that of, say, a First World War soldier?
In both cases their death is guaranteed, and in both cases they view the justification as being the protection of their community. Would the outcome of losing such a war be bad enough to justify most men risking their lives? Perhaps what is strange is having a society where killing yourself for a cause is rare?
Thank you, and I also agree with your comments on your posting. I generally prefer a balance of pragmatic action with theory. In fact, I view the ‘have a go’ approach to theoretical understanding as very useful as well. I think just roughly listing one’s thoughts on a topic and then categorising them can be very revealing and really helps provide perspective. I recently had a go at my priorities (utility function) and came up with the following:
To be loved
To be wise
To create things that I am proud of
To be entertained
To be respected
To be independent (ideally including being safe, relatively healthy and financially secure)
This is probably not perfect but it is something to build on (and a list I wouldn’t mind a friendly AI optimising for either).
Also, as with the positive effects mentioned in your article, I’ve found giving to charity makes it easier for me to feel love (or at least friendship) towards others and to feel more cared for in return (perhaps simply because giving to charity makes me slightly nicer towards everyone I meet).
My current focus is wisdom: I feel uncomfortable that I don’t have perspective on problems in society or the structure of the economy (i.e. how my quality of life is maintained). When I mention these ideas to others, their reaction is generally to describe the problems as too hard or impossible. I think this is a very interesting form of rationality failure, because the same people would go to enormous lengths to construct a solution to a technical problem if they were told it was not possible. Why don’t creative, intellectual and rational people apply their problem-solving skills to these kinds of issues? Why don’t they ‘have a go’?
I really like this post. It touches on two topics that I am very interested in:
How society shapes our values (domesticates us)
and
What should we value (what is the meaning of life?)
I find the majority of discussions extremely narrow, focusing on details while rarely attempting to provide perspective. Like doing science without a theory, just performing lots of specific experiments without context or purpose.
1 Why are things the way they are and why do we value the things we value? A social and psychological focus, Less Wrong touches on these issues but appears focused on specific psychological studies rather than any overall perspective (I suspect this would start to touch on politics and so would not be discussed). I think our understanding of the system we are a part of significantly shapes our sense of meaning and purpose and, as a result, strongly influences our society.
I would go so far as to suggest we are psychologically incapable of pursuing goals that are inconsistent with our understanding of how the universe functions (sorry, Clippy), i.e. if we are selfish-gene Darwinists we will value winning and reproductive success; if we have a Confucian belief that the universe is a conflict between order and chaos, we will pursue social stability and tradition. I have my own take on this for those who are interested (How we obtain our values, the meaning of life).
2 What problems do we want to solve? It seems much easier to find problems to solve than goals to attain. A recent post about Charity mentioned GiveWell. This organisation at least evaluates whether progress is made, but as far as I am aware there is no economics of suffering, no utilitarian (or otherwise) analysis of the relative significance of different problems. Is a destructive AI worse than global warming, or cancer, or child abuse, or obesity, or terrorism? Is there a rational means to evaluate this for a given utility function? Has anyone tried? (This is an area I’m looking into, so any links would be greatly appreciated.)
3 What can we do? Within instrumental rationality and related fields there are a lot of discussions of actions to achieve improvements in capability. Likewise for charity: lots of good causes. However, there seems to be relatively little discussion of what is likely to be achieved as a result of the action, as if any progress is justification enough to focus on it. For example, what will be the difference in quality of life if I pursue a maximally healthy lifestyle vs a typical no-exercise slacker life? In particular, do I want to die of a heart attack, or of cancer or Alzheimer’s (which, given my family history, are the ways I’m likely to go)? If we had a realistic assessment of return on investment, as well as of how psychologically likely we are to achieve things, we could focus our actions rationally.
I suggest that if we know how things work, what the problems are and what we can do about them, then we have a pretty good start on the meaning of life. I am frequently frustrated by the lack of perspective on these issues; we seem culturally conditioned to focus on action and specific theoretical points rather than trying to get a handle on it all. Of course, the former might be more fun, and that might be a sensible utility function. But for my own peace of mind I’d like to check there isn’t an alternative.
I haven’t performed any formal psychological evaluations (I’m not sure my bosses would have approved :) ) however the process of forming this theory did stem from informal experimentation.
In the case of pitching game concepts I experienced a dramatic change in their reviews once I started to explicitly construct them using this theory (roughly 3 months of trying other approaches). I constructed the concepts by finding popular themes for a given demographic (through sales figures) and then translating them into distinctive game experiences.
Perhaps more convincingly, I experienced a similar pattern in constructing the AI for Kung-Fu Chaos. I spent a number of months building an AI that attempted to mimic the psychology of real players, but it was not enjoyable to play against. A dramatic improvement came when I constructed the AI as a puzzle: first creating a ‘perfect’ AI opponent, and then explicitly creating a set of mistakes that the AI could make. Each opponent type was then given a different probability distribution over these mistakes. Watching playtesters interact with the game, joy was expressed not just when an opponent was beaten, but more specifically when ‘they got the hang of these guys’ before they had beaten many of them. I realise this relates to the enjoyment of solving problems, rather than the aesthetics I focus on in the article; however, it can also be seen as an enjoyment of model validation, explaining why ‘well constructed’ model-heavy games (like many board games) are so enjoyable even if they aren’t particularly thematically relevant (Power Grid, for example).
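The structure of that AI can be sketched in a few lines. This is my own minimal reconstruction of the idea, not the shipped code: the mistake names, opponent types and probabilities are all invented for illustration.

```python
import random

# Deliberate, recognisable mistakes layered on top of a "perfect" policy.
# The names and probabilities below are illustrative, not from the game.
MISTAKES = ["hesitate", "overcommit", "drop_guard"]

OPPONENT_TYPES = {
    # opponent type -> per-decision probability of each mistake
    "brawler": {"hesitate": 0.05, "overcommit": 0.30, "drop_guard": 0.10},
    "coward":  {"hesitate": 0.35, "overcommit": 0.05, "drop_guard": 0.05},
}

def perfect_action(state):
    """Placeholder for the ideal response to the current game state."""
    return "optimal_counter"

def choose_action(state, opponent_type, rng=random):
    """Play perfectly, except when a characteristic mistake fires."""
    for mistake, p in OPPONENT_TYPES[opponent_type].items():
        if rng.random() < p:
            return mistake          # an exploitable, learnable flaw
    return perfect_action(state)
```

The puzzle for the player is then to infer, from observed behaviour, which mistakes each opponent type tends to make and how often — which is exactly the “getting the hang of these guys” moment.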
I constructed the theory using this kind of game development experience combined with an attempt to explain aesthetic measures, such as facial beauty, composition, colour coordination etc.
I think this post starts to get to the heart of why ideas are frightening.
At first glance it seems strange to have evolved any mental system that attributes such weight to something (intellectual discussion) that has no immediate survival consequences.
However, studies have shown that status (community judgements of different members’ value) and legitimacy (whether a person has committed an appropriate or socially taboo action) carry significant effects on survival, and in severe cases these can last across generations (making them worse than, say, being eaten by an animal). This is because status determines who has influence (and may determine whether one gets to eat or not), and legitimacy determines whether one is attacked (in a community’s eyes, punished), with people so willing to enforce these ideas that they will suffer in order to maintain them.
In this sense the quote is entirely correct, thought is the most terrifying thing because thought carries with it changes in status and legitimacy rules. The examples in the quote demonstrate the power of thought, highlighting the kind of traditional social defenses thought can destroy.
An insult is the very name we give to incidents of this fear. The more directly we concentrate on the person speaking, the more obvious the association; but fundamentally, when thought is most powerful it alters our status and legitimacy values, and so, regardless of how obliquely we make statements, they are always going to be frightening, and thus experienced as an insult.
I think the reason you can tell that people are afraid is that they start getting angry at what you have said. The longer the discussion goes on, the angrier they get. If they were not afraid, the expected response would be interest (why do you think that?) or boredom. Many discussions become angry, so I suggest most discussions are frightening, and by extension the thought that caused the discussion in the first place could well be scary all by itself.
Thanks for the comment, I think it is very interesting to think about the minimum complexity algorithm that could plausibly be able to have each conscious experience. The fact that we remember events and talk about them and can describe how they are similar e.g. blue is cold and sad, implies that our internal mental representations and the connections we can make between them must be structured in a certain way. It is fascinating to think about what the simplest ‘feeling’ algorithm might be, and exciting to think that we may someday be able to create new conscious sensations by integrating our minds with new algorithms.
From what I understand, in order to apply Bayesian approaches in practical situations it is necessary to make assumptions which have no formal justification, such as the distribution of priors or the local similarity of analogue measures (so that similar but not exact predictions can be informative). This changes the problem without necessarily solving it. In addition, it doesn’t address AI problems not based on repeated experience, e.g. automated theorem proving. The advantage of statistical approaches such as SVMs is that they produce practically beneficial results with limited parameters. With parameter-search techniques they can achieve fully automated predictions that often have good experimental results. Regardless of whether Bayesianism is the law of inference, if such approaches cannot be applied automatically they are fundamentally incomplete and only as valid as the assumptions they are used with. If Bayesian approaches carry a fundamental advantage over these techniques, why is this not reflected in their practical performance on real-world AI problems such as face recognition?
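To make the “fully automated” point concrete, here is a toy sketch entirely of my own construction (not a real vision system): a one-parameter threshold classifier whose single parameter is chosen by grid search on accuracy, with no hand-tuned prior anywhere.

```python
# Toy illustration of automated parameter selection: a 1-D threshold
# classifier whose one parameter is picked by grid search. The data
# and the candidate grid are invented for illustration.

def accuracy(threshold, data):
    """Fraction of (value, label) pairs classified correctly by value >= threshold."""
    correct = sum((value >= threshold) == label for value, label in data)
    return correct / len(data)

def fit_threshold(train, grid):
    """Pick the grid value with the best training accuracy - no priors needed."""
    return max(grid, key=lambda t: accuracy(t, train))

# Invented data: values below ~5 are labelled False, above are True.
train = [(1, False), (2, False), (4, False), (6, True), (7, True), (9, True)]
test = [(3, False), (8, True)]

grid = [x / 2 for x in range(0, 21)]   # candidate thresholds 0.0 .. 10.0
best = fit_threshold(train, grid)      # chosen automatically from the data
```

Real systems search higher-dimensional parameter spaces with cross-validation rather than raw training accuracy, but the point stands: the whole pipeline runs without anyone specifying a prior by hand.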
Oh and bring on the down votes you theory loving zealots :)
I suggest just getting some casual exercise or watching some good films and TV shows. They’re full of emotionally motivating experiences.
I think there is a worrying tendency to promote puritan values on LW. I personally see no moral problem with procrastination, or even with feeling bad every so often. I might worry about missing deadlines or experiencing some practical consequence from not working on a task, but I wouldn’t want to add moral guilt on top. I think if people lose sight of the pleasures in life they become nihilistic, which in turn leads them to be selfish and cruel as an expression of their pain.
If you can feel good about yourself, you can recognise that the positive, playful fun that comes with idle pleasures might actually be the point. They represent the one value system that does seem pretty sensible. If you can enjoy them, you have the emotional energy to be nice and supportive to others. I certainly don’t want a friendly AI enforcing the morality of anti-procrastination, anti-unhealthy-eating, anti-indulgence or any other form of self-flagellating self-improvement. Let’s just be supportive of one another and try to have a good time.
I think your very first step, ‘Identify’, is the key to all this.
Is it rational to pursue an irrational goal rationally?
Our culture focuses on external validation, achievement and winning. My concern is that this is a form of manipulation focused on improving a society’s economic measures of value over an individual’s personal satisfaction.
In contrast, the science of happiness seems like a good start. This work seems to focus on developing techniques for coming to feel satisfaction with one’s current state. Perhaps a next step is to look at how communities and organisations can be structured to support this. Speaking for myself, I naively assumed that making computer games would be an enjoyable career because I thought that making a game and playing a game would be similar; this is not the case. Does anyone have any suggestions for careers or lifestyles where one can feel a sustained sense of satisfaction? Or indeed a rational means to select or create one?
I’ve wrestled with this disparity myself: the distance between my goals and my actions. I’m quite emotional, and when my goals and my emotions are aligned I’m capable of rapid and tireless productivity. At the same time, my passions are fickle and frequently fail to match what I might reason out. Over the years I’ve tried to exert my will over them, developing emotionally powerful personal stories and habits to try to control them. But every time I have done so it has tended to cause more problems than it fixes. I experience a lot of stress fighting with myself in this way and quickly lose the ability to maintain perspective or, more importantly, to prioritise. My reason becomes a tunnel-visioned rationalisation and, rather than being a tool for appropriate action, becomes a tool to reinforce an unwise initial judgement of my priorities.
More recently, I’ve come to accept that my conscious reasoning self is, to an extent, a passenger in an emotional mind. What’s more, that that emotional mind often has a much more sophisticated understanding of what will lead to a satisfying future than my own reasoning can provide. If I have the patience to listen (and occasionally offer it suggestions) I seem to get much closer to solving creative and technical problems, and more importantly, much closer to contentment, than if I try to force myself to follow an existing plan.
I think there is a real risk of letting one’s culture and community define goals that are not actually what we want, causing us to feel a sense of duty towards values that, deep down, we don’t share. Is our reasoning flawed, or do we just not understand our utility function?