Independent AI alignment researcher
Alex Flint
Very interesting indeed. Suppose we replaced "4. simulate the opponent's reaction to you" with:
Simulate the opponent's reaction many times and learn the probability p with which they will defect.
If p < k, cooperate; else defect (for some constant k).
Game theory with machine learning, what do you think?
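The thresholded strategy above can be sketched in a few lines. This is a minimal illustration, not a serious implementation: `simulate_opponent` is a hypothetical stand-in for whatever simulation procedure is available, and the constants are made up.

```python
import random

def decide(simulate_opponent, k, n_trials=1000, seed=0):
    """Estimate the opponent's defection probability p by repeated
    simulation, then cooperate iff p < k.

    simulate_opponent is a hypothetical callable that takes an RNG
    and returns True when the simulated opponent defects."""
    rng = random.Random(seed)
    defections = sum(simulate_opponent(rng) for _ in range(n_trials))
    p = defections / n_trials
    return "cooperate" if p < k else "defect"

# Toy opponent that defects about 30% of the time.
noisy_opponent = lambda rng: rng.random() < 0.3

decide(noisy_opponent, k=0.5)  # typically "cooperate", since p is near 0.3
decide(noisy_opponent, k=0.1)  # typically "defect"
```

Note that k encodes your risk tolerance, and the estimate of p only converges at a rate of roughly 1/sqrt(n_trials), so the number of simulations matters near the threshold.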
Hi,
I’m Alex and I’m studying computer vision at Oxford. Essentially we’re trying to build AI that understands the visual world. We use lots of machine learning, probabilistic inference, and even a bit of signal processing. I arrived here through the Future of Humanity Institute website, which I found after listening to Nick Bostrom’s TED talk. I’ve been lurking for a few weeks now but I thought I should finally introduce myself.
I find the rationalist discussion on LW interesting both on a personal interest level, and in relation to my work. I would like to get some discussion going on the relationship between some of the concrete tools and techniques we use in AI and the more abstract models of rationality being discussed here. Out of interest, how many people here have some kind of computer science background?
I agree but I think that there are some other forces at play in fashion too.
Fashion surely involves an element of exaggerating desirable body traits with clothing. High-heeled shoes that make the wearer appear taller, jackets that extend and exaggerate the shoulders, and dresses that enlarge and exaggerate the waist and breasts are some examples.
I suspect that there is also an element of intentionally identifying as part of a group by wearing similar clothing, regardless of whether that group is high-status or not.
Any others?
Life did not die out on Earth, or in any particular environment where it previously thrived, in spite of major changes in temperature and atmospheric composition and multiple large-scale disasters. This suggests life is very resilient. Every time life is wiped out in some part of Earth, that region is quickly recolonized.
Be careful of anthropic bias here. Taken alone, the argument "life did not die out on Earth" is invalid, because if it had, we wouldn't be here to observe it. However, the second point, that when some evolutionary niche is wiped out it is quickly recolonized, does seem valid to me, since it suggests systematic resilience to disaster.
Quickly break a problem into tractable pieces.
I learned this mostly from watching the most effective people around me: my boss when I worked for a web design firm, a few particular fellow students during my PhD, and so on.
Nice article, Kaj. This is a phenomenon I've come up against myself several times, so it's really nice to see a carefully worked analysis of the situation. In a probabilistic sense, perhaps intuitive differences are priors that arise from evidence a person no longer recalls directly: although the person may have rationally based their belief on evidence, they are unable to convince another person because they no longer have the original evidence at hand. I'm particularly thinking of cases where the "evidence" comprises many small experiences over a prolonged period, making it particularly difficult to replay for another person. A carpenter's intuition about the strength of a piece arises from years of working with wood, but no single piece of evidence could easily be recalled to transfer that intuition to someone else.
The recognition of features produces an activation, the strength of which depends not only on the degree to which the feature is present but a weighting factor. When the sum of the activations crosses a threshold, the concept becomes active and the stimulus is said to belong to that category.
This is also how linear classifiers in machine learning work, and many other statistical classifiers (support vector machines and so on) just replace "sum" with something else. On pattern recognition problems like "does this image contain a tree?" or "will this person repay their loan?", they far outperform hand-tuned decision trees, which classify by asking a series of yes/no questions. Given the nature of the complex sensory information we have to process, it's not surprising that our brains work the same way.
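The weighted-sum-and-threshold rule described above can be written down directly. This is just a sketch with invented feature names and weights, meant to show the shape of the computation rather than a trained classifier:

```python
import numpy as np

def linear_classify(features, weights, threshold):
    """Weighted sum of feature activations; the concept 'fires' when
    the sum crosses the threshold -- the same rule as a
    perceptron-style linear classifier."""
    activation = np.dot(features, weights)
    return activation > threshold

# Toy "is this a tree?" example with made-up features:
# [greenness, height, has_leaves]
weights = np.array([0.6, 0.3, 0.8])
tree = np.array([0.9, 0.7, 1.0])       # activation = 1.55
lamppost = np.array([0.1, 0.8, 0.0])   # activation = 0.30

linear_classify(tree, weights, threshold=1.0)      # True
linear_classify(lamppost, weights, threshold=1.0)  # False
```

Training a real classifier amounts to choosing the weights and threshold from labelled examples; the decision rule itself stays this simple.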
I support the idea
I agree with Julian completely, but I would add the observation that no country today has anything remotely resembling pure capitalism. Europe, the US, and the rest of the traditional "west" are particularly far from such an ideal.
We would have to enforce each of these below some age. It's never going to be a good idea to let two-year-olds hold political office or drive cars, and I think this holds for every one of the items you've mentioned. The debate is only about which age is the correct cutoff, and I agree that this parameter may need re-evaluating.
A utilitarian will evaluate the parents’ happiness along with the child’s. In this view, a parent may be right in applying rules to their child that increase their own well-being to a greater extent than their child’s situation is worsened, so long as overall happiness is increased.
“and can be forced to take psychoactive medications against their will.” is one I had… opinions… about. And yes, while I was being forced to do so. (to this day, I’m not sure my parents ever managed to really comprehend what it was that I objected to)
Wow, I just read about Ritalin on Wikipedia… ugh. I would be nearly as worried by doctors prescribing Ritalin to gullible adults as to children.
Excellent idea! I’ve tried various anti-procrastination schemes but not this.
On another note, one thing I've noticed about myself is that at the moment I have an important insight or get something really working, I'm inclined to get up from my desk and grab a drink or go talk to a friend or something similar. It always involves getting up and walking away from my desk, and I never actually need the drink or have a good reason to chat. I often don't realise that I'm doing it until I'm a couple of paces away. Has anybody else experienced this?
I think it is possible that pushing oneself close to the limit of one's willpower reserves could increase overall reserves in the future.
Consider the case of over-eating, in which pushing oneself close to the limit of stomach capacity causes the stomach to stretch and hence increase in capacity for the future.
That’s not to say it actually does, just that it could.
More importantly, though, the “willpower reserve” is a fairly coarse model, not a detailed map of actual brain functioning (though if anyone knows of detailed empirical investigation of this phenomenon then I’d be very interested). I don’t think it’s productive to probe in such detail—it’s like trying to discern houses from a low-resolution map of the entire Earth.
Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don’t exist, and so I look for proof of relevant talent and learning rate.
I’m not sure the olympiads are such a uniquely optimal selector. For sure there were lots of superstars at the IOI, but now that I’m doing a PhD I realise that many of those small-scale problem-solving skills don’t necessarily transfer to broader-scale AI research (putting together a body of work, seeing analogies between different theories, predicting which research direction will be most fruitful). Equally, I met a ton of superstars working at Google, and I mean deeply brilliant superstars, not just well-trained professional coders. Google is trying to attract much the same crowd as SIAI, but with far more resources, so insofar as it’s possible it makes sense to try to recruit people from Google.
Well, I don’t think Google is working on AGI explicitly (though I wouldn’t know), and I think they’re not working on it for much the same reason most research labs aren’t: it’s difficult, risky research outside the mainstream, and most people don’t put much thought into the implications.
Fair point. I actually rate (1) quite low just because so few people think of AGI as an immediate problem to be solved. Tenured professors, for example, have a very high degree of freedom, yet very few of them choose to pursue AGI compared to the manpower dedicated to other AI fields. Amongst Googlers there is presumably also only a very small fraction willing to tackle AGI head-on.
Well, for the olympiads, each country runs a training camp leading up to the actual olympiad, and they’d probably be more than happy to have someone from SIAI give a guest lecture. These kids would easily pick up the whole problem from a half-hour talk.
Google also has guest speakers, and someone from SIAI could certainly go along and give a talk. It’s a much more difficult nut to crack, though: Google has a somewhat insular culture and is constantly dealing with overblown hype, so many people may tune out as soon as anything that sounds too “futuristic” comes up.
What do you think?
Sure, but only in Australia I’m afraid :). If there’s anyone from SIAI in that part of the world then I’m happy to put them in contact.
Nice. What happens if you think you’re right on the cusp of a grade boundary, as in the OP’s footnote 1? There are separate cases to consider for being just under a boundary and just above one, weighing the value you place on a grade change against the potential gain or loss in intra-grade marks. Altogether, it’s fairly mathematically taxing to be rational...
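The boundary trade-off can be made concrete as a small expected-value calculation. Every number here is hypothetical, just to show how the cusp case and the mid-grade case come apart:

```python
def expected_value(p_boundary_jump, grade_value, mark_gain, mark_value):
    """Expected payoff of extra effort: the chance of jumping a grade
    boundary times the value of a whole grade, plus the value of the
    extra intra-grade marks. All parameters are hypothetical."""
    return p_boundary_jump * grade_value + mark_gain * mark_value

# Just under a boundary: a small mark gain has a real chance of
# flipping the grade.
near = expected_value(p_boundary_jump=0.4, grade_value=10,
                      mark_gain=2, mark_value=1)
# Mid-grade: the same effort almost never changes the grade.
mid = expected_value(p_boundary_jump=0.01, grade_value=10,
                     mark_gain=2, mark_value=1)
# near is about 6.0, mid is about 2.1
```

The same effort is worth nearly three times as much near the boundary in this toy setup, which is exactly why being rational about where to spend effort gets taxing.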