Syntactically it’s quite a bit better than an N-gram Markov chain: it gets indentation exactly right, it balances parentheses, braces, and comment start/end markers, delimits strings with quotation marks, and so on. You’re right that it’s no better than a Markov chain at understanding the “code” it’s producing, at least at the level a human programmer does.
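To make the contrast concrete, here’s a minimal character-level N-gram Markov chain (my own sketch, not from the article): the model only ever conditions on the last few characters, so an opening brace from twenty characters back is invisible to it, which is why its output can’t reliably balance delimiters the way the network’s can.

```python
import random
from collections import defaultdict

def train_ngram(text, n=3):
    """Character-level N-gram model: map each n-char context to observed next chars."""
    model = defaultdict(list)
    for i in range(len(text) - n):
        model[text[i:i+n]].append(text[i+n])
    return model

def generate(model, seed, length=40):
    """Sample one character at a time; the model only sees the last n chars."""
    n = len(seed)
    out = seed
    for _ in range(length):
        choices = model.get(out[-n:])
        if not choices:
            break
        out += random.choice(choices)
    return out

# With a three-character window, an open paren or brace far in the past
# is outside the context, so generated "code" routinely leaves
# delimiters unbalanced -- there's no mechanism that could close them.
corpus = "def f(x):\n    return (x + 1)\ndef g(y):\n    return [y, y]\n"
random.seed(0)
print(generate(train_ngram(corpus), "def"))
```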
Discussion on Hacker News. Definitely an interesting article, very readable and (to me) entertaining. But I agree with interstice that it doesn’t say much about strong AI.
Yes and no. Morality is certainly less fundamental than physics, but I would argue no less real a concept than “breakfast” or “love,” and has enough coherence – thingness – to be useful to try to outline and reason about.
The central feature of morality that needs explaining, as I understand it, is how certain behaviors or decisions make you feel in relation to how other people feel about your behaviors. Which is not something you have full control over. It is a distributed cognitive algorithm, a mechanism for directing social behavior through the sharing of affective judgements.
I’ll attempt to make this more concrete. Actions that are morally prohibited have consequences, both in the form of direct social censure (due to the moral rule itself) and indirect effects that might be social or otherwise. You can think of the direct social consequences as a fail-safe that stops dangerous behavior before real harm can occur, though of course it doesn’t always work very well. In this way the prudential sense of should is closely tied to the moral sense of should – sometimes in a pure, self-sustaining way, the original or imagined harm becoming a lost purpose.
None of this means that morality is a false concept. Even though you might explain why moral rules and emotions exist, or point out their arbitrariness, it’s still simplest and I’d argue ontologically justified to deal with morality the way most people do. Morality is a standing wave of behaviors and predictable shared attitudes towards them, and is as real as sound waves within the resonating cavity of a violin. Social behavior-and-attitude space is immense, but seems to contain attractors that we would recognize as moral.
That said, I do think it’s valuable to ask the more grounded questions of how outcomes make individuals feel, how people actually act, etc.
[Link] Small-Game Fallacies: A Problem for Prediction Markets
In my experience, micro optimizations like these represent yet another thing to keep track of. The upside is pretty small, while the potential downside (forget to cancel a card?) is larger. If you’re ok with paying the attentional overhead or it’s a source of entertainment, go for it.
Personally I’d rather use a standard rewards card (mine is 1.5% cash back), not have to think about it, and spend my limited cognitive resources on doing well at my job, looking out for new opportunities with large upsides, working on side projects, or networking.
That’s interesting, because to me it read more like “I’m going to write something interesting about anything you like, do some research for you, and even share the results” and “as long as I have to do this assignment I might as well make it useful to someone” but maybe that’s because I recognized the poster’s name, read his blog, etc.
I can see how someone might interpret it this way, though.
Not something I actually did last month, since I wrote the piece two years ago, but it feels like it since that’s when the validation arrived. A blog post of mine hit /r/basicincome and then /r/futurism, which are sitting at ~470 (98% positive) and ~1080 (92% positive) votes respectively, and found its way to Hacker News. Some of the discussion is pretty good. The relevant quote:
“Let us keep in mind how poorly we treat those who cannot currently contribute to society. Sooner or later we will have to face this question: how do we define personal worth in a world where most people have no economic value?”
The actual accomplishment of the month is a post on Christopher Alexander’s Notes on the Synthesis of Form, which won’t be as big a hit, and I’m ok with that.
[Link] YC President Sam Altman: The Software Revolution
Schmidhuber’s formulation of curiosity and interestingness as a (possibly the) human learning algorithm. Now when someone says “that’s interesting” I gain information about the situation, where previously I interpreted it purely as an expression of an emotion. I still see it as primarily about emotion, but now understand the whys of the emotional response: it’s what (part of) our learning algorithm feels like from the inside.
There are some interesting signaling implications as well.
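A toy rendering of the idea (my own crude sketch, not Schmidhuber’s actual formalism): interestingness tracks learning progress, the rate at which a learner’s prediction error improves as it observes a stream. Fully predictable streams and pure noise both score near zero; only learnable structure registers as interesting.

```python
def interestingness(errors):
    """Curiosity signal, crudely: the step-to-step improvement in a
    learner's prediction error. A stream the learner already predicts
    perfectly yields no progress; incompressible noise yields none
    either, because error never improves."""
    return [errors[i] - errors[i + 1] for i in range(len(errors) - 1)]

# Hypothetical error curves for three kinds of stream:
boring  = [0.0, 0.0, 0.0, 0.0]   # already mastered: nothing left to learn
noise   = [0.9, 0.9, 0.9, 0.9]   # incompressible: no learning possible
pattern = [0.9, 0.5, 0.2, 0.1]   # learnable structure: steady progress

print(interestingness(boring))   # all zeros
print(interestingness(noise))    # all zeros
print(interestingness(pattern))  # positive throughout
```

This is also why “that’s interesting” carries information: it tells you the speaker’s model of the situation is improving, not just that they feel something.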
This, I assume? (It took me a few tries to find it since first I typed in the name wrong and then it turns out it’s “Wardley” with an ‘a’.) Is the video on that page a good introduction?
There is undoubtedly some slop built into the system, both to cover ordinary fluctuations in demand (which is, after all, stochastic), and because inventory control is itself expensive and difficult and only worth doing up to a certain level of precision.
That said, there’s a fallacy here, the same one as in this recent post (addressed here, e.g.). In brief, what matters is not whether you cause stores to waste measurably less food with certainty, but the expected amount of change in food waste due to your actions, especially over the long term.
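The expected-value point can be illustrated with made-up numbers: even if any single purchase almost never visibly changes what a store orders, the rare cases where it tips a reorder threshold dominate the average.

```python
# Hypothetical numbers, for illustration only. Suppose buying one extra
# item has a 1-in-200 chance of tipping the store's reorder threshold,
# bumping the next order by a case of 20 items, and no effect otherwise.
p_reorder_bump = 1 / 200   # chance your purchase tips a reorder
case_size = 20             # items added to the order when it does

# The expected change per purchase is what matters, not whether any
# single effect is measurable.
expected_change = case_size * p_reorder_bump
print(expected_change)     # 0.1 items per purchase
```

Over hundreds of purchases the slop in the system averages out, and the long-run effect converges on this expectation.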
Speedcubing. I don’t recommend it, though—I started about a year ago and it sniped a significant amount of my free time in 2014, on the order of 400-500 hours. (I had a similar experience with Go in college.)
I’ve been fasting one day a week since the beginning of May of this year. I usually start Sunday evening and fast through Monday evening or Tuesday morning, around 24 to 36 hours, and this fits my schedule pretty well—alternate-day would be considerably more difficult. The trickiest part is declining offers from coworkers to go to lunch and then having to explain why. Sleeping through the night on Monday can be a little uncomfortable if I’m doing a longer fast.
I’ve fasted erratically for years (when I felt like it, which turned out to be once every month or two), but started the weekly cadence because I found out I had very high total cholesterol (~280 mg/dL) when I went to the doctor in May. When I donated blood in October my total cholesterol was down to ~190 mg/dL.
It’s hard to know how much of this effect to attribute to fasting, since I did make some other minor systematic changes to my diet (more fish, fewer pastries, a shift from butter to olive oil in cooking) and there might be other changes that I don’t know about or haven’t considered. Since I’m comfortable with this amount of fasting and since there are non-health-related benefits I suspect the VoI of a more careful experiment is low to negative. (I can imagine finding out there’s no fasting → cholesterol lowering effect, stopping the habit because of this, and losing out on the less tangible benefits.)
That’s consistent with my experience. That is, most people aren’t particularly impressed, or don’t want to let on that they are, and I’m only moderately impressed with myself. And I’m fine with that, since these days I make an effort not to indulge the urge to optimize for impressiveness, except evidently in threads like these.
Contrast this with juggling 5 balls, which is for me about the same level of difficulty (both in terms of learning the skill and performing it once learned). People are much more likely to be visibly impressed, though the way they show it isn’t always agreeable or complimentary.
Solved a Rubik’s cube in under 15 seconds. Still having trouble getting my averages below 25, though.
I generally agree, but I’d caution against raising threats to the level of mutual knowledge. Intuitively it feels dangerous to ask things like “are you threatening me?” Thinking about it for a few minutes, it seems that it’s dangerous in part because once a threat has been made explicit, the threatening party can no longer back down without losing face and credibility. The question also feels like a power play and can be seen as disrespectful.
It’s still good to know whether you’re just dealing with a hostile argument vs. a real threat vs. intimidation without intent to follow through, but when there’s a power differential it’s probably bad for the knowledge to be out in the open.
I consider myself a Vim power user and this doesn’t match my experience. Vim is a great tool and I use it for a lot of things, but it’s absolutely not a replacement for bash, screen, Chrome, etc.
I haven’t been playing on KGS recently, but if you’re interested in a teaching game send me a PM and we can schedule something. I’m around 4k.
First of all, I can highly recommend Nachmanovitch’s Free Play. It’s at the very least thought-provoking and entertaining—whether it helps you be more creative is harder to tell. I got a bit of mileage, creativity-wise, out of Comedy Writing Secrets, which I hear is well-regarded among professional humor writers. I wasn’t very diligent about the exercises, or I might have gotten more out of it.
Regarding LW-like thought and creativity, I’m reading through Minsky’s Society of Mind and the Puzzle Principle section talks about machines and creativity:
And he goes into a bit more detail.
My thoughts on this, cribbed more or less directly from my notes:
I think there’s an equivocation in common uses of the word “creativity.” There’s one sense, generally used by technical people, that means something like the ability to make intuitive leaps when solving a problem. Then there’s the other sense, which is probably closer to what most people mean: the attributive sense. That is, someone might be a creative person, meaning they make those intuitive leaps, yes, but they also have certain stereotypical personality traits—they’re quirky, they dress unconventionally, they’re artsy and emotional, and so on.
So Minsky’s answer doesn’t really adequately address what most people mean when they say you can’t program a machine to be creative.
But of course you can, and we’re getting better and better at this.