Is it addictive? Can you still sleep (as well as before) without it?
ShardPhoenix
This is interesting but would benefit from more citations for claims and fewer personal attacks on Eliezer.
A hard thing about trying to be transparent about our moderation decisions and actions is that this also requires publicly calling out a user or their content. So you get more transparency but also more embarrassment. I don’t have any good solution to this.
Maybe you could not display usernames in the rejected posts section (though this might conflict with transparency if a user feels they are being personally targeted).
I sometimes see posts like this that I can’t follow in depth due to insufficient math ability, but skimming them they seem important-if-true so I upvote them anyway. I do want to encourage stuff like this but I’m concerned about adding noise through not-fully-informed voting. Would it be preferable to only vote on things I understand better?
This whole drama is pretty TL;DR but based on existing vibes I’d rather the rules lean (if a lean is necessary) in favor of overly disagreeable gadflies rather than overly sensitive people who try to manipulate the conversation by acting wounded.
The ' petertodd' completions have a structure reminiscent of Chuck Norris jokes, only a bit darker. I think a few of them are actually Chuck Norris jokes with the name changed, e.g. “Chuck Norris doesn’t hunt, he waits”.
>Also, I’m sad whenever people look for an alternative place to post things. In my ideal (though likely unachievable) world, anyone could post anything to LessWrong and the site infrastructure would handle visibility perfectly so that things were only viewed by people who wanted to see them (and in priority order of what they want to see).
This sounds nice but if taken far enough there’s a risk of fragmenting the site community into a bunch of partially overlapping sub-communities, a la the chaos of Twitter.
This question appears to be structured in such a way as to make it very easy to move the goalposts.
Eliezer’s repeated claim that we have literally no idea about what goes on in AI because they’re inscrutable piles of numbers is untrue and he must know that. There have been a number of papers and LW posts giving at least partial analysis of neural networks, learning how they work and how to control them at a fine-grained level, etc. That he keeps on saying this without caveat casts doubt on his ability or willingness to update on new evidence on this issue.
If he thinks AI interpretability work as it exists isn’t helpful he should say so, but he shouldn’t speak as though it doesn’t exist.
>Recall that the Python primitive “sort” corresponds to a long segment of assembly code in the compiler.
This analogy is a bit off because Python isn’t compiled to native code; CPython compiles source to bytecode which is then interpreted at runtime, and “sort” is a hand-written C routine in the interpreter rather than compiler output. Also, a compiler’s end product is binary machine code, not assembly (assembly is essentially a human-readable notation for machine code, at most an intermediate step). So it would be better to talk about C and machine code rather than Python and assembly.
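To make the bytecode point concrete, here’s a minimal sketch using the standard-library `dis` module (CPython-specific behavior; instruction names vary a bit across versions):

```python
# CPython compiles this function to bytecode; the sorting itself
# happens inside the interpreter's C implementation of sorted()
# (Timsort), not in machine code generated from the Python source.
import dis

def use_sort(xs):
    return sorted(xs)

# The disassembly shows little more than a call to `sorted` --
# the actual sorting work is hidden behind one function call
# into the interpreter's compiled C code.
dis.dis(use_sort)
```

Running this prints a handful of bytecode instructions (a global load of `sorted`, a call, a return), which is the sense in which Python is “interpreted”: the bytecode is executed by the interpreter rather than translated to machine code for your program.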
Aside from that I thought that was a very interesting post with some potentially powerful ideas. I’m a little skeptical of how practical this kind of prompt-programming could be though, because every new LLM (and probably every version of an LLM, fine-tuned or RLHF-ed differently) is like a new CPU architecture and would require a whole new “language/compiler” to be written for it. Perhaps these could be adapted in the same way that C has compilers for various CPU architectures, but it would be a lot of work unless it could be automated. Another issue is that the random nature of LLM sampling means it wouldn’t be very reliable unless you set temperature=0, which apparently tends to give weak results.
>The consequence is the higher performance of programmers, so more tasks can be done in a shorter time so the market pressure and market gap for employees will fall. This means that earnings will either stagnate or fall.
Mostly agree with your post. Historically higher productivity has generally led to higher total compensation, but how this affects individuals during the transition period depends on the details (eg how much pent-up demand for programming is there?).
An interesting theory that could use further investigation.
For anyone wondering what’s a Waluigi, I believe the concept of the Waluigi Effect is inspired by this tongue-in-cheek critical analysis of the Nintendo character of that name: https://theemptypage.wordpress.com/2013/05/20/critical-perspectives-on-waluigi/ (specifically the first one, titled “I, We, Waluigi: a Post-Modern analysis of Waluigi” by Franck Ribery)
You’re probably right, but a potential contrary take is that learning to emotionally cope with loss and frustration is part of the purpose of the game.
How should AI systems behave, and who should decide? [OpenAI blog]
I run a relevant meetup but TBH not sure what the value of this would be (had the same thought about the global one so didn’t apply for that either). Our meetup isn’t particularly formal or serious so going on a kind of paid “business trip” for it seems a bit odd or wasteful. What’s the intention?
I enjoyed this post but the first example is probably too long and technical for anyone not familiar with poker.
>I view it as highly unlikely (<10%) that Putin would accept “Vietnam” without first going nuclear, because it would almost certainly result in him being overthrown and jailed or killed.
Not obvious to me that this is true. If it was, I would have expected more escalation/effort from Russia already by this point.
How much difference would using tactical nuclear weapons actually make?
Good review. From what I’ve read, the root of the great divergence is the Catholic church’s ban on cousin marriage (for its own reasons), which supposedly led to less clannishness and a higher-trust society in much of Western Europe.