This was an extremely enjoyable read.
Good fun.
I once met an ex-Oracle sales guy who went on to become a medium-bigwig at other companies.
He justified it by calling it "selling ahead", and it started because the reality is that if you tell customers no, you don't get the deal. They told the customers they would have the requested features; the devs would only find out once the deal was signed. No one in management ever complained, and everyone else on his "team" was doing it.
Part of the problem with verifying this is the number of machine learning people who got into machine learning because of lesswrong. We need more machine learning people who came to doom conclusions of their own accord, independent of hpmor etc., as a control group.
As far as I can tell, the people worried about doom overlap 1:1 with lesswrong posters/readers, and if doom were such a threat, we'd expect some number of people to come to the conclusion independently, of their own accord.
This post was inspired by parasitic language games.
I wonder how long it’s going to be until you can get an LLM which can do the following with 100% accuracy.
I don’t care about the ai winning or losing, in fact, I would leave that information to the side. I don’t care if this test is synthetic, either. What I want is:
The ai can play chess the way normal humans do: it obeys the rules, uses pieces normally, etc.
The ai holds the entire state of the chess board internally, without needing a context window to keep it. (i.e., it's playing blind chess and doesn't get the equivalent of notecards; the memory is not artificial memory.)
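The criteria above could be sketched as a test harness. This is a minimal sketch in Python, with a hypothetical `stub_model` standing in for the LLM under test (a real run would swap it for a model given no move history in context); castling, en passant, and promotion are ignored for brevity:

```python
# Sketch of a "blind chess" state-tracking harness. Moves are in
# coordinate form like "e2e4". The stub model here trivially passes;
# the actual test would hand the model nothing but one move at a time.

def initial_board():
    """Ground-truth board as a dict: square -> piece ('P', 'n', ...)."""
    board = {}
    back = "RNBQKBNR"
    for i, file in enumerate("abcdefgh"):
        board[file + "1"] = back[i]          # white back rank
        board[file + "2"] = "P"              # white pawns
        board[file + "7"] = "p"              # black pawns
        board[file + "8"] = back[i].lower()  # black back rank
    return board

def apply_move(board, move):
    """Apply a coordinate move ("e2e4") in place: lift piece, drop it."""
    src, dst = move[:2], move[2:]
    board[dst] = board.pop(src)

def stub_model(history):
    """Stand-in for the LLM: reconstructs the board from move history.
    A real test would demand this with NO history in the context."""
    board = initial_board()
    for move in history:
        apply_move(board, move)
    return board

def state_matches(history):
    """Harness check: does the model's claimed board equal ground truth?"""
    truth = initial_board()
    for move in history:
        apply_move(truth, move)
    return stub_model(history) == truth
```

For example, `state_matches(["e2e4", "e7e5", "g1f3"])` checks that the model's internal board agrees with the harness after three plies; legality checking (the first criterion) would sit on top of this.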
The post I’m working on tries to call out, explicitly, long-term memory without “hacks” like context hacks or databases/lookup hacks.
Most ai groups don't seem to be releasing their LLMs, so the incentive on this kind of test would be to defect, as we saw with DOTA 2, AlphaStar and their cohort, where significant shortcuts were taken to get a spicy paper title and/or headline. Neutral third parties should also be allowed to review the implemented ai codebase, even if the weights/code aren't released.
Does the data note whether the shift is among new machine learning researchers? Among those who have a p(Doom) > 5%, I wonder how many would come to that conclusion without having read lesswrong or the associated rationalist fiction.
As a relatively new person to lesswrong, I agree.
The conversations I've read which end in either party noticeably updating one way or the other have been relatively rare. The one point I'm not sure I agree with is whether being able to predict a particular disagreement is a problem.
I suppose the problem is being able to predict the exact way in which your interlocutors will disagree? If you foresee someone disagreeing in a particular way, account for it in your argument, and they then disagree anyway, in the exact way you tried to address, that's generally just bad faith.
(though sometimes I do skim posts, by god)
I don’t think there’s any place quite like lesswrong on the entire internet. It’s a lot of fun to read, but it tends to be pretty one-note, and even if there is discord in lesswrong’s song, it’s far more controlled; Eru Ilúvatar’s hand can yet be felt, if not seen. (edit: that is to say, it’s all the same song)
For the most part, people are generally tolerant of Christians. There is even a Catholic who teaches (taught?) at the Center For Applied Rationality, and there are a few other rationalist-atheists who hopped to Christianity, though I can’t remember them by name.
Whether or not it’s the place for you, I think you’ll find there’s a lot of pop!science, and if you are a real physicist, there are more and more posts where people who do not know physics act like they do, and correcting them is difficult. It depends on whether you can tolerate that.
I think the lesswrong community is wrong about x-risk and many of the problems about ai, and I’ve got a draft longform with concrete claims that I’m working on...
But I’m sure it’ll be downvoted, because the bet has goalpost-moving baked in, plus lots of goddamn swearing, so I'm hesitant to post it.
(Porting and translating comment here, because this post is great):
Goddamn I wish people would just tell me when the fuck they’re not willing to fucking budge. It’s a fucking waste of time for all parties if we just play ourselves to exhaustion. Fuck, it’s okay to not update all at once, goddamn Rome wasn’t built in a day.
My comment may be considered low effort, but this is a fascinating article. Thank you for posting it.
Perhaps “term” is the wrong, ahem, term.
Maybe you want “metrics”? There are lots of non-GDP metrics that could be used to track ai’s impact on the world.
Instead of the failure mode of saying “well, GDP didn’t track typists being replaced with computers,” maybe the flipside question is “what metrics would have shown typists being replaced?”
I wasn’t aware that Eliezer was an experienced authority on SOTA LLMs.
Thanks. fyi, i tried making the post i alluded to:
Something possibly missing from the list, is breadth of first-hand experience amidst other cultures. Getting older and meeting people and really getting to know them in such a short lifespan is really, really hard!
And I don’t just mean meeting people in the places we already live. Getting out of our towns and countries and living in their worlds? Yeah, you can’t really do that. Sure, you might be able to move to <Spain> or <the Philippines> for a couple of years, but then you come home.
It’s not just death here: the breadth of experiences we can even have is limited, so our understanding of others and the problems they face is limited, and the solutions we come up with often end in terrible failures.
Left as comment, rather than answer because it feels tangential.
I read it as “People would use other forms of money for trade if the government fiat ever turns into monopoly money”
I propose another discussion norm: committing to being willing to have a crisis of faith in certain discussions; and, if not, de-stigmatizing admitting when you are, in fact, unwilling to entertain certain ideas or concepts, with participants respecting those limits.
While I find the Socrates analogy vivid and effective, I propose putting critics on posts in the same bucket as lawyers. Socrates had a certain set of so-called principles (choosing to die for arbitrary reasons); most people are not half as dogmatic as Socrates, and so the analogy/metaphor falls short.
While my post is sitting at negative two, with no comments or feedback… Modeling commenters as if they were lawyers might be better? When the rules lawyers have to follow show up, lawyers (usually) do change their behavior, though they naturally poke and prod as far as they can within the bounds of the social game that is the court system.
But also, everyone who is sane hates lawyers.
That framing makes sense to me.
Feel free to delete this if it feels off-topic, but on a meta note about discussion norms, I was struck by that meme about C code: basically, the premise that code quality is higher where there's swearing.
I was also reading discussions on the linux mailing lists. The discussions there are clear, concise, and frank. And occasionally, people still use scathing terminology and feedback.
I wonder if people would be interested in setting up a few discussion posts where specific norms get called out to “participate in good faith but try to break these specific norms”
And people play a mix-and-match to see which ones are most fun, engaging and interesting for participants. This would probably end in disaster if we started tossing slurs around willy-nilly, but sometimes while reading posts, I think people could cut the verbiage by 90% and keep the meaning.