Still feels hard to believe. The most viewed YouTube video has 15B views and I don’t think there are that many with over a billion. But you think one specific character.ai persona has nearly a billion conversations started?
https://en.wikipedia.org/wiki/List_of_most-viewed_YouTube_videos
I see the 864M interactions, which I don’t think means open conversations.
Curated. There’s an amusing element here: one of the major arguments for concern about powerful AI is how things will fail to generalize out of distribution. There’s a similarish claim here – standard economics thinking not generalizing well to the unfamiliar and assumption-breaking domain of AI.
More broadly, I’ve long felt many people don’t get something like “this is different but actually the same”. As in, AI is different from previous technologies (surprising!) but also fits broader existing trendlines (e.g. the pretty rapid growth of humanity over its history; zoom out and this is business as usual). Or the difference is that there will be something different and beyond LLMs in coming years, but even that is on trend, since LLMs were different from what came before.
This post helps convey the above. To the extent there are laws of economics, they still hold, but AI, namely artificial people (from an economic perspective at least), requires non-standard analysis, and the outcome is weird and non-standard too compared to many people’s expectations. All in all, kudos!
Thanks for the extra context. I mean, if we can get our design right then maybe we can inspire the rest ;)
There’s a new experimental feature of React, <Activity>, that’d let us allow for navigation to a different page and then returning to the feed without losing your place. I haven’t tried to make it work yet but it’s high on the to-do list.
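For concreteness, a minimal sketch of what that could look like, assuming an experimental React build that exports `Activity` (some canaries expose it as `unstable_Activity`); `Feed` and `PostPage` here are hypothetical stand-ins for the real components:

```tsx
import { Activity, useState } from 'react'; // Activity: experimental builds only

// Hypothetical stand-ins for the real feed and post-page components.
declare function Feed(props: { onOpenPost: (id: string) => void }): JSX.Element;
declare function PostPage(props: { postId: string; onBack: () => void }): JSX.Element;

function FeedWithPostPage() {
  const [openPostId, setOpenPostId] = useState<string | null>(null);

  return (
    <>
      {/* Hidden rather than unmounted: React keeps the feed's state,
          so returning to it doesn't lose your place. */}
      <Activity mode={openPostId === null ? 'visible' : 'hidden'}>
        <Feed onOpenPost={setOpenPostId} />
      </Activity>
      {openPostId !== null && (
        <PostPage postId={openPostId} onBack={() => setOpenPostId(null)} />
      )}
    </>
  );
}
```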
Oh no, that sounds no good at all. You might be relieved to hear I have it on my to-do list to explore an alternative to the overlay design.
I actually have a prototype of “Claude/LLM integrated into LessWrong” that makes it easy to load LW content into the context. I could enable that for you but it’s actually on Claude 3.5, iirc. Should maybe update it, check that it still works well, and let people try it out.
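For flavor, a minimal sketch of the “load LW content into the context” idea, assuming the `@anthropic-ai/sdk` TypeScript client; `fetchPostMarkdown` is a hypothetical helper for pulling a post body, and the model id is a placeholder, not what the prototype actually pins:

```ts
import Anthropic from '@anthropic-ai/sdk';

// Hypothetical helper that pulls a post body, e.g. via the GraphQL API.
declare function fetchPostMarkdown(postId: string): Promise<string>;

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function askAboutPost(postId: string, question: string): Promise<string> {
  const post = await fetchPostMarkdown(postId);
  const response = await client.messages.create({
    model: 'claude-3-5-sonnet-latest', // placeholder model id
    max_tokens: 1024,
    messages: [
      {
        role: 'user',
        content: `Here is a LessWrong post:\n\n${post}\n\nQuestion: ${question}`,
      },
    ],
  });
  // Return the first text block, if any.
  const block = response.content[0];
  return block.type === 'text' ? block.text : '';
}
```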
Making it easy to export content though is an alternative.
Haha. I kinda woulda liked to have the posts without the karma and be asked to estimate the karma for each... I feel like I would have gotten these directionally right, but you know, it wasn’t an advance prediction ;)
I’m curious whether they persist in seeming that way. I’ve realized that for me there are people who felt that way for a slice of time, but later didn’t, for no clear reason.
Six years ago we introduced Shortform (later renamed Quick Takes) to LessWrong. Here’s a meme-format thing we made at the time. How do people reckon it’s gone? h/t @Raemon
Cheers! I think we’ve thought about this but I’ll raise with the team again.
Reading A Self-Dialogue on The Value Proposition of Romantic Relationships, had the following thought:
There’s often a “proposition” and, separately, “implications of the proposition”. People often deny a proposition in order to avoid the implications, e.g. Bucket Errors.
I wonder how much “you’re perfect as you are” is an instance. People need to say or believe this to avoid some implication, but if you could just avoid the implication it would be okay to be imperfect.
I haven’t played this side of it, so I can’t say I’ve tried this, but I’d try putting something in your bio that solicits useful filtering info at the start, like “if you ping me or we match, please start by telling me/answering...”, and try to find a prompt there that’s informative for what you’re looking for. It might be as dumb as reading comprehension, or more like “what kind of relationship are you hoping for?” or “life values”. But I would iterate on it. An unintuitive thing to do would then be to also proceed further with some people who give answers you don’t like much, to see if you’re getting false negatives because the early screen is bad.
Another idea is to put in your bio stuff that matters to/about you (and is important) but would dissuade some suitors, letting them filter themselves out earlier.
My personal favorite filter, though, is writing. You might not get many takers, but “link me to your blog” is a way to get a lot more info about someone.
I just coded a fix for this, will get deployed soon.
I applaud you taking this seriously and saying the hard critical things. It is concerning and I do worry about the sign of everything we do.
I think there might exist people who feel that way (e.g. reactors above), but Yudkowsky/Soares, the most prominent doomers (?), are on the record saying they think alignment is in principle possible, e.g. the opening paragraphs of List of Lethalities. It feels to me like a disingenuous strawman for Dario to dismiss doomers with.
Ah yeah, that’s pretty silly (the cards are randomly sampled currently so just silly luck to have them ordered like this).
Huh, I never scroll that way but I see what you mean. I’ll see what I can do.
Yeah, frontend web development is a lot like this. The current AIs are stateless, already mad, and seemingly indefatigable, though they get a bit loopy the longer the conversation goes on. You feed them $$ and sanity points and problems get solved faster, hopefully. There’s sometimes sanity saved when debugging something gnarly, but in the regular course of things you (or at least I) end up spending it down. (It doesn’t matter how many times I ask it not to, Claude Opus 4 will revert to saying “you’re absolutely right!” about everything.) Move over autistic savant, we’ve got Alzheimer’s savant now.
My guess is that (particularly before we introduced recommendations of older posts to the posts list) people would find it strange to see old posts highlighted on the frontpage.
I think they also mix a broader metaphysical claim with a claim about practical strategy that could be made without the metaphysical claim.