On this topic you might be interested in skimming Zvi’s three dating roundup posts. Here’s the third, which covers dating apps in the first two headings, but all three posts mention them a lot (Ctrl + F “dating app”).
Or if you’re instead in the mode of deciding what to do next, or making a schedule for your day, etc., then that’s different, but working memory is still kinda irrelevant because presumably you have your to-do list open on your computer, right in front of your eyes, while you do that, right?
When looking at a to-do list, I've personally found it noticeably harder to decide which of e.g. 15 tasks to do than which of <10 tasks to do. And this applies to lists of all kinds. A related difficulty spike appears once a list no longer fits on a single screen and requires scrolling.
If you find that you’re reluctant to permanently give up on to-do list items, “deprioritize” them instead
I’ve found that there’s value in having short to-do lists, because short lists fit much better into working memory and are thus easier to think about. If items are deprioritized rather than getting properly deleted from the system, this increases the total number of to-dos one could think about. On the other hand, maybe moving tasks to offscreen columns is sufficient to get them off one’s mind?
(Granted, lots of text editors have affordances for going through a document’s history to retrieve deleted text. But I find them a hassle to use.)
It seems to me like a both easier and more comprehensive approach would be to use a text editor with proper version control and diff features, and then to name particular versions before making major changes.
From here:
Profit Participation Units (PPUs) represent a unique compensation method, distinct from traditional equity-based rewards. Unlike shares, stock options, or profit interests, PPUs don’t confer ownership of the company; instead, they offer a contractual right to participate in the company’s future profits.
In the war example, wars are usually negative sum for all involved, even in the near-term. And so while they do happen, wars are pretty rare, all things considered.
Meanwhile, the problem with AI development is that there are enormous financial incentives for building ever more powerful AI, right up to the point of extinction. Which also means that you need not just some but all people to refrain from developing more powerful AI. This is a devilishly difficult coordination problem. What you get by default, absent coordination, is that everyone races to be the first to develop AGI.
Another problem is that many people don’t even agree that developing unaligned AGI likely results in extinction. So from their perspective, they might well think they’re racing towards a utopian post-scarcity society, while those who oppose them are anti-progress Luddites.
You might appreciate the perspective in the short post Statistical models & the irrelevance of rare exceptions. (I previously commented something similar on a post by Duncan.)
In case you haven’t seen it, you might like dynomight’s recent post Thoughts on seed oil.
On what research policymakers actually need
Flippant response: people pushing for human extinction have never been dead under it, either.
Thanks for writing this!
Typos & edit suggestions, for the post at dynomight.net, not in order: (feel free to ignore)
Stephan Guyunet → Stephan Guyenet
The fourth mechanism is saturated fat free radicals. → saturated fat causing / producing free radicals (?)
When humans build complex systems we modularize, → systems, we modularize
That might suggest that that seed oils → That might suggest that seed oils
Had cholesterol that looked slightly better by most measures → Had cholesterol that looked slightly better by most measures.
I don’t see this as a conclusive, → I don’t see this as a conclusive argument,
the experimental evidence suggest → the experimental evidence suggests
rich in lionleic acid. → linoleic
These “inconvenient” results were mostly ignored until 43 years later, Ramsden et al. (2016) came around → later, when Ramsden
meaning the average subject was only in the trial for only one year. → for one year
There’s a whole sub-debate debate about → sub-debate about
despite eating lots saturated-fat-rich croissants or whatever. → lots of
looked at trials of trials that increased linoleic acid or omega-6 fats → looked at trials that
metabolism of lionoleic acid → linoleic
low levels of LA consumption (Liou and Innis (2009). → (missing closing parenthesis)
with a long term trend of people → long-term
The leftmost part of the plot is an estimate for men born in 1882 in 1932 (when they were 50) → for men born in 1882 living in 1932
But the Citadel, if anything is decreasing → But for the Citadel, if anything BMI is decreasing
hunter-gathers → hunter-gatherers
f some mechanism turned out to part of a larger, more complicated story. → turned out to be part
This book chapter and this paper, maybe?
Thanks for writing this post, I really liked it!
Due to the high upvotes, I figure it has a decent chance to feature in the LW Review for 2024, so I figured I’d make some typo & edit suggestions. Feel free to ignore.
An approach that may not be well received in all social circles, but probably in those closer to lesswrong, is → An approach that may not be well received in all social circles, but probably is well received in those closer to LessWrong, is [I feel like an “is” is missing in the middle, but this edit makes the sentence a bit awkward due to the “lesswrong, is” follow-up]
in exchange for the utility you get out of it yourself → in exchange for the utility you yourself get out of smoking
The idea is that when when people make some decision → The idea is that when
when people make some decision instead of deciding for the other option. → instead of deciding on the other option.
even though that would not be expected thing to do. → even though that would not be the expected thing to do.
opt-in style questions → opt-in-style questions
Although in the end this post is not meant to be normative and make any such should-claims. → Although in the end this post is not meant to be normative and not meant to make any such should-claims.
So these songs have now all gotten at least 1k views within 9 days. That seems like a great performance, right? I wonder where all the traffic came from. Besides this LW post, presumably the recent ACX link also helped a ton. But I do also wonder what fraction of the traffic came organically via the YouTube algorithm itself.
No, those are clickbait. 4 is straightforwardly misleading with the meaning of the word “hunt”. 2 and 3 grab attention via big dollar numbers without explaining any context. And 1 and 5 are clickbait but wouldn’t be if an arbitrary viewer could at any time actually do the things described in the titles, rather than these videos being about some competition that’s already happened.
Whereas a title saying “Click on this blog post to win $1000” wouldn’t be clickbait if anyone could click on the blog post and immediately receive $1000. It would become clickbait if it was e.g. a limited-time offer and expired, but would not be clickbait if the title was changed at that point.
Have you or anyone else on the LW team written anywhere about the effects of your new rate-limiting infrastructure, which was IIRC implemented last year? E.g. have some metrics improved which you care about?
I don’t really agree with this definition of clickbait. A title that merely accurately communicates what the post is about is usually a boring one, and thus communicates that the post is boring and not worth reading. Also see my comment here. Excerpt:
Similarly, a bunch of things have to line up for an article to go viral: someone has to click on your content (A), then like it (B), and then finally follow a call to action like sharing it or donating (C). From this perspective, it’s important to put a significant fraction of one’s efforts on quality (B) into efforts on presentation / clickability (A).
(Side note: If this sounds like advocacy for clickbait, I think it isn’t. The de facto problem with a clickbaity title like “9 Easy Tips to Win At Life” is not the title per se, but that the corresponding content never delivers.)
Maybe the takeaway is that it’s hard to build support behind the prevention of risks that 1. are technical/abstract, 2. fall on the private sector and not on individuals, and 3. have a heavy right tail. Given these challenges, organizations that find prevention inconvenient often succeed in lobbying themselves out of costly legislation.
Which is also something of a problem for popularising AI alignment. Some aspects of AI (in particular AI art) do have their detractors already, but that won’t necessarily result in policy that helps vs. x-risk.
This post seems like a duplicate of this one.