Coding day in and out on LessWrong 2.0
Mod note: Replaced the link to the image with the actual image, since I assigned substantial probability to that being your intention, but our editor image handling being too confusing and you giving up. But feel free to revert it if it was intentional.
Ah, yeah. I agree that the popovers could use a lot of work on mobile. I was thinking you were referring to something else. In this case I agree with you and am reasonably annoyed at how we handle this myself.
I agree with this hypothetically, but I actually think this matters a lot in practice. Browsers are actually just quite slow at doing a lot of things. As a concrete example, the Facebook web-app (which I use since I don’t want Facebook installed on my phone) is many times slower than their native app, with scrolling often being visibly sluggish, and clicking on buttons having a visible delay.
And overridden click behaviors
Given that native mobile apps are often written in C, Objective-C or C++, just on a simple language level you should expect something like a 3-10x speedup from the switch from an interpreted language to a compiled language (sometimes more, sometimes less, depending on the application). In general, interpreted languages continue to be substantially slower (and while interpreted languages have been getting faster due to smarter execution environments and lots of fancy tricks, so have compiled languages, due to improvements in compilers).
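The interpreter overhead is easy to see even without leaving Python. The sketch below is just an illustration of that overhead, not a real native-vs-web benchmark: it times a hand-written interpreted loop against the same computation done in C via the built-in `sum()`.

```python
import timeit

# A loop executed bytecode-by-bytecode by the Python interpreter:
def py_sum(n):
    total = 0
    for i in range(n):
        total += i
    return total

n = 100_000
# The same work, dispatched once to C-implemented builtins:
interpreted = timeit.timeit(lambda: py_sum(n), number=50)
compiled = timeit.timeit(lambda: sum(range(n)), number=50)

print(f"interpreted loop: {interpreted:.4f}s")
print(f"C-backed sum():   {compiled:.4f}s")
print(f"speedup: {interpreted / compiled:.1f}x")  # typically in the low single digits here
```

The exact ratio varies by machine and workload, but the C-backed version reliably wins, for the same structural reason native apps tend to beat web apps.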
This seems roughly right to me. We’ve tried reasonably hard (though we could definitely do better) to make the mobile version of the site work well enough to use it on phones, which I do think is important, but I don’t feel like on the margin making a native app version is that valuable. In the long run I do hope that we can make LessWrong into a proper progressive web-app, which would allow users to read articles offline, access the site through their home-screen and generally treat it like an app (though performance-wise it would be worse, since progressive web-apps run on JavaScript and native apps get to be written in much faster languages).
There is also one additional consideration, which is that I am worried about making LessWrong the kind of thing that is primarily a phone app. I think by their nature phones are much worse for longform content, both reading it and creating it, and I think a LessWrong that was predominantly used by people on their phones would be forced to have much shorter content, and correspondingly be a lot more like the rest of the internet in its pressure to be short and snappy, and as a result fail to be able to grapple with problems in any real depth. I would never write a full LessWrong post on my phone, and I even notice that the comments I write on my phone or iPad tend to be lower quality, with fewer links to external resources and less thought put into the formatting or content. It’s also much harder to present a lot of information on a small phone screen, making certain types of intellectual labor quite difficult (as an example, if we were developing our tagging system primarily for a phone-driven audience, I think it would have to be quite a lot simpler and lose a lot of functionality, or require a lot more design effort put into it, which then corresponds to us building fewer other things).
I do think it’s likely that if things go well, we will eventually have an app, but these tradeoffs are why it hasn’t been a priority till now, and probably won’t be for a while longer.
Seems plausible to me, though I am not sure whether I would say probable. If you could do something like backprop through the human mind, then I think I can imagine outcomes similar to this:
Yeah, ok. I do think personalization is blocked on Algolia, and I didn’t really think about this as a potential solution to this (but it totally is). So yeah, maybe slighting Algolia was the right call.
I am pretty confused about what any of this has to do with Algolia. The primary problem to me appears to be that we don’t actually have a large fraction of the tags categorized in the tag hierarchy displayed on the All Tags page. We could show you a copy of the tag page table, but that would omit a lot of new tags, and also probably not be dense enough. We could develop some custom UI for that menu to group them, but that’s mostly a bunch of work (and doesn’t have much to do with Algolia).
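The grouping problem itself is simple to state. Here is a minimal sketch (hypothetical names throughout, not actual LessWrong code) of bucketing a flat tag list into a hierarchy, with a catch-all bucket for the new tags that haven’t been categorized yet — which is exactly the data that’s currently missing:

```python
from collections import defaultdict

def group_tags(tags, parent_of):
    """Group a flat list of tag names under their categories.

    tags: list of tag names; parent_of: tag -> category mapping.
    Tags with no known category land in "Uncategorized".
    """
    grouped = defaultdict(list)
    for tag in tags:
        grouped[parent_of.get(tag, "Uncategorized")].append(tag)
    return dict(grouped)

tags = ["Bayes Theorem", "Iterated Amplification", "Brand-New Tag"]
parent_of = {"Bayes Theorem": "Rationality", "Iterated Amplification": "AI"}
print(group_tags(tags, parent_of))
# {'Rationality': ['Bayes Theorem'], 'AI': ['Iterated Amplification'],
#  'Uncategorized': ['Brand-New Tag']}
```

The code is trivial; the work is in maintaining the `parent_of` mapping for every tag, which is the part that’s mostly labor rather than engineering.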
The site search will probably always have somewhat different constraints than normal database operations (in particular if we want to stay within the autocomplete paradigm), so I don’t think anything about this would get easier if we switch away from Algolia (things like this are actually a domain where Algolia is pretty great).
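For intuition about the autocomplete paradigm, here is a crude stand-in (a toy sketch, not how Algolia or the site actually works): plain case-insensitive prefix matching over titles. A service like Algolia layers typo tolerance, ranking, and low-latency indexes on top of something shaped like this, which is why it isn’t just a normal database query.

```python
def autocomplete(query, titles, limit=5):
    """Case-insensitive prefix match over tag/post titles.

    A toy version of search-as-you-type: real search services add
    typo tolerance and relevance ranking on top of prefix matching.
    """
    q = query.lower()
    hits = [t for t in titles if t.lower().startswith(q)]
    return sorted(hits)[:limit]

titles = ["Inverse Reinforcement Learning", "Iterated Amplification",
          "Internal Double Crux", "AI Boxing (Containment)"]
print(autocomplete("in", titles))
# ['Internal Double Crux', 'Inverse Reinforcement Learning']
```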
Out of curiosity, what was it that convinced you this isn’t an infohazard-like risk?
Some mixture of:
I think it’s pretty valuable to have open conversation about being in an overhang, and I think on the margin it will make those worlds go better by improving coordination. My current sense is that the perspective presented in this post is reasonably common among people in ML, so that marginally reducing how many people believe this is not going to make much of a difference, but having good writeups that summarize the arguments seems like it has a better chance of creating some kind of common knowledge that allows people to coordinate better here.
This post, more so than other posts in its reference class, emphasizes a bunch of the safety concerns, whereas I expect whatever post would eventually replace it not to do that very much
Curation in particular mostly sends out the post to more people who are concerned with safety. This post found a lot of traction on HN and other places, so in some sense the cat is out of the bag, and if it was harmful the curation decision won’t change that very much; meanwhile it seems like it would unnecessarily hinder the people most concerned about safety if we didn’t curate it (since the considerations do also seem quite relevant to safety work).
Mod note: Fixed the broken formatting. Looks like you pasted some markdown into our WYSIWYG editor.
Promoted to curated: I think the question of whether we are in an AI overhang is pretty obviously relevant to a lot of thinking about AI Risk, and this post covers the topic quite well. I particularly liked the use of a lot of small Fermi estimates, and how it covered a lot of ground in relatively little writing.
I also really appreciated the discussion in the comments, and felt that Gwern’s comment on AI development strategies in particular helped me build a much better map of the modern ML space (though I wouldn’t want it to be interpreted as a complete map of the space, just a kind of foothold that helped me get a better grasp on thinking about this).
Most of my immediate critiques are formatting related. I feel like the listed section could have used some more clarity, maybe by bolding the name for each bullet point consideration, but it flowed pretty well as is. I was also a bit concerned about there being some infohazard-like risks from promoting the idea of being in an AI overhang too much, but after talking to some more people about it, and thinking for a bit about it, decided that I don’t think this post adds much additional risk (e.g. by encouraging AI companies to act on being in an overhang and try to drastically scale up models without concern for safety).
Ahh, hmm. That is embarrassing. Hmm, I wish I had a better excuse for this. Hmm…
I mean, look over there, a three-headed monkey!
(Will be fixed within the hour)
Edit: And it’s fixed. Sorry about that!
And here is me asking it to extend the list of all Rationality tags:
------- My Input -------
Bayes Theorem / Bayesianism
Conservation of Expected Evidence
Value of Information
Forecasting & Prediction
Internal Double Crux
Scholarship & Learning
Heuristics and Biases
Mind Projection Fallacy
Pitfalls of Rationality
Decoupling vs Contextualizing
Philosophy of Language
Dual Process Theory (System 1 & 2)
Perceptual Control Theory
Center for Applied Rationality
Memetic Immune System
Practice & Philosophy of Science
------- GPT-3’s Response -------
Mind Projection Fallacy
Predictive Processing Quotes
Psychology of Science
Science & Rationality
Scientific Progress Quotes
Scientific Realism Quotes
Solomonoff’s Theory Quotes
Solomonoff’s Prior Quotes
Solomonoff’s Induction Quotes
Solomonoff’s Algorithm Quotes
Affective Death Spirals
Deductive vs Inductive
I just did it again, asking it to extend the list of all the AI tags.
------- My Input -------
Complexity of Value(8)
Fixed Point Theorems(7)
AI Boxing (Containment)(8)
Debate (AI safety technique)(9)
Humans Consulting HCH(7)
Inverse Reinforcement Learning(14)
Iterated Amplification (26)
Transparency / Interpretability(9)
AI Services (CAIS)(8)
CHAI (UC Berkeley)(12)
Alpha- (algorithm family)(10)
------- GPT-3’s Response -------
Happiness / Life Satisfaction(3)
Evolutionary Game Theory(5)
Happiness / Life Satisfaction(3)
Human Level AI(7)
Now actually fixed (there was a typo in the URL).
Yeah, and I experimented a bunch with that (directly turning forum discussions into posts) and mostly felt like it didn’t really work that well. I mostly updated that there needs to be a larger synthesis step, though I still have some guesses for more direct things that could work. Ben spent some hours distilling the discussion and comments on a bunch of posts, which we should get around to posting (I just realized we never published them).
Re tagging: In general the tagging system that we are building has a lot in common with being a wiki (collaboratively editable descriptions, providing canonical definitions and references, and providing good summaries of existing content), and I expect it to grow into being more of a wiki over time (the tagging use-case was a specific narrow use-case that seemed easy to get traction on, but the mid-term goal is to do a lot more wiki-like stuff). And I think from that perspective it’s more clear how it helps with distillation.
I think tagging is actually pretty hard. Like, by default you get a ton of synonyms of the same concepts, and there aren’t good redirects, and the tags don’t have good descriptions, and there is lots of ambiguity, and when someone creates a new tag old posts don’t reliably get tagged. Our tagging system is also more similar to being a wiki, and in general my research into wikis suggests that basically all functional ones are maintained by a relatively small group of highly dedicated editors, and that it generally doesn’t work to just have everyone randomly edit and add things.
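The synonym problem concretely: without redirects, “Bayes’ Theorem”, “Bayesianism”, and “Bayes Theorem / Bayesianism” become three separate tags. The sketch below (hypothetical data and function names, not actual LessWrong code) shows the redirect-table approach that collapses variants to one canonical tag — the catch being that someone has to curate the table, which is the dedicated-editor labor mentioned above.

```python
# Redirect table mapping lowercase variants to the canonical tag name.
REDIRECTS = {
    "bayes' theorem": "Bayes Theorem / Bayesianism",
    "bayesianism": "Bayes Theorem / Bayesianism",
    "system 1 and 2": "Dual Process Theory (System 1 & 2)",
}

def canonical_tag(name):
    """Resolve a user-entered tag name to its canonical form.

    Unknown names pass through unchanged (and would show up as
    candidate new tags for an editor to review).
    """
    return REDIRECTS.get(name.strip().lower(), name.strip())

print(canonical_tag("Bayesianism"))            # Bayes Theorem / Bayesianism
print(canonical_tag("Memetic Immune System"))  # Memetic Immune System (unchanged)
```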
Note: This is most viewed for the last 30 days, I think. That post in particular tends to show up at the top every few months, whenever U.S. college application periods come around.
And yeah, sometimes the posts that get a ton of views really aren’t very good. For a while one of our most viewed posts was one at −4 karma called “The Effects of Religion [Draft]”. It’s been one of the reasons why I’ve been hesitant to include views as an easily accessible metric on the site, because I know how frequently it diverges from quality.
This is actually a major motivation for the wiki/tagging system we are building. Also, you might have noticed all the edited transcripts we’ve been publishing, and the debates we’ve started organizing, which are also part of this. I’ve experimented a lot over the last year with UI for potentially directly distilling comment threads, but all of them ended up too clunky and messy to ever make me excited about them. I still have some things I might want to give a shot, but overall I am currently thinking of tackling this problem in a slightly more indirect way.