I am Issa Rice. https://issarice.com/
riceissa
I just tried doing this in a post, and while the images look fine in the editor, they come out huge once the post is published. Any ideas on what I can do to fix this? (I don’t see any option in the editor to resize the images, and I’m scared of converting the post to markdown.)
Discovery fiction for the Pythagorean theorem
Some thoughts in response:
I agree that it’s better to focus on ideas instead of people. I might disagree about how successfully LessWrong has managed this: from your perspective it looks like this page is pushing the status quo toward something we don’t want, whereas from my perspective it looks like the page is just doing things more explicitly/transparently (which I prefer).
I agree that writing about people can be dicey. I might disagree about how well this problem can be avoided.
Maybe I’m misunderstanding what you mean by “defensible style”, but I’m taking it to mean something like “obsession with having citations from respected sources for every assertion, like what you see on Wikipedia”. So the concern is that once we allow lots of pages about people, that will force us to write defensibly, and this culture will infect pages not about people to also be written similarly defensibly. I hadn’t thought of this, and I’m not sure how I feel about it. It seems possible to have separate norms/rules for different kinds of pages (Wikipedia does in fact have extra rules for biographies of living persons). But I also don’t think I can point to any particularly good examples of wikis that cover people (other than Wikipedia, which I guess is sort of a counterexample).
I agree that summarizing his ideas or intellectual culture would be better, but that takes way more work, e.g. figuring out what this culture is and how to carve up the space, how to name it, and what his core ideas are.
Currently the wiki has basically no entries for people (we have one for Eliezer, but none for Scott Alexander or Lukeprog, for example).
There do seem to be stubs for both Scott Alexander and Lukeprog, both similar in size to this Vervaeke page. So I think I’m confused about what the status quo is vs what you are saying the status quo is.
I’m not sure what cluster you are trying to point to by saying “wiki pages like this”.
For this page in particular: I’ve been hearing more and more about Vervaeke, so I wanted to find out what the community has already figured out about him. It seems like the answer so far is “not much”, but as the situation changes I’m excited to have some canonical place where this information can be written up. He seems like an interesting enough guy, or at any rate he seems to have caught the attention of other interesting people, and that seems like a good enough reason to have some place like this.
If that’s not a good enough reason, I’m curious to hear of a concrete alternative policy and how it applies to this situation. Vervaeke isn’t notable enough to have a page on Wikipedia. Maybe I could write a LW question asking something like “What do people know about this guy?” Or maybe I could write a post with the above content. A shortform post would be easy, but seems difficult to find (not canonical enough). Or maybe you would recommend no action at all?
Thanks!
I tried creating a wiki-tag page today, and here are some questions I ran into that don’t seem to be answered by this FAQ:
Is there a way to add wiki-links like on the old wiki? I tried using the [[double square brackets]] like on MediaWiki, but this did not work (at least on the new editor).
Is there a way to quickly see if a wiki-tag page on a topic already exists? On the creation page, typing something in the box does not show existing pages with that substring. What I’m doing right now is to look on the all tags page (searching with my browser) and also looking at the wiki 1.0 imported pages list and again searching there. I feel like there must be a better way than this, but I couldn’t figure it out.
Is there a way to add MediaWiki-like <ref> tags? Or is there some preferred alternative way to add references on wiki-tag pages?
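To be concrete, here is the MediaWiki wikitext syntax I have in mind for both features (the page name and citation text below are just placeholders):

```wikitext
A sentence with an inline wiki-link to the [[Pythagorean theorem]] page.

A claim that needs a citation.<ref>Author, "Title of source", 2020.</ref>

== References ==
<references />
```

On MediaWiki, the `[[...]]` brackets become a link to the named page and each `<ref>...</ref>` becomes a numbered footnote rendered at the `<references />` tag; I couldn't find equivalents for either in the new editor.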
The Slack invite link seems to have expired. Is there a new one I can use?
That makes sense, thanks for clarifying. What I’ve seen most often on LessWrong is to come up with reasons for preferring simple interpretations in the course of trying to solve other philosophical problems such as anthropics, the problem of induction, and infinite ethics. For example, if we try to explain why our world seems to be simple we might end up with something like UDASSA or Scott Garrabrant’s idea of preferring simple worlds (this section is also relevant). Once we have something like UDASSA, we can say that joke interpretations do not have much weight since it takes many more bits to specify how to “extract” the observer moments given a description of our physical world.
Thanks! That does make me feel a bit better about the annual reviews.
I see, that wasn’t clear from the post. In that case I am wondering if the 2018 review caused anyone to write better explanations or rewrite the existing posts. (It seems like the LessWrong 2018 Book just included the original posts without much rewriting, at least based on scanning the table of contents.)
This is a minor point, but I am somewhat worried that the idea of research debt/research distillation seems to be getting diluted over time. The original article (which this post links to) says:
Distillation is also hard. It’s tempting to think of explaining an idea as just putting a layer of polish on it, but good explanations often involve transforming the idea. This kind of refinement of an idea can take just as much effort and deep understanding as the initial discovery.
I think the kind of cleanup and polish that is encouraged by the review process is insufficient to qualify as distillation, and insufficient to adequately deal with research debt (I know this post didn’t use the word “distillation”, but it does talk about research debt, and the original article presents distillation as the solution to that debt).
There seems to be a pattern where a term is introduced first in a strong form, then it accumulates a lot of positive connotations, and that causes people to stretch the term to use it for things that don’t quite qualify. I’m not confident that is what is happening here (it’s hard to tell what happens in people’s heads), but from the outside it’s a bit worrying.
I actually made a similar comment a while ago about a different term.
So the existence of this interface implies that A is “weaker” in a sense than A’.
Should that say B instead of A’, or have I misunderstood? (I haven’t read most of the sequence.)
Have you seen Brian Tomasik’s page about this? If so what do you find unconvincing, and if not what do you think of it?
Would this work across different countries (and if so how)? It seems like if one country implemented such a tax, the research groups in that country would be out-competed by research groups in other countries without such a tax (which seems worse than the status quo, since now the first AGI is likely to be created in a country that didn’t try to slow down AI progress or “level the playing field”).
Is there a way to see all the users who predicted within a single “bucket” using the LW UI? Right now when I hover over a bucket, it will show all users if the number of users is small enough, but it will show a small number of users followed by “...” if the number of users is too large. I’d like to be able to see all the users. (I know I can find the corresponding prediction on the Elicit website, but this is cumbersome.)
Ok. Since visiting your office hours is somewhat costly for me, I was trying to gather more information (about e.g. what kind of moral uncertainty or prior discussion you had in mind, why you decided to capitalize the term, whether this is something I might disagree with you on and might want to discuss further) to make the decision.
More generally, I’ve attended two LW Zoom events so far, both times because I felt excited about the topics discussed, and both times felt like I didn’t learn anything/would have preferred the info to just be a text dump so I could skim and move on. So I am feeling like I should be more confident that I will find an event useful now before attending.
Is any of the stuff around Moral Uncertainty real? I think it’s probably all fake, but if you disagree, let’s debate!
Can you say more about this? I only found this comment after a quick search.
Oh :0
Thanks, that worked and I was able to fix the rest of the images.