Somewhere, recently, I saw someone comment almost in passing that grad school shouldn’t cost anything. I can’t find the source now. Maybe someone can clarify if that’s a serious claim? I’ve been under the impression for a while that grad school and academia would be an awfully expensive way to acquire the prerequisite knowledge for AI safety work.
please assign the issue to yourself in github so people know someone is working on it.
It doesn’t look like users can assign issues to themselves without being invited to be a contributor.
Now that I have a copy of the code running on my local machine, I was thinking of grabbing an issue to work on. (I can’t promise any commitment level beyond one issue yet.) I’m trying to be thorough in reading what docs there are, and I’ve come across the contributing guidelines, which say to check out a roadmap (a Trello board) and join a Slack channel before working on anything. The Trello board doesn’t make much sense to me, and I’m not sure whether either of these instructions is still important to follow, or if I should just claim an unassigned issue with a ‘good first issue’ or ‘help wanted’ tag.
Adding the missing line fixed it! I have LessWrong 2 running successfully on Windows at http://localhost:3000/ now.
This might be a good use case for someone to create a Docker image (or some other container) that has a development environment that just works for new users.
If you try again, I think you can avoid needing bash. See my comment here.
I got further than gjm reported.
I needed to:
1. Install Node (that link goes to the exact version listed in .nvmrc).
2. Install Python 2.7.whatever, since I only had Python 3 before this.
3. Install Visual C++ 2015 Tools for Windows Desktop. This was the weirdest one, but after this, npm install works without error.
I learned that npm start is unnecessary. It runs a bash script that
1. checks meteor is installed
2. creates the file settings.json if it doesn’t exist by copying from sample_settings.json
3. runs the command meteor --settings settings.json
If you have meteor installed and create settings.json yourself, you can skip npm start and just run meteor --settings settings.json directly.
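The three steps above can be sketched as a small bash script. This is a reconstruction from my description, not the actual script in the repo, so the details may differ:

```shell
#!/usr/bin/env bash
# Rough sketch of what the npm start script does, reconstructed from the
# steps above; the real script in the repo may differ in details.

# 1. Check that meteor is installed.
check_meteor() {
  command -v meteor >/dev/null 2>&1
}

# 2. Create settings.json from sample_settings.json if it doesn't exist yet.
ensure_settings() {
  [ -f settings.json ] || cp sample_settings.json settings.json
}

# 3. Run meteor with those settings.
start_app() {
  check_meteor || { echo "meteor is not installed" >&2; return 1; }
  ensure_settings
  meteor --settings settings.json
}
```

Since the settings copy is guarded by the existence check, running it twice is harmless: an existing settings.json is never overwritten.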
I verified that I was able to successfully build and run the VulcanJS starter example by doing:
clone the Vulcan-Starter code repo and cd into it
run npm install
manually copy sample_settings.json to settings.json
run meteor --settings settings.json
I then had a website running at http://localhost:3000/
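As a shell session, those steps look something like this (the clone URL is my guess at the public Vulcan-Starter repo, and git, npm, and meteor are assumed to be installed already):

```shell
# Reproducing the Vulcan-Starter setup steps described above.
# Assumes git, npm, and meteor are installed; the clone URL is an assumption.
git clone https://github.com/VulcanJS/Vulcan-Starter.git
cd Vulcan-Starter
npm install                            # may need the C++ build tools on Windows
cp sample_settings.json settings.json  # what npm start would otherwise do for you
meteor --settings settings.json        # serves the site at http://localhost:3000/
```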
Additional info in the VulcanJS docs includes:
(A note for windows user: While running npm install you might get error regarding node-gyp and bcrypt package installation. This is because you need windows-build-tool for node-gyp installation which is required for bcrypt installation.)
This is what the C++ dev tools fixed. I don’t know if something like npm install -g windows-build-tools would have fixed this or not. I didn’t read this until afterwards.
npm install -g windows-build-tools
Note that you can also start the app with: meteor --settings settings.json
All npm start does is run the above command, while also checking for the presence of settings.json first and creating it for you if missing.
But for LessWrong 2 I am stuck at an error that looks like https://pastebin.com/M1vqTMZd
I think I’m stumped for now. I can clearly get the Vulcan Starter project running, just not LessWrong 2.
Your comment was definitely worthwhile for me. Thanks to your very strong recommendation (and the fact that it doesn’t look like it’ll take much time), I’m going to check out the fast.ai course very soon. I’ll be referencing back to this comment to check out your other recommendations in the future too. Thank you.
I use Windows and intend to try to get this running on my machine.
Also, fyi, the first link to the GitHub repo in the tl;dr doesn’t point to the right place.
It’s always a bit amazing to me how much I don’t have to remember to be able to work on big software projects. It’s like as long as I know what’s possible, and when it’s applicable, it takes only moments to search for and zero in on specific implementation details.
And yet in this situation, some anxious voice in my head cries, “But do you really know what you’re doing if you can’t remember every detail?!”
So thank you for reassurance on that. Also, thank you for the recommendations!
I’d call delicious food a lotus for me. Sometimes it feels so easy for me to fall into addictions that I could get addicted to cereal.
Palatable food may indeed hijack things in our brains, leading to negative consequences.
I’ve also personally found that always eating delicious foods will make me subconsciously start looking for food in moments of boredom.
I do think it’s important to experience pleasures in life, and delicious food is a great treat, but, like too many things in our lives, our food supply is being engineered to superstimulus levels, so caution is totally warranted.
Intuitively seeing things as being like the pie graph is why the birds example for scope insensitivity doesn’t feel like a case where I should try to do anything differently. Maybe I only have an ~$80 budget to care about birds because I can’t smash a bigger slice into my pie of caring.
This might read like confident advice, but it’s mostly just the strategy I’ve been using because it seems sensible to me.
For any of these topics with dedicated books (especially ones recommended as high quality), there will be proofs presented along the way as you read. Don’t just read the proofs. Pause before reading each proof and try to work out how you would prove the result yourself. Then read (and maybe re-read) until you think you get it, and try to prove it again without looking back at the material.
Keeping a list of these exercises might be handy to test yourself with later.
Also keep notes as you go about anything you find remotely confusing. Follow up on the confusion.
This doesn’t tell you that you’ve “mastered the topic,” but mastery comes in building blocks of deliberate practice.
I think that satisficing is sometimes the right way to approach tasks. I would classify a whole slew of tasks as not super important but still needing to be done. It isn’t always worth it to pour energy into everything you do. As someone who errs on the side of perfectionism too often, I find the concept of succeeding with no wasted motion to be a sanity saver.
I feel similarly to what you expressed in your first paragraph, and somewhat similarly to your third. When I realize certain people can’t stand being alone, I imagine them as someone who has no idea what to do with themselves. I feel like my brain is hijacked if I’m not given enough alone time to process my thoughts, and that parts of me are never fully expressed until I am alone.
Maybe this means I need to improve my social circle?
Your observation of the Buddhist “no self” claims seems to me like a misunderstanding due to different definitions of self. After much staring at these claims on my part, I think what they are (rightly) saying is that there is no single “executive” module in charge in our brains, and that our impression of a unified self is an illusion that arises from a bunch of separate modules.
After reading your Mythic Mode post, and before seeing this comment, I was trying to think of a possible mythic mode name for this other than Omega. Hermaeus Mora, a Lovecraftian-like being from the Elder Scrolls video game series, overpowers any other ideas in my head:
Hermaeus Mora, also known as Hoermius, Hormaius, Hermorah, Herma Mora, and The Woodland Man, is the Daedric Prince of knowledge and memory; his sphere is the scrying of the tides of Fate, of the past and future as read in the stars and heavens. He is not known for being good or evil; he seems to be the keeper of both helpful and destructive knowledge.
He/it also looks like a bunch of tentacles, which is sort of web like.
I don’t think this is remotely a name that could spread, but when I recalled that I thought of him as Herman when I played the game, I became very amused at the idea of calling “The Intelligent Social Web” by the name Herman.
This sentiment seems opposed to what others have expressed. Mixed messaging is part of why I’ve been confused.
Aspiring rationalists could benefit from a central place to make friends with and interact with other rationalists (that isn’t Facebook) and welcoming 2) seems like it would be a way to incentivize community, while hopefully the Archipelago model limits how much this could lower LW’s main posts’ standards.
I notice that when I write about rationality-adjacent things, it most often comes out as a story about my personal experiences. It isn’t advice or world-changing info for others, but it is an account of someone who is thinking about and trying to apply rationality in their life. I predict these stories aren’t totally useless, but that they may not be liked or seen as typical LW fare.
I’ll admit the link I see between my last two paragraphs. I would like to be less of a silent watcher and make friends in the community, but my natural style of writing is experiential and mostly doesn’t feel like LessWrong has felt in the past.
I’m confused about what sort of content belongs on LW 2.0, even in the Archipelago model.
I’ve been a lurker on LW and many of the diaspora rational blogs for years, and I’ve only recently started commenting after being nudged to do so by certain life events, certain blog posts, and the hopeful breath of life slightly reanimating LessWrong.
Sometimes I write on a personal blog elsewhere, but my standards are below what I’d want to see on LW. Then again, I’ve seen things on LW that are below my standards of what I expect on LW.
I’ve seen it said multiple times that people can now put whatever they want on their personal LW spaces/blogposts, and that’s stressed again here. But I still feel unsettled and like I don’t really understand what this means. Does it mean that anyone on the internet talking about random stuff is welcome to have a blog on LW? Does it mean well known members are encouraged to stick around and can be off the rationality topic in their personal blogposts? How about the unknown members? How tangential can the topic be from rationality before it’s not welcome?
Could a personal post about MealSquares and trading money for time flip my modest amount of Karma into the negative and make it harder for me to participate in conversations in the future? Is part of the intent behind the Archipelago model to bring in this kind of content in addition to the well known names? I can’t tell.
I’m a software engineer and my degree in college required a good chunk of advanced math. I am currently in the process of trying to relearn the math I’ve forgotten, plus some, so I’m thinking that if this analysis/algebra dichotomy points at a real preference difference, knowing which I am might help me choose more effective learning sources.
But I find it hard to point to one category or another for most aspects. Even the corn test is inconclusive! (I agree that it sounds more like an analysis thing to do.)
I love the step-by-step bits of algebra and logic, but I also love geometry.
I think I do tend to form an “idiosyncratic mental model of specific problems.” As I come to understand problems more, I feel like they have a quality or character that makes them recognizable to me. I did best in school when teaching myself from outside sources and then using the teacher’s methods to spot check and fill in gaps in my models.
I think object oriented programming is very useful, and functional programming is very appealing.
I use(d) vi/vim because that’s what I know well enough to function in. I barely touched emacs a couple times, was like, “dafuq is this?” and went back to vim. I never gave emacs a fair chance.
I think I lean towards ‘building up’ my understanding of things in chunks, filling in a bigger picture. But the skill of ‘breaking down’ massive concepts into bite-sized chunks seems like an important way to do this!
My tentative self-diagnosis is that I have a weak preference for analysis. Reading more of the links in the OP might help me confirm this.