CEO at Machine Intelligence Research Institute (MIRI)
Malo (Malo Bourgon)
Now that I’ve been using 1Password for over a year (probably closer to two), it’s become indispensable.
Although it’s on the expensive side, I would say it’s worth every penny. 1Password can store all your passwords, as well as notes and other information like passports, bank accounts, credit cards, etc. It also has a password generator, which I use every time I sign up for a new site/service. With 1Password on my phone, tablet, computer, and in my Dropbox, I have access to all my passwords and other important documents anywhere. They also make plugins for all major browsers that make using 1Password on your computer remarkably easy.
It has simplified a previously annoying part of my digital life, while also making it more secure.
For those concerned about the security of storing their information online, SpiderOak is a service worth considering. Their zero-knowledge policy ensures that, by design, they cannot access the data you store on their servers: your data is encrypted on your computer before being sent to their servers, and they never have access to your private key.
Benefits:
Securely store your data online, and have it sync between computers.
Allows you to select which folders to backup/sync.
Less expensive than Dropbox. Really great student rates.
Allows over 100 GB. This allows me to use it as an offsite backup for all my files (except video).
Downsides:
Not as user-friendly; the UI needs some work.
Sharing options are well behind Dropbox’s.
No apps/services integrate with it.
Upload process seems slower, though I haven’t actually tested this.
Given the downsides, I use SpiderOak exclusively for backup and sync, while also using a free 2 GB Dropbox account to take advantage of all its awesomeness.
From what I understand, this isn’t actually the case. When Google Drive was first released there was a lot of buzz about its terms, but this comparison with the terms of other similar services shows that there isn’t much difference between any of the major online backup/sync service providers.
For those who like using native email applications (like Apple Mail, etc.) but are frustrated that they don’t integrate well with Gmail, Sparrow for Mac and iPhone (an iPad version is currently in development) is something you should definitely check out (they have a Lite version on both platforms). Sparrow provides the best Gmail experience in a native app I have found. The UI is very clean and well thought out. Another nice touch is its Facebook and Gravatar integration (for contact pictures), as well as its Dropbox integration.
All in all, it’s a pleasure to use.
Gmail provides many “non-standard” features (labels, stars, Priority Inbox, conversation threads, All Mail, etc.) that aren’t part of the IMAP standard. That’s what I mean by “Gmail experience.”
Any MUA (mail user agent) can work with it and provide any experience it wants.
This may be true, but Sparrow is the only client I have found that provides the experience I want: Gmail in a native app.
In my experience some clients do some things well (e.g., Mail seems to handle conversation threads really well), and there are tricks to getting other features to work (like creating a smart folder that looks for all flagged messages, which is the equivalent of Starred). However, with Sparrow you just provide your Gmail credentials and everything just works.
I think you’ll find this post relevant: A History of Bayes’ Theorem
Very true.
You may find this thread on the Dropbox forums interesting.
With Dropbox’s announcement of new plans and pricing, two of the benefits I listed above for SpiderOak are no longer true. Pricing is now equal (not considering SpiderOak’s student rates) and Dropbox has introduced 200 GB and 500 GB plans.
Additionally, using symlinks, one can add any folder to their Dropbox (note: I’ve done this on OS X; I can’t speak to whether it’s possible on Windows).
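For illustration, here is a minimal sketch of the symlink trick. The paths below are placeholders (temporary directories, so the commands run anywhere); with a real Dropbox you would link into `~/Dropbox` itself:

```shell
# Sketch of the symlink trick using temporary directories so it runs
# anywhere; with a real Dropbox you would link into ~/Dropbox instead.
src="$(mktemp -d)"                # stands in for e.g. ~/Documents/Projects
dropbox="$(mktemp -d)"            # stands in for ~/Dropbox
ln -s "$src" "$dropbox/Projects"  # Dropbox follows the link and syncs its target
ls -l "$dropbox"                  # shows Projects -> the original folder
```

Because Dropbox syncs whatever the link resolves to, the original folder can stay wherever it lives on disk.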
That leaves SpiderOak with its security benefits. However, as this thread from the Dropbox forum details, there are many solutions to this problem, one possibly coming from Dropbox itself!
As such, I’ve made the switch back to Dropbox.
It would be nice to have a transcript of the vows as well; some of them were really good.
This was my favourite:
Do you vow to reveal all your concerns about your relationship—as they appear to you—despite all embarrassment and fear; so that if the other stays silent you may trust that there is nothing to be said.
Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set
I am inclined to agree with your first request about not rewarding reporting like this with increased page rank. As such I won’t re-add the link.
However, I’m having trouble understanding why a discussion about a portrayal of LW in the media isn’t something worth discussing here.
Done.
It would be nice if you removed yours now since you aren’t able to use the attribute in your comment.
[Applications Closed] The Singularity Institute is hiring remote LaTeX editors
Sorry, I don’t understand the question. Could you elaborate?
Each employee has a timesheet (on Google Docs) where they report their hours along with a description of what they spent those hours on. This doesn’t allow for fine-grained analysis of how effective each worker is (or whether they are embellishing slightly), but it’s enough to ensure that no substantial misreporting occurs.
I’m not sure how things have been done in the past, but as of last week I’m in charge of the document production team and have access to editors’ timesheets.
I’d also suggest that you are overestimating the variance in speed (holding output quality constant) among workers who stick with the team.
Finally, tasks aren’t always as clear-cut as I think you imagine (not every task is “go convert this document and get it back to me”), so paying in a non-hourly fashion would be complicated. Additionally, clear-cut tasks might vary in difficulty (converting one document might be easier than another), which means assigning a dollar figure isn’t trivial and comes with its own costs.
Regardless, if you’d like to volunteer, I’d be happy to see if we can work something out.
Send me a PM if you’re interested.
I didn’t have a specific time in mind, but I’d like to avoid turnover as much as possible. As such, individuals who are likely to stick around longer are preferred.
Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it’s the rare heroic act that can be accomplished without ever confronting reality.
I think the last sentence here is a big leap. Why is this a more plausible explanation than the idea that aspiring rationalists simply find AI-risk and FAI compelling? Furthermore, since this community was founded by someone who is deeply interested in both topics, members who are attracted to the rationality side of this community get a lot of exposure to the AI-risk side. As such, if we accept the premise that aspiring rationalists are more likely than a random member of the general public to find AI-risk interesting, then it’s not surprising that many end up thinking/caring about it after being exposed to this community.
You seem to attempt to justify this last sentence of the quoted text with the following:
After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.
I would respond to this by saying that thinking/caring about AI-risk ≠ working on AI-risk. I imagine there are also lots of people who think about the risks of asteroid impacts but aren’t working on solving them, and wouldn’t claim they are. Also, this paragraph could be interpreted as saying that people who claim to be doing work on AI-risk (e.g., SI) aren’t actually doing any work. It would be one thing to claim the work is misdirected, but to claim they aren’t working hard seems (to me) misinformed or disingenuous.
Which then leads into the following:
Indeed, as the Tool AI debate has shown, SIAI types have withdrawn from reality even further. There are a lot of AI researchers who spend a lot of time building models, analyzing data, and generally solving a lot of gritty engineering problems all day. But the SIAI view conveniently says this is all very dangerous and that one shouldn’t even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.
I think a more accurate characterization of SI’s stance would be that there are lots of important philosophical and mathematical problems which, if solved, will increase the likelihood of a positive Singularity, and that those doing what you call the “gritty engineering” haven’t properly considered the risks. Your statement seems to trivialize this work, and you cite Holden’s criticism as evidence. What specifically in this “debate” (including the responses from SI) led you to believe that SI’s approach is “withdrawn from reality”?
That seems a little harsh for a comment on an LWer’s first post attempt.