See https://joshuafox.com for more info.
JoshuaFox (Joshua Fox)
The Singularity Wars
I have been donating $100 monthly on a subscription payment and will continue to do so.
Easier on the cash-flow than a lump donation. More fuzzies per year, too.
I filled out the survey. Thanks for doing this!
The digit ratio instructions are underspecified.
“…from the middle of the bottom crease”: it’s hard to tell what “the middle” means precisely enough to produce any sort of measurement, even to the nearest centimeter; it is certainly impossible to measure “to the nearest hundredth of a centimeter.”
The instructions don’t mention the left hand, and omit the step of scanning or copying your hand. We can easily interpolate, but since the instructions are structured as if meant to be followed formally, they may as well be precise.
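For concreteness, the computation the survey presumably wants is just the ratio of the two finger lengths. A minimal sketch (the function name and sample values are my own, not from the survey):

```python
def digit_ratio(index_cm: float, ring_cm: float) -> float:
    """Return the 2D:4D digit ratio (index finger length divided by
    ring finger length), rounded to two decimal places.

    Each length is assumed to be measured from the middle of the
    bottom crease to the fingertip, in centimeters."""
    if index_cm <= 0 or ring_cm <= 0:
        raise ValueError("finger lengths must be positive")
    return round(index_cm / ring_cm, 2)

# Hypothetical example: index finger 7.23 cm, ring finger 7.61 cm
print(digit_ratio(7.23, 7.61))  # -> 0.95
```

Of course, the hard part the instructions leave underspecified is producing `index_cm` and `ring_cm` in the first place, not the division.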
Business Networking through LessWrong
What is the most effective way to donate to AGI XRisk mitigation?
Too late now, but an interesting question would be: Have you volunteered for MIRI, CfAR, or the broader mission of rationality or AI-risk? (The question would have to be specified more precisely than that.)
Evaluating the feasibility of SI’s plan
There may be some who do not possess deep and comprehensive knowledge of Ancient Web Trivia from Before the Dawn of Google. For them, here’s the Evil Overlord List.
If it turns out that the whole MIRI/LessWrong memeplex is massively confused, what would that look like?
Note that in the late 19th century, many leading intellectuals followed a scientific/rationalist/atheist/utopian philosophy, socialism, which later turned out to be a horrible way to arrange society. See my article on this. (And it’s not good enough to say that we’re really rational, scientific, altruist, utilitarian, etc, in contrast to those people—they thought the same.)
So, how might we find that all these ideas are massively wrong?
I organized that, so let me say that:
That online meetup, and the invitation to Vassar, was not officially affiliated with or endorsed by SSC. Any responsibility for inviting him is mine.
I have conversed with him a few times, as follows:
I met him in Israel around 2010. He was quite interesting, though he did try to get me to withdraw my retirement savings to invest with him. He was somewhat persuasive. During our time in conversation, he made some offensive statements, but I am perhaps less touchy about such things than the younger generation.
In 2012, he explained Acausal Trade to me, and that was the seed of this post. That discussion was quite sensible and I thank him for that.
A few years later, I invited him to speak at LessWrong Israel. At that time I thought him a mad genius—truly both. His talk was verging on incoherence, with flashes of apparent insight.
Before the 2021 online meetup, he insisted on a preliminary talk; he made statements that produced twinges of persuasiveness. (Introspecting on that is kind of interesting, actually.) I stayed with it for two hours or more before begging off, because it was fascinating in a way. I was able to analyze his techniques as Dark Arts. Apparently I am mature enough to shrug off such techniques.
His talk at my online meetup was even less coherent than any before, with multiple offensive elements. Indeed, I believe it was a mistake to have him on.
If I have offended anyone, I apologize, though I believe that letting someone speak is generally not something to be afraid of. But I wouldn’t invite him again.
Subtopics, so that FAI, personal efficiency, and effective altruism, for example, could be tracked separately by people interested in each.
Different functionality for different types of posts: meetup planning, casual discussion, quotes repositories, welcome threads, advice repositories, etc. You might also add a method for adding and voting on excellent articles from outside LW. As-is, all functions are handled by the same post/nested-thread format, which is not necessarily the best suited for each one.
Better layout design. It’s best to get a design expert on this, but my sense is that the front page, and also other pages, are not laid out in a clear and appealing way.
Social-networking integration. People use Facebook, blogs, etc. to connect nowadays, so make it easy for LW members to do this. E.g., users could optionally add links to FB and other social networks in their profiles, and you could make it easy to share/like/+1 a post.
Rework the Discussion/Main distinction. As-is, this is very unclear. As best I can tell, those who are supposed to post to Main know it, everyone else posts to Discussion, and the mysterious Lords of LessWrong then promote a few posts. Is that how it works? In any case, a better way can be found.
A question that has been asked before, and so may be stupid: What concrete examples are there of gains from CfAR training (or self-study based on LessWrong)? These would have to come in the form of very specific examples, preferably quantitative.
E.g. “I was $100,000 in debt and unemployed for 2 years, and now I have employment earning twice what I ever have before and am out of debt.”
“I never had a relationship that lasted more than 2 months, but now am happily married.”
“My grade point average went up from 2.2 to 3.8”
“After struggling to diet and exercise for years, I finally got on track and am now in the best shape of my life.”
etc.
DeepMind team on specification gaming
Request for Comments on Online LessWrong/SSC Meetup—Rump Session
LessWrong-Tel Aviv members Dan Armak, Adam Mesha, Yonatan Cale, and I contributed to MIRI/CfAR in honor of Edan Maor’s marriage to Sami Wexsel.
We encourage all LessWrongers to consider more donation in honor of friends’ special events. It’s a great way to get triple fuzzies: You, the honoree, and the wider community get to feel good about it.
Once my workplace had a party/fair allegedly to raise money for some charity.
I was slightly miffed at the low utils-to-fuzzies ratio, and at the company taking credit for the employees’ fundraising, with no corporate matching.
So, when I was asked for money at the event (one-on-one, not in front of everyone), I wrote a check to my favorite charity, for about the same total as the entire fundraiser, right in front of the person asking for the money. I explained myself politely, and the requester (I think) took it as an impressive act of charity rather than as asociality. The check was in addition to my usual monthly donation.
Testing Hanson’s hypothesis about the uselessness of health care.
which can kill a human without an explicit command from a human operator (“Human-Out-Of-The-Loop” weapons)
Like pit-traps?
I learned how to crank out patents. My thinking, over the years, shifted from “Wow, I can really be an inventor,” to “Wow, I can Munchkin a ridiculously misconfigured system” and beyond that to “This is really awful.”
My blog post: “The evil engineer’s guide to patents”.
Since Munchkining means following the letter of the rules, while bypassing the unspoken rules, we should consider how often it is accompanied by moral dissonance.