See https://joshuafox.com for more info.
JoshuaFox (Joshua Fox)
I have been donating $100 monthly on a subscription payment and will continue to do so.
Easier on the cash flow than a lump-sum donation. More fuzzies per year, too.
I filled out the survey. Thanks for doing this!
The digit ratio instructions are underspecified.
“…from the middle of the bottom crease”. It’s hard to tell what “middle” means precisely enough to produce any sort of measurement, even to the nearest centimeter; certainly it is impossible to measure “to the nearest hundredth of a centimeter.”
The instructions don’t mention the left hand, and don’t mention the step of scanning or copying your hand. We can easily interpolate, but since the instructions are structured as if meant to be followed formally, they may as well be precise.
Too late now, but an interesting question would be: Have you volunteered for MIRI, CfAR, or the broader mission of rationality or AI-risk? (The question would have to be specified more precisely than that.)
There may be some who do not possess deep and comprehensive knowledge of Ancient Web Trivia from Before the Dawn of Google. For them, here’s the Evil Overlord List.
If it turns out that the whole MIRI/LessWrong memeplex is massively confused, what would that look like?
Note that in the late 19th century, many leading intellectuals followed a scientific/rationalist/atheist/utopian philosophy, socialism, which later turned out to be a horrible way to arrange society. See my article on this. (And it’s not good enough to say that we’re really rational, scientific, altruist, utilitarian, etc, in contrast to those people—they thought the same.)
So, how might we find that all these ideas are massively wrong?
I organized that event, so let me say this:
Neither that online meetup nor the invitation to Vassar was officially affiliated with or endorsed by SSC. Any responsibility for inviting him is mine.
I have conversed with him a few times, as follows:
I met him in Israel around 2010. He was quite interesting, though he did try to get me to withdraw my retirement savings to invest with him. He was somewhat persuasive. In conversation he made some offensive statements, but I am perhaps less touchy about such things than the younger generation.
In 2012, he explained Acausal Trade to me, and that was the seed of this post. That discussion was quite sensible and I thank him for that.
A few years later, I invited him to speak at LessWrong Israel. At that time I thought him a mad genius—truly both. His talk was verging on incoherence, with flashes of apparent insight.
Before the 2021 online meetup, he insisted on a preliminary talk; he made statements that produced twinges of persuasiveness. (Introspecting on that is kind of interesting, actually.) I stayed with it for two hours or more before begging off, because it was fascinating in a way. I was able to analyze his techniques as Dark Arts; apparently I am mature enough to shrug off such techniques.
His talk at my online meetup was even less coherent than any of his earlier ones, with multiple offensive elements. Indeed, I believe it was a mistake to have him on.
If I have offended anyone, I apologize, though I believe that letting someone speak is generally not something to be afraid of. But I wouldn’t invite him again.
Subtopics, so that FAI, personal efficiency, and effective altruism, for example, could be tracked separately by people who are interested in each.
Different functionality for different types of posts: meetup planning, casual discussion, quote repositories, welcome threads, advice repositories, etc. You might also add a method for adding and voting on excellent articles from outside LW. As-is, all functions are handled by the same post/nested-thread format, which is not necessarily best suited to each one.
Better layout design. It’s best to get a design expert on this, but my sense is that the front page, and also other pages, are not laid out in a clear and appealing way.
Social-networking integration. People use Facebook, blogs, etc. to connect nowadays, so make it easy for LW members to do this. E.g., users could optionally add links to FB and other social networks in their profiles, and you could make it easy to share/like/+1 a post.
Rework the Discussion/Main distinction. As-is, it is very unclear. As best I can tell, those who are supposed to post to Main know it, and everyone else is supposed to post to Discussion, after which the mysterious Lords of LessWrong promote a few posts. Is that how it is? In any case, a better way can be found.
A question that has been asked before, and so may be stupid: What concrete examples are there of gains from CfAR training (or self-study based on LessWrong)? These would have to come in the form of very specific examples, preferably quantitative.
E.g. “I was $100,000 in debt and unemployed for 2 years, and now I have employment earning twice what I ever have before and am out of debt.”
“I never had a relationship that lasted more than 2 months, but now am happily married.”
“My grade point average went up from 2.2 to 3.8.”
“After struggling to diet and exercise for years, I finally got on track and am now in the best shape of my life.”
etc.
LessWrong-Tel Aviv members Dan Armak, Adam Mesha, Yonatan Cale, and I contributed to MIRI/CfAR in honor of Edan Maor’s marriage to Sami Wexsel.
We encourage all LessWrongers to consider donating in honor of friends’ special events. It’s a great way to get triple fuzzies: you, the honoree, and the wider community all get to feel good about it.
My workplace once held a party/fair, ostensibly to raise money for some charity.
I was slightly miffed at the low utils-to-fuzzies ratio, and at the company’s taking credit for the employees’ fundraising with no corporate matching.
So, when I was asked for money at the event (one-on-one, not in front of everyone), I wrote a check to my favorite charity, for about the same total as the entire fundraiser, right in front of the person asking. I explained myself politely, and the requester (I think) took it as an impressive act of charity rather than as asociality. The check was in addition to my usual monthly donation.
which can kill a human without an explicit command from a human operator (“Human-Out-Of-The-Loop” weapons)
Like pit-traps?
A certification system to replace high-school and college.
With the explosion of independent study at all education levels, certification is the main missing piece. One solution is tests; for example, Pearson is offering this service to Udacity students. However, certification-by-testing has had a hard time gaining prestige. In the high-status parts of the software industry, a Java/Microsoft/etc. certification is a slight negative on your job value—i.e., one is expected to countersignal.
So, we need a certification system that succeeds at serving as a signal.
What successful examples can we find? The actuarial industry has a system of advancement with ten exams. There is no requirement to get a certain degree to take them. The top level is considered an intellectual achievement roughly equivalent to a PhD.
Perhaps the certification we’re proposing should test useless skills that take a long time to acquire, proving that one is not just smart but hard-working. Compare Latin in earlier periods, or the programming language Scheme (used mostly for theory, not for product development) in the software industry today.
The usual trappings of signaling, like association with prestigious people, would be an essential part of the marketing.
Eliezer addressed this in part with his “Death Spiral” essay, but some features of LW/SI are strongly correlated with cultishness beyond the ones Eliezer mentioned, such as fanaticism and following the leader:
Having a house where core members live together.
Asking followers to completely adjust their thinking processes, incorporating new essential concepts, terminology, and so on, down to the lowest level of their understanding of reality.
Claiming that only if you carry out said mental adjustment can you really understand the most important parts of the organization’s philosophy.
Asking for money for a charity, particularly one which does not quite have the conventional goals of a charity, and claiming that one should really be donating a much larger percentage of one’s income than most people donate to charity.
Presenting an apocalyptic scenario including extreme bad and good possibilities, and claiming to be the best positioned to deal with it.
[Added] Demanding that followers leave any (other) religion.
Sorry if this seems over-the-top. I support SI. These points have been mentioned, but has anyone suggested how to deal with them? Simply ignoring the problem does not seem to be the solution; nor does loudly denying the charges; nor changing one’s approach just for appearances.
it’s a bit of a shame that people seem willing to do whatever is most important… except whenever it isn’t inherently fun or prestigious!
Back in 2009, SIAI needed spam filtering. I took it on: I filtered spam manually and also installed the Akismet spam filter, even though my skill level would have allowed me to take on more sophisticated tasks. But that’s what was needed.
I hereby claim retroactive social status for not insisting on only doing high status tasks :-)
At the time of Hofstadter’s Singularity Summit talk, I wondered why he wasn’t “getting with the program”, and it became clear he was a mysterian: he believed—without being a dualist—that some things, like the mind, are ultimately, basically, essentially impossible to understand or describe.
This 2023 interview shows that the new generation of AI has done more than change his mind about the potential of AI: it has struck at the core of his mysterianism:
the human mind is not so mysterious and complex and impenetrably complex as I imagined it was when I was writing Gödel, Escher, Bach and writing I Am a Strange Loop.
How can I keep warm when going outside on a blustery fall day? Wear clothing.
How can I eat without spending all my time hunting? Buy food from other people who specialize in that.
How can I retain key thoughts more precisely than by mere memorization? Write them down.
Social proof. Very useful.
Musk’s position on AI risk is useful because he is contributing his social status and money to the cause.
However, other than being smart, he has no special qualifications in the subject—he got his ideas from other people.
So, his opinion should not update our beliefs very much.
I don’t know about “low” IQ, but plenty of people who don’t necessarily have genius IQ have very strong instrumental rationality.
Things like stable family life, network of friends, community, conservative approach to money, religion and charity with a social component, work ethic, temperate living, exercise, etc.
Doing these things may correlate with IQ at the low end, but they have little to do with the genius-level IQ that is so common on LW.
I learned how to crank out patents. My thinking, over the years, shifted from “Wow, I can really be an inventor,” to “Wow, I can Munchkin a ridiculously misconfigured system” and beyond that to “This is really awful.”
My blog post: “The evil engineer’s guide to patents”.
Since Munchkining means following the letter of the rules while bypassing the unspoken ones, we should consider how often it is accompanied by moral dissonance.