Donated. Go CFAR!
Larks
On the one hand, I absolutely abhor SIAI. On the other hand, I’d love to turn my money into karma...
/joke
$100
After having written an annual review of AI safety organisations for six years, I intend to stop this year. I’m sharing this in case someone else wanted to in my stead.
Reasons
It is very time consuming and I am busy.
I have a lot of conflicts of interests now.
The space is much better funded by large donors than when I started. As a small donor, it seems like you either donate to:
A large org that OP/FTX/etc. support, in which case funging is ~ total and you can probably just support any.
A large org that OP/FTX/etc. reject, in which case there is a high chance you are wrong.
A small org OP/FTX/etc. haven’t heard of, in which case I probably can’t help you either.
Part of my motivation was to ensure I stayed involved in the community, but this is no longer under threat.
Hopefully it was helpful to people over the years. If you have any questions feel free to reach out.
There’s a question about other social movements people might associate themselves with. How was the list of suggestions created? At present, the list is very left-wing:
Animal rights
Environmentalist
Feminist
LGBTQ
Rationalist/LessWrong
Transhumanist
Skeptic/atheist
Other:
Ordinarily this would only be a small problem, but then you ask people about their political views after you’ve primed them with left-wing examples.
universal death is the mind killer
sounds legit to me!
It seems both Harry and Dumbledore are missing one of the big payoffs of Harry saving Hermione: making it very attractive to become his friend. There’s no explicit enemy around at the moment, so he can’t rally minions like Dumbledore did by using the threat of Voldemort; love might be his best option.
Survey completed! Also, everyone, please cooperate!
Yvain, will you reveal who won the money? Whether they cooperated or defected?
I finally submitted my Alcor application form.
When we say ‘rationality’, we mean instrumental rationality; getting what you want. Elsewhere, we also refer to epistemic rationality, which is believing true things. In neither case do we say anything about what you should want.
It might be a good thing to care about cows, but it’s not rationality as we understand the word. It’s good that you brought this up, though, as I can easily imagine others being confused.
See also What Do We Mean by Rationality?
Have a second karma bubble, that only sums the upvotes and downvotes you’ve given that person.
My initial reaction was
“Unfortunately I think this is too far away to have much impact on NPVs now. Say it takes 5 years to develop the technology, and 20 years for early adopters to mature and enter the workforce. Suppose the rate of RGDP growth increases from 2.5% → 10%, and we use a discount rate of 8%. I don’t think you’ll get much movement in NPV”
But then I actually worked it out in excel and the NPV triples. So thank you for the good suggestion!
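The back-of-the-envelope calculation above can be sketched as follows. Note the horizon is an assumption of mine (the comment doesn’t specify one; I use 100 years, and the ratio is sensitive to this choice since 10% growth outpaces the 8% discount rate), with GDP normalized to 1 today and the technology taking effect after the 25-year delay (5 years of development plus 20 years of adoption):

```python
# Hedged sketch: NPV of a GDP stream under a constant discount rate,
# comparing baseline 2.5% growth against 10% growth starting in year 26.
def npv(growth_path, discount=0.08, horizon=100):
    """Discounted sum of a GDP index growing at growth_path(t) in year t."""
    total, level = 0.0, 1.0
    for t in range(1, horizon + 1):
        level *= 1 + growth_path(t)          # GDP index at end of year t
        total += level / (1 + discount) ** t  # discount back to today
    return total

baseline = npv(lambda t: 0.025)                       # 2.5% growth forever
boosted = npv(lambda t: 0.025 if t <= 25 else 0.10)   # 10% after year 25
print(boosted / baseline)  # roughly 3x, matching the "NPV triples" result
```

The key point is that even heavily discounted, a permanent growth-rate change 25 years out still dominates the discounted sum, because the post-change terms compound faster (10%) than they are discounted (8%).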
I also filled in the survey! Hurrah for laborious data gathering.
Publish draft questions in advance, so we can spot issues before the survey goes live.
I hope you’ll all forgive the pedantry, but it seems that clearly laying out the argument might be the best way to avoid a flame war that makes no one look good and does little to encourage rationality. If this post is downvoted, I’d suggest we leave the topic.
NB: I don’t know enough of the history to judge who is more/less right/wrong between Alicorn and SilasBarta, and even if I could, probably wouldn’t say. I solely intend to attempt to clarify what SilasBarta meant.
Summary of what I take to be SilasBarta’s argument:
SilasBarta replying to Alicorn causes Alicorn psychological damage because Alicorn dislikes SilasBarta.
If Alicorn did not dislike SilasBarta, Alicorn would not incur psychological damage when SilasBarta replied to her.
There are advantages to Alicorn of being able to freely discuss with SilasBarta.
If Alicorn did not dislike SilasBarta, these advantages would outweigh the costs (e.g. time taken reading his replies).
Alicorn doesn’t get any benefit from disliking SilasBarta.
Hence it would be beneficial for Alicorn to cease disliking SilasBarta.
Alicorn is rational (to as reasonable an approximation as a human can fairly be expected to be).
Hence if something would be beneficial for Alicorn to do, she would do it.
Hence if Alicorn could stop disliking SilasBarta, she would do so/would have done so.
Alicorn has not ceased disliking SilasBarta, and does not appear to be doing so.
Hence Alicorn does not have a general method for stopping disliking people.
Possible counter-arguments:
Alicorn’s method relies on focusing on positive aspects; SilasBarta has no/too few positive aspects for this to work.
SilasBarta’s comments have no interest to Alicorn.
Alicorn has better things to be doing with her time than building a good relationship with SilasBarta.
Alicorn thinks there are lower-hanging fruit than SilasBarta.
To start liking SilasBarta would signal that her threats weren’t credible.
Alicorn’s method has to be used before a deep dislike has set in.
SilasBarta is undermining her attempts by posting comments about her, which she finds upsetting. In this situation, containment (e.g. asking him not to reply to her) is better than cure (creating a positive relationship).
Alicorn rarely gets to see SilasBarta at what she would consider ‘his best’ – she is most aware of his posts about her, which she doesn’t enjoy.
Alicorn thinks SilasBarta is very rational, and thus attributes his acts to him, rather than his environment.
Edit: list formatting.
This deserves a top level post, at least in discussion. I assume MIRI just can’t afford to hire anyone to make LW posts. As such, I’ve just made a $2,000 donation, earmarked for just that purpose.*
*Not actually earmarked for silly things.
Currently between jobs; donated $100 anyway, as the world is not a story and will not wait for my montage to finish.
Science has moved away from considering memories to be simply long-term structural changes in the brain to seeing memories as the products of “continuous enzymatic activity” (Sacktor, 2007). Enzyme activity ceases after death, which could lead to memory destruction.
For instance, in a slightly unnerving study, Sacktor and colleagues taught mice to avoid the taste of saccharin before injecting them with a PKMzeta-blocking drug called ZIP into the insular cortex. PKM, an enzyme, has been associated with increasing receptors between synapses that fire together during memory recollection. Within hours, the mice forgot that saccharin made them nauseous and began guzzling it again. It seems blocking the activity of PKM destroys memories. Since PKM activity (like all enzyme activity) also happens to be blocked following death, a possible extension of this research is that the brain automatically “forgets” everything after death, so a simulation of your brain after death would not be very similar to you.
The Unreasonable Effectiveness of Astronomy
It seems like human astronomy is more effective than it has any right to be. Why?
First I’ll try to establish that there is a mystery to be solved. It might be surprising to see the words “effective” and “astronomy” together in the same sentence, but I claim that human beings have indeed made a non-negligible amount of astronomical progress. To cite one field that I’m especially familiar with, consider galaxies, where we went from having no concept of galaxies, to studies involving the Milky Way and other groups of light in the sky, to measuring their speed, location, age, and genesis, to Einstein’s realization that the flat universe and Newtonian physics are both likely to be wrong or incomplete.
We might have expected that given we are products of evolution, the amount of our astronomical progress would be closer to zero. The reason for low expectations is that evolution is lazy and shortsighted. It couldn’t possibly have “known” that we’d eventually need stargazing abilities to escape the planet. What kind of survival or reproductive advantage could these abilities have offered our foraging or farming ancestors?
From the example of my webcam, we also know that there are eyes in the design space of visual sensors that could be considered highly sensitive, but are incapable of making out distant stars. For example, a weasel is, apparently, incapable of making out more than a dim blur. Nor would it be able to tell it was missing much, or have any reason to build telescopes.
Why aren’t we more like CCTV in our ability to look at the stars? I have some ideas for possible answers, but I’m not sure how to tell which is the right one:
Astronomic ability is “almost” universal in eye space. Low-quality or pathologically horizontal visual receptors are an example of an atypical mind.
Evolution created stargazing ability as a side effect while selecting for the ability to see predators. This seems implausible; being able to see pretty lights in the sky would only serve to distract us.
Stargazing ability is rare and not likely to be produced by evolution. There’s no explanation for why we have it, other than dumb luck. This helps explain why there’s no sign of alien life yet; stargazing is the great filter.
We’re living in an ancestor simulation, which can only be run by species with the ability to escape their home planet, necessitating stargazing powers.
As you can see, progress is pretty limited so far, but I think this is at least a useful line of inquiry, a small crack in the problem that’s worth trying to exploit. People used to wonder at the unreasonable effectiveness of philosophy, especially in probability, and I think such wondering eventually contributed to the idea of the philosophical universe: if the world is made of philosophy, then it wouldn’t be surprising that philosophy is, to para-quote Wei Dai, “appropriate to the objects of reality”. I’m hoping that my question might eventually lead to a similar insight.
A study by the investment-research firm Strategas, cited in The Economist and the Washington Post, compared the financial performance of the 50 firms that spent the most on lobbying relative to their assets against that of the S&P 500 in the stock market; the study concluded that spending on lobbying was a “spectacular investment” yielding “blistering” returns comparable to a high-flying hedge fund, even despite the financial downturn of the past few years.
I think I read this research while I was a Strategas client; if I’m remembering it correctly it was extremely poorly done. Short backtest (just a few years), garden of forking paths, etc. Most sell-side research is not epistemically rigorous and Strategas is not one of the better firms. I would not put much weight on this research.
There is widespread agreement that a key ingredient in effective lobbying is money. This view is shared by players in the lobbying industry.
Well of course lobbyists would say they’re worth the money!
There’s no point being annoyed at nature, but a precommitment to revenge is useful.