Well-written post that will hopefully stir up some good discussion :)
My impression is that LW/EA people prefer to avoid conflict, and when conflict is necessary don’t want to use misleading arguments/tactics (with BS regulations seen as such).
I agree; I’ve felt something similar since having kids. I’d also read the relevant Paul Graham bit, and it wasn’t quite as sudden or dramatic for me, but it has had a noticeable long-term effect. I’d previously been okay with kids, though I didn’t especially seek out their company. Now it’s more fun playing with them, even apart from my own children. No idea how it compares for others, including my parents.
Love this! Do consider citing the fictional source in a spoiler-formatted section (ctrl+f for spoiler in https://www.lesswrong.com/posts/2rWKkWuPrgTMpLRbp/lesswrong-faq).
Also, a small error: “from the insight” → “from the inside”.
The most similar analysis tool I’m aware of is called an activation atlas (https://distill.pub/2019/activation-atlas/), though I’ve only seen it applied to visual networks. Would love to see it used on language models!
As it is now, this post seems like it would fit in better on Hacker News rather than LessWrong. I don’t see how it addresses questions of developing or applying human rationality, broadly interpreted. It could be edited to talk more about how this is applying more general principles of effective thinking, but I don’t really see that here right now. Hence my downvote for the time being.
Came here to post something along these lines. One very extensive commentary with reasons for this is in https://twitter.com/kamilkazani/status/1497993363076915204 (warning: long thread). I’ll summarize when I can get to a laptop later tonight, or other people are welcome to do it.
Have you considered LASIK? I got it about a decade ago and have generally been super happy with the results. Now I just wear sunglasses when I expect to benefit from them, and that works a lot better than photochromic glasses ever did for me.
The main real downside has been slight halos around bright lights in the dark, but this is mostly something you get used to within a few months. Nowadays I only notice it when stargazing.
This seems like something that would be better done as a Google form. That would make it easier for people to correlate questions + answers (especially on mobile) and it can be less stressful to answer questions when the answers are going to be kept private.
How is it that authors get reclassified as “harmful”, as happened to Wright and Stross? Do you mean that later works become less helpful? How would earlier works go bad?
Given that you didn’t actually paste in the criteria emailed to Alcor, it’s hard to tell how much of a departure the revision you pasted is from it. Maybe add that in for clarity?
My impression of Alcor (and CI, who I used to be signed up with before) is that they’re a very scrappy/resource-limited organization, and thus that they have to stringently prioritize where to expend time and effort. I wish it weren’t so, but that seems to be how it is. In addition, they have a lot of unfortunate first-hand experience with legal issues arising during cryopreservation due to family intervention, which I suspect is influencing their proposed wording.
I would urge you to not ascribe to malice or incompetence what can be explained by time limitations and different priors. My suspicion is that if you explain where you’re coming from and why you don’t like their proposed wording (and maybe ask why they wanted to change some of the specific things you were suggesting) then they would be able to give you a more helpful response.
> Given other sketchy things I’ve read about them (there is plenty of debate on this site and elsewhere calling them out for bad behavior)
I don’t follow things too closely but would be interested in what you’re referring to, if you could provide any links.
Downvoted for lack of standard punctuation, capitalization, etc., which makes the post unnecessarily hard to read.
Do you mean these to apply at the level of the federal government? At the level of that + a majority of states? Majority of states weighted by population? All states?
Thanks! Reversed :)
Downvoted for burying the lede. I assumed from the buildup that this was something other than what it was, e.g. how a model that contains more useful information can still be bad (say, if you run out of resources for efficiently interacting with it). But I had to read to the end of the second section to find out I was wrong.
Came here to suggest exactly this, based on just the title of the question. https://qntm.org/structure has some similar themes as well.
Re: looking at the relationship between neuroscience and AI: lots of researchers have found that modern deep neural networks actually do quite a good job of predicting brain activation data (e.g. fMRI), suggesting that they are finding some similar abstractions.
Examples: https://www.science.org/doi/10.1126/sciadv.abe7547 https://www.nature.com/articles/s42003-019-0438-y https://cbmm.mit.edu/publications/task-optimized-neural-network-replicates-human-auditory-behavior-predicts-brain
I’ll make sure to run it when I get to a laptop. But if you ever get a chance to set the distill.pub article up to run on Heroku or something, that’ll increase how accessible this is by an order of magnitude.
Sounds intriguing! You have a GitHub link? :)
If you do get some good results out of talking with people, I’d recommend trying to talk to people about the topics you’re interested in via some chat system and then going back and extracting the useful/interesting bits into a more durable journal. I’d have recommended IRC in the distant past, but nowadays Discord seems to be the more modern place where this kind of conversation can be found. E.g. there’s a SlateStarCodex Discord at https://discord.com/invite/RTKtdut
YMMV and I haven’t personally tried this tactic :)