I still haven’t read most of the sequences and don’t intend to read HPMOR
That’s fine; that’s what Project Lawful is for. It’s meant to be the fun thing you can do instead of watching TV shows and scrolling social media. I like reading it after waking up and before going to bed.
It’s ideal in a lot of ways, because it’s explicitly designed to have you learn rationality through habit and repetition, without any deliberate repetitive effort such as taking notes; that repetition is what’s necessary to actually get good at turning rationality into extreme competence at life.
The EY self-insert character, Keltham, is much more humble and is genuinely interested in the world and people around him (many of whom are “smarter” than him or vigorously intend to surpass him). He’s not preachy; he’s an economist, and most of the rationality lessons are just him describing how things are in an alternate timeline (dath ilan), not insisting that they ought to be his way.
I definitely agree that it’s a good idea to find ways to use EY’s writings to get ahead of the curve and find new opportunities; it would save everyone a lot of time and labor to just implement the original works themselves instead of the usual rewriting of them in your own words and taking credit for it to advance your status. What you’ve said about summarization makes sense, but I’ve tried that and it’s a lot harder than it looks; getting rid of the lists of examples and parables makes the content harder to digest properly. This is an extreme case: it turned the best 25 sequences into what are basically notes (a great alternative to rereading all 25, since you can do it every morning, but not ideal for a first read).
Maybe such a contest could also require the entrants to describe generally-valuable effective strategies to condense EY’s writings?
If you do AI policy, this is a great way to quickly skill up at explaining alignment, and also to quickly skill up on AI itself.
For the record, anyone buying an elastomeric mask should get at least a P100, not anything with a “95” on it. A lot of people made that mistake. The one you suggested seems even better.
There are better people than me to ask about this, such as Steph Guerra. However, it’s possible that, in the US, N95 masks were treated as a valuable commodity early in the pandemic, and the USG bent over backwards to prevent average people from buying and hoarding them. I doubt that there were 200 billion KN95 masks in China at any point in 2020 (that would be roughly 140 masks per person there), but I might be wrong about that.
At some point, by summer 2022 in the US, the emphasis had shifted so that PPE was considered a second priority after vaccination. That’s where the US national security community seems to be right now, but Steph would be a much better person to ask about this, and about all sorts of other things related to PPE.
In between, there is a third thing I know of, related to contact-tracing systems in China. I’m not willing to share information about this kind of thing on a public forum, but if you’re interested you can DM me with more info about yourself; once I have a better idea of who you are and why you might be interested, I’d be happy to share.
I just want to clarify that referencing Vladimir Putin works very well for explaining x-risk/vulnerable world hypothesis/inadequate equilibria to most people. I have done it and it is often very helpful.
In DC, talking like that is hazardous to one’s career; I made that mistake a couple of times in my first year there. People in DC should generally be wary of bringing up things that seem popular on the internet. I’ve had people decide they didn’t want to talk to me (blank stare) after I mentioned Big Data, because to them it was just a buzzword used by people who don’t know what they’re talking about. That’s an extreme case; the best rule of thumb is to avoid talking about politicians or political parties, and especially to avoid expressing strong emotions about them.
given that we live on a planet that includes climate change, over ten thousand nuclear weapons, and Vladimir Putin.
Affirming the popular belief that Putin is somehow equivalent to “ten thousand nuclear weapons” conveys naivety about geopolitics, the kind that will be noticed by any reader familiar with geopolitics, government, or nuclear weapons. Joking about it also conveys naivety, albeit of a somewhat different kind. People who work in and around that sector are not supposed to be influenced by anything that looks remotely like propaganda, regardless of the apparent source or which side seems to be pushing it. At minimum, mudslinging against famous world leaders will be read as unprofessionally getting involved with systems and forces that the author does not understand.
Either way, it indicates to the reader that the piece is meant exclusively for people who are naive about very important facts of how the world works, or else that both the author and the readers are naive, in a way that is taken extremely seriously by extremely influential people. If you only want to appeal to random programmers and the like, I don’t see any issue with it, but people involved in corporate or government decisions are probably worth appealing to as well.
What are some of the most time-efficient ways to get a ton of accurate info about AI safety policy via the internet?
(definitely a dumb question but it fits the criteria and might potentially have a good answer)
An environmentalist gets lunch: “Why being an effective environmentalist can often feel like being a bad one.”
The reason given here is that the things that feel environmentally friendly, and the things advocated for by environmentalists, tend to not be the things that help.
Effective Environmentalism is about maximizing how good people feel about being environmentalists. Anything else would be, like, dry fog or some other impossible thing.
I think it’s pretty clear that more of us should pay attention to the generators of generators of disagreement with AI alignment; the generation process itself is worth considering. It’s really rare to see solid arguments against AI safety such as these, as opposed to total disinterest or vague thought processes, and the fact that it’s been that way for 10-20 years is no guarantee that it will stay that way for even one more year.
I wouldn’t be surprised if training data becomes hard to come by for any reason, including dilution.
writing things down: helpful, has time and depth costs, unclear how useful it is for learning new things
I benefited a lot from re-practicing my handwriting, so that I could take notes as I read the sequences for the first time (which you can only do once).
Taking notes via handwriting is absolutely necessary to learn new things. In school they taught us that we lose around 50% of the material if we don’t take notes, but we ignored that along with all the other lame propaganda it was mixed in with, even though it’s very, very true. Writing to paper is like a computer writing to disk instead of RAM.
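A minimal sketch of that analogy in Python (the filename is hypothetical, purely for illustration): a value held only in RAM is gone when the process exits, while anything written to disk survives the next run.

```python
# Purely illustrative; "notes.txt" is a hypothetical filename.
insight = "takeaway from today's reading"  # held in RAM only: gone when the process exits

with open("notes.txt", "a", encoding="utf-8") as f:
    f.write(insight + "\n")  # written to disk: still there after the next "boot"
```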
And if you’re in the habit of trying to think about things worth thinking about, then that means you’ll tend to come across things worth writing down.
If exercising arm and core muscles strengthens the body, then exercising hand/wrist muscles (while practicing handwriting) strengthens the mind.
Thank you for clarifying this.
If you hadn’t refuted some of the erroneous claims made in this post and in that comment, I would have been very misinformed, in a very damaging way, in the work I do every day.
On balance, this post would still have been very helpful for my analyst work, but it’s even more so thanks to you clearing this up.
This is a very high expected-value topic to cover, and it’s also interesting how modern people groan in annoyance whenever they hear about the concept of being drowned in entertainment. Around 2014 was when I first encountered the concept of instant-gratification desensitization: going beyond Pavlovian training, to making the brain refuse to expect to spend time thinking about things in general. Maybe that means 2014 was when people started getting tired of hearing about it; I’d consider that anecdata.
Just because something is “relevant” and “oft talked about” by large numbers of shallow people and shallow info sources doesn’t mean that it can’t break your brain.
I’m keeping this review around for open-source intelligence use, since it has a lot of material. But I also think this topic is important enough that it’s worth being concise when covering it; the expected value of this area is high enough that communication efficiency is worth maximizing. There are also a lot of specialists who analyze things like social media’s effects on the brain but are completely unwilling or unable to talk about some of what they know, which makes it even more important to improve communication efficiency in the limited ways that are possible.
From a policy angle, this is a great idea and highly implementable, for four reasons:
It fits well into the typical policymaker’s impulse to contain, control, and keep things simple.
It allows progress to continue, albeit in an environment appropriate to the risk.
Spending additional money on containment strategies does not risk a particular AI lab or country ending up “behind on AI” in the near term.
Policymakers are more comfortable starting years ahead of time on simple concepts. In this case: building the cage before you try to handle the animal.
This is an amazing idea.
Right now might be a very bad time to do it.
Measures should be taken to make sure that, if it doesn’t happen now, it happens in a few months. For example, have half a dozen people simultaneously set a notification for December, and also tell themselves “in December, I’m going to schedule a rogue meetup,” in case the notification itself fails to fire or isn’t reasonably noticeable.
5: Evolution could not have succeeded anyways
Evolution had to succeed: in order for evolution to be noticed and/or modeled by anything, the patterns of neurons had to have already aligned. Even if there was only a one-in-a-trillion chance of neurons randomly forming a correct general intelligence, anywhere, ever, observers would only ever find themselves in the worlds where that happened. So the fact that we came from neuron brute-forcing doesn’t tell us much about whether neuron brute-forcing can reliably create general intelligence.
Animals and insects aren’t additional evidence, either; given that intelligence evolved at all, you’d expect plenty of offshoots.
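A minimal Monte Carlo sketch of that selection effect (a toy model with made-up base rates, not anything from the post): however rare success is across worlds, observers who can only exist in successful worlds always see a success rate of 1, so that observation carries no information about the true base rate.

```python
import random

# Toy observation-selection model (hypothetical numbers): each "world"
# produces general intelligence by brute force with base rate p, and an
# observer exists only in the worlds where that succeeded.
def run(p, n_worlds=1_000_000):
    worlds = [random.random() < p for _ in range(n_worlds)]
    true_rate = sum(worlds) / n_worlds
    observer_worlds = [w for w in worlds if w]  # condition on an observer existing
    observed_rate = (sum(observer_worlds) / len(observer_worlds)
                     if observer_worlds else float("nan"))
    return true_rate, observed_rate

for p in (0.1, 1e-3, 1e-6):
    true_rate, observed_rate = run(p)
    # Whenever any observer exists at all, the rate they see is exactly 1.0,
    # regardless of how small p was.
    print(f"p={p:g}: true rate {true_rate:.2e}, rate seen by observers {observed_rate}")
```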
This looks like it’s worth a whole lot of funding.
Reading this has been an absolute fever dream. That’s not something that happens with writing that’s mostly or totally inaccurate, like the various clickbait articles from major news outlets covering AI safety.
One thing it seems to get wrong is the typical libertarian impulse to overestimate the sovereignty of major tech companies. In the business world they are clearly the big fish, but on the international stage it’s pretty clear that their cybersecurity departments are heavily dependent on logistical and counterintelligence support from various military and intelligence agencies. Corporations might be fine at honeypots, but they aren’t well known for being good at procuring agents willing to spend years risking their lives operating behind enemy lines.
There are similar and even stronger counterparts in Chinese tech companies. Both sides of the Pacific have a pretty centralized and consistent obsession with minimizing the risk of ending up weaker on AI, starting in 2018 at the latest (see page 10).
AI engineers seem to me to be a particularly sensitive area; they’re taken very seriously as a key strategic resource.