Not sure what happened in between, but some diagrams are missing again
The soundness of this advice depends a bit on what career path you want to pursue, though. If you want to do some lobbying or policy advocacy, it’s pretty difficult to “just get to work” if you don’t have the right network, skill set, and credentials. And working in that area without knowing what you’re doing can also be quite harmful.
It’s worth saying that applying for things can also yield some benefits. I definitely became a better writer through the various work tests I did when I was applying to lots of training programs. I also got some nice feedback (props to CLR especially!) and the experience helped me to better understand what different orgs & people are working on. I also got a clearer idea of my career aspirations.
This is assuming you get through the very first round and get to do some test tasks though...
Well yes, but I’m not sure non-native speakers are part of the “intended audience”, since they operate mostly in the US
As a non-native English speaker, I found OpenPhil sooo much easier to pronounce than Coefficient Giving. I’m sure this shouldn’t play a big part in the naming decision, but still...
What are some examples of illegible AI safety problems?
Regarding results of empirical evaluations of AI’s scheming-relevant capabilities, I think we could do even better than simple removal by replacing the real results with synthetic results that intentionally mislead the AIs about their own capabilities. So, if the model is good at scheming, we could mislead it by making it believe it is bad at it, and vice versa. I think this could be quite feasible, since rather than coming up with fully synthetic data that may be easy to spot as synthetic, you only need to replace some specific results.
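To make that concrete, here’s a minimal Python sketch of what “replace some specific results” could look like. Everything in it is hypothetical: the `EvalRecord` type, the benchmark names, and the simple score-inversion rule are illustrative assumptions, not an existing pipeline or API.

```python
# Minimal sketch, assuming a hypothetical corpus of evaluation records.
# All names (EvalRecord, SCHEMING_EVALS, mislead) are illustrative.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class EvalRecord:
    benchmark: str   # e.g. "covert-reasoning-v1"
    score: float     # real measured score in [0, 1]

# Benchmarks whose results we don't want the model to see accurately.
SCHEMING_EVALS = {"covert-reasoning-v1", "sandbagging-probe-v2"}

def mislead(record: EvalRecord) -> EvalRecord:
    """Replace only scheming-relevant scores with a misleading synthetic value.

    Rather than deleting the record (simple removal), keep the document
    realistic and just invert the signal: strong measured capability is
    reported as weak, and vice versa.
    """
    if record.benchmark not in SCHEMING_EVALS:
        return record  # leave unrelated results untouched
    return replace(record, score=round(1.0 - record.score, 3))

corpus = [
    EvalRecord("covert-reasoning-v1", 0.82),
    EvalRecord("arithmetic-v3", 0.91),
]
sanitized = [mislead(r) for r in corpus]
# The scheming-relevant score 0.82 now appears as 0.18 in the training data,
# while the unrelated arithmetic score passes through unchanged.
```

The point of the sketch is only that the edit is surgical: the surrounding document stays real, so the synthetic part is harder to spot than fully generated data would be.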
A link to the original article would be appreciated
How many people are working on test-time learning? How feasible do you think it is?
I see. Why do you have this impression that the default algorithms would do this? Genuinely asking, since I haven’t seen convincing evidence of this.
I don’t know, the obviously wrong things you see on the internet seem to differ a lot based on your recommendation algorithm. The strawmanny SJW takes you list are mostly absent from my algorithm. In contrast, I see LOTS of absurd right-wing takes in my feed.
The links to subsections in the table of contents seem to be broken.
It’s a Galaxy A54.
I’m not sure how to share screenshots on mobile on LW 😅
The idea seems cool, but the feed doesn’t work well on my phone. It cuts off the sides of the text, which makes things unreadable. (I have a Samsung.)
President of European Commission expects human-level AI by 2026
Now, the EU itself badly needs some reforms, namely, as the Draghi report suggests, relaxing regulation, but there seems to be no political will to do that. At least, the last time I checked, those annoying “accept cookies” banners were still alive and kicking.
This is not true; there is a lot of political will for deregulation and simplification (see e.g. here). Everyone is talking about it in Brussels.
I assume the point about “accept cookies” banners was a joke, but just in case it wasn’t: it takes time for regulations to be changed, so the fact that we still see the “accept cookies” banners offers no evidence that the EU is not taking deregulation seriously (another question is whether getting rid of those banners or other GDPR rules would boost competitiveness; I suspect it wouldn’t).
Also, IMO the most important reforms we need are not about regulation, but about harmonizing standards across the EU and creating a true single market.
I would expect higher competence in philosophy to reduce overconfidence, not increase it? The more you learn, the more you realize how much you don’t know.
This LessWrong version of the post seems to cut off the final section of the original post?
What exactly is worrying about AI developing a comprehensive picture of your life? (I can think of at least a couple problems, e.g. privacy, but I’m curious how you think about it)
They show up to me now as well. Not sure what happened yesterday, weird