The link doesn’t work for me.
Interesting! But I downvoted since it’s a comment, not an answer.
Better, but I still think “myopia” is basically misleading here. I would go back to the drawing board *shrug*.
Maybe some, but I think that’s a bit beside the point… I agree there’s a genuine trade-off, but my post was mostly about AF. I’m mostly in LW/AF for AI Alignment content, and I think these posts should strive to be a bit closer to academic style. A few quick thoughts:
- include abstracts
- say whether a post is meant to be pedagogic or not
- say “you can skip this section if”
- follow something more like the format of an academic paper
- include a figure towards the top that summarizes the idea for someone with sufficient background, with a caption like “a summary of [idea]: description / explanation”
It seems a bit weird to me to call this myopia, since (IIUC) the AI is still planning for future impact (just not on other agents).
As an academic, I typically find LW/AF posts to be too “pedagogic” and not skimmable enough, which limits how much of them I read. Academic papers are, on average, much easier to extract a TL;DR from. Being pedagogic has advantages, but it can be annoying if you are already familiar with much of the background and just want to skip to the (purportedly) novel bits.
I think the contradiction may only be apparent, but I thought it was worth mentioning anyways. My point was just that we might actually want certifications to say things about specific algorithms.
Second, we can match the certification to the types of people and institutions; that is, our certifications talk about executives, citizens, or corporations (rather than, e.g., specific algorithms, which may be replaced in the future). Third, the certification system can build in mechanisms for updating the certification criteria periodically.
* I think effective certification is likely to involve expert analysis (including by non-technical domain experts) of specific algorithms used in specific contexts. This appears to somewhat contradict the “Second” point above.
* I want people to work on developing the infrastructure for such analyses. This is in keeping with the “Third” point.
* This will likely involve a massive increase in the investment of AI talent in the process of certification.
As an example, I think “manipulative” algorithms—that treat humans as part of the state to be optimized over—should be banned in many applications in the near future, and that we need expert involvement to determine the propensity of different algorithms to actually optimize over humans in various contexts.
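To make “optimizing over humans” concrete, here is a minimal toy sketch (all names, numbers, and dynamics are invented for illustration; this is not any real system): a recommender that plans through a model of how its actions change the user can decide it pays to “hook” the user first, while an otherwise-identical planner that treats the user as fixed cannot.

```python
# Toy sketch (hypothetical names/numbers) contrasting a planner that treats
# the human as fixed with one that treats the human as part of the state
# to be optimized over.

def engagement(action, p):
    # "safe" content gets steady engagement; "hooky" content engages
    # users only in proportion to how hooked (p) they already are.
    return 0.6 if action == "safe" else p

def step_user(action, p):
    # Toy dynamics: hooky content makes the user more hooked over time.
    return p if action == "safe" else min(1.0, p + 0.3)

def plan(p0, horizon, optimize_over_human):
    # Pick the action to repeat that maximizes total predicted engagement.
    best = None
    for action in ("safe", "hooky"):
        p, total = p0, 0.0
        for _ in range(horizon):
            total += engagement(action, p)
            if optimize_over_human:
                # The "manipulative" planner models how its own actions
                # change the user's state, and plans through that change.
                p = step_user(action, p)
        if best is None or total > best[0]:
            best = (total, action)
    return best

print(plan(0.2, 5, optimize_over_human=False))  # -> (3.0, 'safe')
print(plan(0.2, 5, optimize_over_human=True))   # -> (3.5, 'hooky')
```

The behavioral difference here comes entirely from whether the user’s state sits inside the planner’s optimization loop, which is the sort of propensity I’d want expert analysis to probe in context.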
I see that it has references to papers from this year, so presumably it has been updated to reflect any changes in view.
I wonder if gwern has changed their view on RL/meta-learning at all given GPT, scaling laws, and current dominance of training on big offline datasets. This would be somewhat in line with skybrian’s comment on Hacker News: https://news.ycombinator.com/item?id=13231808
One of the major, surprising consequences of this is that it’s likely to become infeasible to develop software in secret.
Or maybe developers will find good ways of hacking value out of cloud-hosted AI systems while keeping their actual code base secret. E.g., maybe you have a parallel code base that is developed in public, and a way of translating code-gen outputs there into something useful for your actual secret code base.
Honest question: Why are people not concerned about 1) long COVID and 2) variants?
Is there something(s) that I haven’t read that other people have? I haven’t been following closely...
My best guess is:
1) There’s good reason to believe vaccines protect you from it (but I haven’t personally seen that evidence)
2) We’ll hear about them if they start to be a problem
1/2) Enough people are getting vaccinated that rates of COVID and infectiousness are low, so it’s becoming unlikely to be exposed to a significant amount of it in the first place.
From that figure, it looks to me like roughly zero protection until day 10 or 11, and then near-perfect protection after that. Surprisingly non-smooth!