5/23


AI

Congress held a hearing on AI, with testimony from Gary Marcus, Sam Altman, and Christina Montgomery.

My takeaways:

  1. Congress sees its handling of social media as a mistake, driven by a failure to regulate adequately, and there is bipartisan unhappiness on this point. They see AI as analogous in many ways, though the harms differ.

  2. People are taking this seriously.

OpenAI’s thoughts on superintelligence.

Fun game / exercise in prompt engineering.

Interesting piece on applying an environmental justice lens to algorithmic accountability, interesting primarily because the authors are clearly social conservatives. AI is bad because it could “disrupt” existing social relations, with no serious interrogation of whether those disruptions might be good. AI could also interfere with “attachment to particular territories”, which is not a particularly subtle critique of the global citizens that international media and easier translations produce. A general trend as new areas become salient is that people wind up with views that are inconsistent with the rest of their ideology. Whether it’s committed liberals suddenly worried that licensing requirements could create a regulatory moat, or conservatives horrified that companies get to choose not to offend people and pushing for regulation to stop them, expect more ideological cross-pressures when innovation is high.

Relatedly, if you want one reason why the discussions around race and AI are so weird: white people were underrepresented among AI PhD recipients in 2020, and that share is falling (AI PhD recipients, US population by age).

Excellent roundup of news on AI, if you’re into general news. I usually recommend against following general news, but understanding what the median educated elite is reading can be useful.

Chapter from a philosophy book on how AI poses an existential risk. One of my big open questions is how to disentangle cohort effects from time effects in the spread of influential EAs. Some of it is clearly that people made plans in college and those plans are now paying off. Some of it is clearly that EA concerns, particularly AI, are becoming more salient to the general public.

So, the EU is passing legislation on AI. The draft legislation as of May 5 is here. Here’s my favorite critical piece, published May 13. The Future of Life Institute has made a very clear website detailing its strengths and weaknesses.

Personal

I will be in Oxford, London, and the Bay this month: please reach out if you want to connect.

I’ve written a few pieces, but nothing new for public consumption yet. Stay tuned, though.
