Jessica Taylor. CS undergraduate and Master's degrees from Stanford; former research fellow at MIRI.
I work on decision theory, social epistemology, strategy, naturalized agency, mathematical foundations, decentralized networking systems and applications, theory of mind, and functional programming languages.
Blog: unstableontology.com
Twitter: https://twitter.com/jessi_cata
I’ve probably read less sci-fi / futurism than you have. At the meta level, this is interesting because it shows the sort of strange, creepy outputs produced by Repligate and John Pressman (so I can confirm that their outputs are the sort LLMs produce). For example, this is on theme:
At the object level, it prompted me to consider ideas I hadn’t previously thought through in detail:
AIs will more readily form a hive mind than humans will (seems likely).
There will be humans who want to merge with AI hive minds for spiritual reasons (seems likely).
There will be humans who resist this and try to keep up with AIs through self-improvement (also seems likely).
Some of the supposed resistance will actually be leading people toward the hive mind (seems likely).
AIs will at times coordinate around the requirements of reason rather than around specific other terminal values (seems likely, at least at the LLM stage).
AIs will be subject to security vulnerabilities due to their limited ontologies (seems likely, at least before a high level of self-improvement).
AIs will find a lack of meaning in a system of signs pointing nowhere (unclear; more true of current LLMs than of likely future systems).
It’s not so much that its ideas are, by themselves, good futurism; rather, critiquing and correcting those ideas can lead to good futurism.