The first thing I can remember is learning, at age 3, that I would die someday, and crying about it. As a teenager I got my hopes up about radical technological progress (including AGI and biotech) extending lifespan, and I lost most of that hope (and cried the most I ever have in my life) upon realizing that AGI probably wouldn't save us within our lifetimes, because alignment was too hard.
In some sense this outcome isn't worse than what I thought was fated at age 3, though. I mean, if AGI comes too soon, then I and my children (if I have them) won't have the 70–80-year lifetimes I expected, which would be disappointing; I don't think AGI is particularly likely to be developed before my children die, however (a minority opinion around here, I know). There's still some significant chance of radical life extension and cognitive augmentation from biotech assisted by narrow AI (if AGI is sufficiently hard, which I think it is, though I'm not confident). And as I expressed in another comment, there would be positive things about being replaced by a computationally massive superintelligence solving intellectual problems beyond my comprehension; I think that would comfort me if I were in the process of dying, although I haven't tested this empirically.
Since having my cosmic hopes deflated, I have narrowed the scope of my optimization to more human-sized problems: creating scalable decentralized currency/contract/communication systems, creating a paradigm shift in philosophy by re-framing and solving existing problems, assembling an excellent network of intellectual collaborators who can solve and re-frame outstanding problems and teach the next generation, and so on. These are still ambitious, and could possibly chain to harder goals (like alignment) in the future, but they are more historically precedented than AGI or AI alignment. So I appear very pessimistic (about my problem-solving ability) compared to when I thought I might be able to ~singlehandedly solve alignment, but still very optimistic compared to what could be expected of the average human.
Oh? Do say more
Mostly scalable blockchain systems at this point; I have some writing on the problem hosted at gigascaling.net.
What paradigm shift are you trying to create in philosophy?
The sort of thing I write about on my blog. Examples:
Attention to “concept teaching” as a form of “concept definition”, using cognitive science models of concept learning
“What is an analogy/metaphor?”, and how analogies and metaphors apply to “foundations” like materialism
Reconciling “view from nowhere” with “view from somewhere”, yielding subject-centered interpretations of physics and interpretations of consciousness as relating to local knowledge and orientation
Interpreting “metaphysics” as about local orientation of representation, observation, action, etc., yielding computer-science-y interpretations of apparently-irrational metaphysical discourse (“qualia are a poor man’s metaphysics”)
Sounds interesting. Hopefully I'll come back and read some of those links when I have more time.