I want you to figure out what personalized AI fiction/music/poetry/film should look like and create a bunch of it for our community
Curious what the story is for this being particularly important? (obviously I see why it’s an intuitively Raemon-shaped one, not sure if it was more like “this seems actively good” or “idk I wanna reroll Ray on something-or-other and this vaguely matches his vibe”)
Lightcone mostly works on separate projects, 1-2 people to a team, and it’s pretty normal to divide us up across them. And we mostly do our strategizing at a level that includes “maybe put LW on maintenance mode”. In practice we obviously keep deciding “keep working on LW”, but it’s often only 1-2 core team members particularly focusing on it.
So, broadly, it seems “within the Lightcone paradigm” to divide everyone up and send them on random projects.
I agree I do basically want Oliver designing a political philosophy for the AI age.
FYI my current prioritization for myself for this year is “work on ‘Influency’ things, such as the followup to AI 2027 and whatever other things we can find that seem tractiony, in an attempt to wake up the world and get people thinking ‘okay, how we navigate AI really can’t be politics-as-usual.’”
I have considered “engage with the broader world about AI art and meaningmaking in the next few decades” (which I’d also lump under “Influency”).
I think last year and this one are a limited window of time to do Influency AI things with much leverage, and I expect next year to shift back towards “look for ways to differentially accelerate illegible AI alignmenty stuff”.
(I’m not sure about the sign of AI 2027; I think the sign of the followup looks more obviously positive if it gets traction, but I think it’s reasonable to disagree on that.)
FYI my current prioritization for myself for this year is “work on ‘Influency’ things, such as the followup to AI 2027 and whatever other things we can find that seem tractiony, in an attempt to wake up the world and get people thinking ‘okay, how we navigate AI really can’t be politics-as-usual.’”
I feel very meh about “wake up the world”, firstly because AI capabilities companies are going to do it for us, and secondly because whether it’s good or bad depends a lot on the quality of what we funnel the world towards, and right now we really don’t have many robustly good things to funnel the world towards (we don’t even really have good things to funnel EAs towards).
Also, you can’t rely on people who need to be “woken up” right now to actually do high-quality thinking about this stuff.
Hence I also disagree with “I think last year and this one are a limited window of time to do Influency AI things with much leverage”.
Curious what the story is for this being particularly important? (obviously I see why it’s an intuitively Raemon-shaped one, not sure if it was more like “this seems actively good” or “idk I wanna reroll Ray on something-or-other and this vaguely matches his vibe”)
Story something like “imagine if meaning-making becomes 100x easier in the next few years than it is today. We sure would want people trying hard to make a lot of meaning!” I think lumping this under “influency” things is self-defeating, though: you actually need to be trying to solve some problem (and then the influence may come later) rather than trying to cater to what you think other people want from you.
I feel very meh about “wake up the world”, firstly because AI capabilities companies are going to do it for us, and secondly because whether it’s good or bad depends a lot on the quality of what we funnel the world towards
These two points seem contradictory. AI capabilities companies aren’t going to do the good thing for us. (Or maybe you think they are? But I’m a bit surprised if you think that.)
Yeah, good point. My intended synthesis here is that AI companies “wake up the world” in the sense of getting everyone to pay attention, but struggle to then command the resulting narratives. So (as in the wake of ChatGPT) there’s a lot of room for good narratives after capabilities breakthroughs. But our narratives are still pretty weak. So work that’s intended to “wake up the world” should mostly be focusing on figuring out what you’d say if you had the world’s attention rather than getting the world’s attention.
Yeah, I get that you disagreed with the frame; I was just noting “this is my current crux.”
I guess it actually sounds (from various past bits of convo I’ve heard from you) like you have a fairly different theory-of-existential-safety-victory than me, and I’m not sure if you’ve written it up anywhere. Have you?
My guess is that not having long-term full ownership means that actual work on these projects goes less deep, e.g. when people are assigned for a few weeks or months to typically narrower tasks.