I think that everything can (and ought to) be framed as an ‘audition.’ The reframing proved useful once I started viewing observable actions as always carrying some cost or producing some output. When you treat every action or inaction as an ‘audition,’ it lends a certain weight to one’s choices, and (at least for the few people I’ve talked to about it) encourages a sense of intention. I have a longer draft with flags for hyperlinks, but the shape of the framework is as follows:
1. The Person sees value in attaining access to or possession of a thing (a Goal).
2. If there is an observed/functional gap between the Person’s current skill and the Goal:
   - The Person attempts the feat and fails (by their own measure, by external feedback, or by the lack thereof).
   - Depending on their level of motivation/incentive/pressure to continue, the Person persists and gains insight into ways to decrease the distance between Current Skill and Goal.
   - Inspired, the Person learns, iterates, and tries again (back to Step 1).
3. If there is no observed/functional gap between the Person’s current skill and the Goal:
   - The Person attempts and successfully performs the feat, achieving the Goal.
   - Inspired by success, the Person sets their sights on new Goals (back to Step 1).
Examples include many cross-media artists, entrepreneurs, and dilettantes who eventually learned how to apply skills gathered across their careers to make diagonal leaps in status.
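For concreteness, the loop above can be sketched in code. This is a toy model, not a claim about people: `skill`, `goal_difficulty`, and `motivation` are invented scalar stand-ins for things that are obviously not scalar in real life.

```python
def audition_loop(skill: int, goal_difficulty: int, motivation: int,
                  max_attempts: int = 10):
    """Toy sketch of the 'audition' framework: returns the attempt number
    on which the Goal is achieved, or None if the Person gives up."""
    for attempt in range(1, max_attempts + 1):
        if skill >= goal_difficulty:
            # No observed/functional gap: the feat succeeds. (Step 1 would
            # restart here with a new Goal, outside this sketch.)
            return attempt
        # A gap exists, so the attempt fails; whether the Person continues
        # depends on remaining motivation/incentive/pressure.
        if motivation <= 0:
            return None
        skill += 1       # learns and iterates, narrowing the gap
        motivation -= 1  # each failed audition costs something
    return None
```

For example, `audition_loop(0, 3, 5)` succeeds on the fourth attempt, while `audition_loop(0, 3, 2)` runs out of motivation before closing the gap.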
There are a lot of anti-decel and accelerationist takes, and as an optimistic person I love them!
“We couldn’t fathom product managers in the 1940s!”
“Imagine having an email job when half your kids could die in the coal mines!”
“You know, they didn’t have the concept of a weekend back then!”
I think these takes are true, and there’s a genuine chance of transformative change that works out well within our lifetimes!
On the other hand, there’s a lot of uncertainty, and what gets lost is that people are talking past one another: accelerationists and anti-AI groups both speak around the transitional period between now and any post-AGI utopia.
The transition from 2026 → that future is bound to play out slower in the labor market than politicians are willing to admit, much less respond to. Policy changes take time: the 2020 COVID stimulus packages fueled a surge in consumer spending that has since fed into a consumer credit default crisis (moralizing aside, consumerism is a worthy cornerstone of an economy because it can be socially infectious; see “keeping up with the Joneses”). Wages also move slower than prices do.
Frontier labs with a profit motive have a fiduciary obligation to build toward the valuations they’re trading at in private markets. Consumer spending is a backbone of US GDP, so one can foresee a future where most people in consumerist economies carry non-trivial amounts of non-mortgage debt in order to keep spending, despite not being able to earn enough to own land.
Imagine this convo with the average person making less than $150k:
You: “Hey, you and most of the people in or below your tax bracket are probably going to lose your job.”
Them: “I’m sorry what?”
You: “I don’t know when, and I can’t tell you if you’ll be retrained, if you’ll ever have the same earning power you have right now, or if you’ll ever afford a home. Maybe you move back in with your family, or take on six figures of debt at the same time as 80% of the people your age.”
Them: “That sounds really bad.”
You: “But it’s alright! I can tell you what I’m working on will make all of this happen sooner. In the future, everything will be figured out!”
Them: “Will my family be safe?”
You: “If you survive, the future will (maybe) be beyond what your recovering mind can fathom!”
Them: “How do I stop that from happening?”
And when the conversation breaks down because you’re incentivized to accelerate that future, they’ll stop listening to you, and probably get radicalized on TikTok. That’s a really big problem.
If you assume [Time until Frontier Labs must become profitable] < [Time when AI eliminates labor scarcity], and you intend to be conscious during those intermediary days/weeks/years, you may consider that jailbroken models can teach a person how to make signal jammers, bioweapons, or missile launchers, and jeopardize the chance of you surviving until that Fully Automated Luxury [Capitalist] utopia is here. And that might make you worry.
Economic indicators keep changing. Goalposts move in order to hide unflattering economic stories like inflation or job loss (jobs numbers being released, then revised from positive X to negative X; ‘unemployment’ carrying a cultural connotation, despite its denotation covering only a subset of (1 − [share of workers in the labor force])).[1]
What this means is that one can distract from the fact that, collectively, the workforce is feeling the consistent pressure of rising prices, falling wages, and mass layoffs. If the most active voters are homeowners whose wealth has been earned and parked in retirement accounts (in other words, in the stock market), seeing the number go up on all fronts may feel like winning, even if it means your kids will have to sell the house to cover the cost of your elder care.
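To make the denotation point concrete, here is the arithmetic with made-up round numbers (all figures below are illustrative, not real data). The headline unemployment rate divides by the labor force, so a discouraged worker who stops looking for work vanishes from it entirely:

```python
working_age_population = 260_000_000
labor_force = 170_000_000  # employed + actively seeking work
employed = 160_000_000

# Headline unemployment only "sees" people in the labor force...
unemployment_rate = (labor_force - employed) / labor_force
# ...while this ratio counts everyone of working age.
employment_to_population = employed / working_age_population

print(f"unemployment rate:        {unemployment_rate:.1%}")
print(f"employment-to-population: {employment_to_population:.1%}")
```

With these invented numbers, headline unemployment reads about 5.9% while only about 61.5% of working-age people are employed; if discouraged workers exit the labor force, the first number falls while the second stays flat.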
If a plurality of the liquid gains from AI go to a fraction of the people who would have made life-changing wealth in prior boom/bust cycles, and if it holds that “most decamillionaires only need as many cars and homes as 2-5 people, even though they have the spending power of hundreds,” that could look a lot like a patronage economy. Startup founders, artists, and everyone in between would need to ingratiate themselves with holders of capital[2] and hope those people want to spend that capital on their idea (or event, or startup, or art) and pay them for their services. And in a world that keeps ratcheting up avenues of surveillance, private access to public surveillance becomes more likely to be used to assess whether or not a vassal is loyal.
If the US needs UBI to offset the dent in consumer spend that will occur as AI ripples through the services economy, then there’s the question of UBI’s fungibility:
If it’s distributed directly as some non-cash currency, we’ve reinvented plantation and company scrip. If the way SNAP and EBT are treated is any indication of how non-cash currencies in the US will be received, then barring sweeping policy changes to make this new alt-currency legal tender, there will presumably be fungibility issues affecting its exchange rate, the social implications of spending or receiving it, etc.
If it’s legally as fungible as cash, we’ll need more policy work to effectively reward circulating discretionary spending through the economy. As is, charitable donations do help, but the money has likely already been taxed by the time individuals donate it, meaning the amount that gets spent has already shrunk.
If the government keeps printing money, inflation will drown out the consumer spend attributed to the dividend.
If it comes directly from businesses, it likely won’t be enough at scale[3].
I imagine that, to keep the economy going, a subset of the top 1% will have to be incentivized to spend or disburse money toward consumer spending (e.g. something like a non-profit tax deduction for contributing to a person or group’s AI Dividend, or for contracting them for a job).
I wonder if the thing that becomes most scarce in this world may be ‘debt-free capital.’
For those curious, look up the ‘employment-to-population ratio’ in the US. It’s strange to think that having somewhere between 50-60% of working-age US persons employed is the standard.
Before you say, “Sounds like you need your customer to want to buy from you”: I’m inclined to agree! And yet I can see how some holders of capital dislike certain unchangeable traits that others are born with. At scale, that becomes a deviation from the implicit notions of meritocracy that prevailed back when signaling woke virtue was more common. A lot of people grew up assuming the world was a little fairer than it was, and are now watching people deliberately make it less fair.
I’m no economist, but I’m sure there’s a way to say quantitatively: for every $10,000 earned in a consumer-credit-driven economy, the person has $10,000 × (1 + x%) of ‘spending power,’ where x scales nonlinearly with how consistently they earn and how many assets they can borrow against.
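A minimal sketch of what that might look like, assuming (purely for illustration) that credit access grows with the square root of income and that assets can be borrowed against at some loan-to-value ratio. The function name, the 0.1 coefficient, the exponent, and the LTV default are all invented:

```python
def spending_power(annual_income: float, assets: float = 0.0,
                   ltv: float = 0.5) -> float:
    """Toy model: earned dollars plus the credit they unlock.

    The credit multiplier itself grows with income, so absolute
    spending power rises faster than earnings do.
    """
    # Invented rule: every quadrupling of income doubles the multiplier.
    credit_multiplier = 0.1 * (annual_income / 10_000) ** 0.5
    return annual_income * (1 + credit_multiplier) + assets * ltv
```

Under these made-up parameters, $10,000 of income yields $11,000 of spending power while $40,000 yields $48,000: the ratio of spending power to income rises with earnings, and borrowable assets add more on top.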
American healthcare is probably the biggest dark horse blocker to AI acceleration.
https://x.com/binarybits/status/2046299669773623567
This tweet has some snippets from this essay: https://aleximas.substack.com/p/what-will-be-scarce