What would you say about the many countries trying to prevent kids from creating social media accounts?
As for the Threshold effect, I fail to understand how it applies to AI. Suppose that person A uses ChatGPT and person B uses Claude. Unlike messengers and social media platforms, I don’t see any network effects[1] preventing A from trying out B’s favorite LLM and vice versa.
At most, governance would require data portability, which would make migration trivial.
Finally, suppose that the USA and China did achieve perfect governance over their respective labs, OpenBrain and DeepCent. How would this prevent the race to the bottom that happens in AI-2027?
With the exception of attempts to spread rumors like “xAI is 7 months behind, not 3 months behind” or “Meta is severely behind, since the rumors were created before Meta released Muse Spark”.
@StanislavKrym Great questions! Honestly, I have mixed views on this, but my central position is that age verification and bans are unlikely to be effective in the way they’re being presented. However, I do think it will be extremely effective at eroding privacy and anonymity online (and a leading source of future data breaches). But I don’t think it will fundamentally protect kids.
To be candid, I don’t think protecting kids was the driving motivation for many of the proponents. Governments, and especially the facial-recognition and age-verification providers who lobbied heavily for these laws, want data, and it’s much easier to garner support when an effort wears the cape of protecting kids or fighting terrorism than to admit you’re keen to build a database of everyone’s biometric info.
As to your second question, I think you’re right to a certain extent, and that’s what makes tech extensity different from classical horizontal monopolies. Tech extensity doesn’t require a single entity to set or control the market. The network effects, as they are, come from the fact that to function in society at all, you’ll eventually be forced to use one of the hyperscalers, because their products will be everywhere. I explore what this might look like in another post on my blog (https://insights.priva.cat/p/the-ladder-to-nowhere-how-openai). Kulveit, Douglas, Ammann, et al., discuss a related idea in their Gradual Disempowerment article.[1]
Data portability is useful, but I don’t think it will solve the underlying problem. To analogize, you have ‘data portability’ with respect to the internet, in the sense that you can migrate your data offline to a filing cabinet. But migrating your data doesn’t make life easier, because so many of the systems in our lives depend on the internet.
To your last point: I’m not sure I follow exactly? If a button could be pressed and the tech powers were constrained (or better still, roadblocks were put up to protect agency, autonomy, choice, etc.), that would act as a bulwark against companies enshittifying, by adding friction. It’s similar to how data silos in government prevent, or at least slow down, unconstrained data abuses.
For example, I think the experts are correct to predict that there will be a tax shortfall in the US, after DOGE dismantled data silos and disregarded privacy protections that previously kept the IRS from sharing data with ICE.
Am I missing a larger point?
I wasn’t aware of this when I wrote the piece in February. @Katalina Hernandez shared it with me yesterday, and there are overlaps with my larger point. I have … some thoughts on why their approach misses the mark, however.