Maybe the balance of power has changed. Politicians need to win in democratic elections. Democratic elections are decided by people who spend a lot of time online. The tech companies can nudge their algorithms to provide more negative information about a selected politician, and more positive information about his competitors. And the politicians know it.
Banning Trump on social networks, no matter how much some people applauded it for tribal reasons, sent a strong message to politicians across the political spectrum: you could be next. And a ban, at least, is obvious. Pushing the negative news about you onto the first page of Google results while moving the positive news to the second page, or amplifying Facebook posts from your haters while hiding the posts from your fans, would be far harder to prove.
A government takeover of tech companies would require bipartisan action prepared in secret. But how do you prepare anything in secret when the tech companies own all your means of communication (your messages, your staff's messages) and can assign an AI to piece the fragments together and flag possible threats?
I think considerations like these could keep the government from being fully in charge, but the default scenario from here is still that it exerts control over AGI in nontrivial ways.
Interesting points. I think you're right that politicians face pressure to do what tech companies want. But that only applies to some of them (Google and Meta), not to OpenAI or Anthropic, since they don't control media.
I don't think government control would require any bipartisan action. The existing security laws would probably suffice, since AGI is absolutely security-relevant. (I'm no law expert, but my GPT-4o legal consultant thought it likely.) And if it did require new laws, those wouldn't need to be secret.