OpenAI plans to introduce a ‘TikTok-like’ short-form video product, using Sora to generate the platform’s content.
I would like to encourage people to set a Yoda Timer and think about their personal policy when it comes to this type of algorithmic consumption; that is, a highly addictive app that can, presumably, generate content tailored to very niche subsets of people.
My thoughts (read after your Yoda timer):
I think it is likely quite a dangerous thing to try even once, and I plan to avoid taking even a peek at an app like this, much the same way I don’t take a little hit of fentanyl just to see what it’s like.
I wrote more about this, in a fictional-exploration sort of way, in “GTFO of the Social Internet Before You Can’t”.
A thought I have just had: it would be beneficial for OpenAI to steer user interests toward the same area, to minimize the number of videos they must generate to keep users engaged.
For example: Alice starts out liking dog videos, and Bob starts out liking cat videos. It would be cheaper for OpenAI if Alice and Bob liked the same type of videos, and it would free up compute to be used on other tasks. So they would have an incentive to shift Alice’s and Bob’s interests to the same place; for our example, perhaps bird videos would work. But given the current state of short-form video feeds, I expect what the algorithm finds in ‘things that keep lots of users very engaged’ space is actually more harmful than bird videos.
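To make that incentive concrete, here is a minimal toy sketch in Python. It is entirely my own illustration, with made-up affinities, viewer counts, and a made-up scoring rule; it is not based on anything OpenAI has said about how its feed works. The idea is just that a recommender which discounts each topic by the marginal cost of generating content for it will rank already-popular topics higher, pulling both users toward the same attractor.

```python
# Hypothetical cost-aware recommender: affinity minus the per-user cost of
# serving a topic. Topics many users already watch can reuse generated videos,
# so their marginal cost is near zero and they win the ranking.

from collections import Counter

GENERATION_COST = 2.0  # made-up cost of producing bespoke content for a niche topic

def score_topic(user_affinity: float, viewers_of_topic: int) -> float:
    """Higher is better: the user's affinity minus the amortized generation cost."""
    marginal_cost = GENERATION_COST / (1 + viewers_of_topic)  # reuse amortizes cost
    return user_affinity - marginal_cost

# Alice likes dog videos, Bob likes cat videos, both are lukewarm on birds.
affinities = {
    "alice": {"dogs": 0.9, "cats": 0.1, "birds": 0.6},
    "bob":   {"dogs": 0.1, "cats": 0.9, "birds": 0.6},
}
topic_viewers = Counter({"dogs": 1, "cats": 1, "birds": 800})  # birds already popular

for user, prefs in affinities.items():
    best = max(prefs, key=lambda t: score_topic(prefs[t], topic_viewers[t]))
    print(user, "->", best)  # both get nudged toward "birds"
```

Under these made-up numbers, both Alice and Bob get served bird videos despite preferring dogs and cats respectively; the worry in the paragraph above is that the real shared attractor would be something far less benign than birds.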
Response to your thoughts after the Yoda Timer
Why are you so certain it’s dangerous to try even once, right at the beginning? My guess is that it won’t immediately be particularly compelling, but will become more so over time as they have time to do RL on views or whatever metric they are trying to optimize.
But I also have a large error bar. This might, in the near future, be less compelling than either of us expect. It’s genuinely difficult to make compelling products, and maybe Sora 2 isn’t good enough for this.
To be honest, I’m more concerned about YouTube Shorts in the long term.
My prediction:
Nobody is actually going to use it. The general public has already started treating AI-generated content as pollution rather than something to seek out. Plus, unlike human-created short-form videos, a video generated by a model whose knowledge cutoff is several months in the past (at best) can’t tell you what the latest fashion trends are. The release of Sora 2 has led me to update in favor of the “AI is a bubble” hypothesis because of how obviously disconnected it is from consumer demand.
Why, in the name of Chapin Lenthall-Cleary, did they announce the platform?!
EDIT: Apparently, this degrading line of business has taken over not just Meta with its AI companions and xAI, whose owner was dumb enough not to care about safety in the slightest, but also one of the three companies that was supposed to create ASI and align it with human values. What’s next, the loss of Google DeepMind or Anthropic? Or outright AI takeover in the name of preserving human values?