That strategy (assume everything is fake) removes the ability to learn about the world beyond what you personally can experience and verify. Fairly limiting.
Text has been plausibly fakeable since its invention, yet we have devised ways of trusting it in certain circumstances; otherwise you wouldn't be on this forum. It puts the onus back on you to up-weight your belief in sources where you have high trust in the chain of custody of the information, or in whatever other means of vetting the source you can devise.
I think that in the present ecology of the Internet, given the incentives for strangers to jockey for position in your attention queue, the status quo is that most of what is put in front of us is already adversarially placed there (e.g., for clicks, ads, or propaganda). It doesn't presently feel that way, though, because fakery has been expensive, so we sometimes falsely assume content is genuine. In a world where the cost of fakery goes to zero, it will be impossible to maintain the illusion of authenticity.
It’s worth separating the different concerns about “fake”. Like text, there’s been plenty of staged/incorrect/fictional audios and images since the invention of each medium. Lies are as old as speech.
The big concern recently is that the production and distribution of this kind of media has become decentralized over the last few decades: it's no longer a small number of publishers that earned trust as the primary channels for such material, clearly labeled as fiction or not. This combines with the ease of fakery, which removes other clues to provenance, to make almost nothing trustworthy.
Consider this subset of the hierarchy of relevant states of the world, from good to bad:

1. Videos are inherently true and everyone knows it.
2. Videos can trivially be false and everyone knows it.
3. Videos can easily be false or misleading, but most people think they're true.
I think there is a case to be made that we are de facto in state #3 now, and that AI video generation will move us into state #2. While #2 is far worse than #1, moving from #3 to #2 is an improvement rather than a deterioration (I used to be convinced it would be a deterioration, but am now updating my thinking).
Just to mention: once we are firmly in #2, then our trust in video should be, just like our trust in text, based almost entirely on our priors about the source of the content. This doesn’t mean we’ve repudiated text and similarly I don’t think it’ll mean we repudiate video.
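The "trust based on priors about the source" idea can be made concrete with a toy Bayes'-rule calculation. The numbers below are purely illustrative assumptions, not measured rates; the point is only that when fakes are as cheap as real footage, seeing a video stops moving your belief at all:

```python
def posterior_real(prior_real, p_video_given_real, p_video_given_fake):
    """Bayes' rule: probability an event is real, given that a video
    of it exists, under assumed likelihoods of such a video appearing."""
    num = p_video_given_real * prior_real
    den = num + p_video_given_fake * (1 - prior_real)
    return num / den

# A trusted outlet where fakes are rare: a video moves belief a lot.
trusted = posterior_real(prior_real=0.5,
                         p_video_given_real=0.9,
                         p_video_given_fake=0.05)
print(round(trusted, 3))    # → 0.947

# An anonymous feed where fakery is free: the same video is no evidence.
anonymous = posterior_real(prior_real=0.5,
                           p_video_given_real=0.9,
                           p_video_given_fake=0.9)
print(round(anonymous, 3))  # → 0.5, no better than the prior
```

In other words, in state #2 the likelihood ratio of a video from an unknown source approaches 1, so the posterior collapses to your prior about the source, which is exactly how we already treat text.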
It is debatable whether we are in state #3 now, but I feel we have been increasingly so for years before lifelike AI video generation. It's not that the pixels in the videos were artificially generated; rather, the Darwinian ecology of the modern Internet means that the videos most likely to land on the radar of someone who is not aggressively curating and playing defense are there for ulterior motives (ads, clicks, whatever), purposes for which truth is nearly irrelevant. If you approach content consumption without active measures to avoid being manipulated (and even then!), you are unlikely to take away anything from the text or video that bombards you that approximates truth in any meaningful sense.
Worth noting that the transition from #3 to #2 is likely to be bumpy.
#1 is purely impossible and irrelevant.
#2 and #3 depend heavily on what "trivially" and "easily" mean, and #3 also depends on why most people hold this clearly false belief. #2 and #3 are the same state of the world, just with a slightly different mix of fools.
My main point is that the deepfakepocalypse is not itself the main or only cause of the lack of trust in video evidence. This has been going on for a long time, and there are plenty of people who deny the veracity of some pretty-well-documented true photos from decades ago.