Mon, Oct 5, 2020
Read in 1 minute
There are both nefarious and more benign, entertainment-oriented use cases for generating AI media, in both still-image and video formats. Either use has the potential to affect the psychology of its viewers. Because of this, we propose a labeling system to separate the release of official media from any potential deepfake counterparts. Consumers must be able to properly recognize and process the verified content they are viewing. IPFS could be a front-runner in storing that content.
No scenario is more frightening than that of nation-state cyber wargames using deepfakes to manipulate foreign or domestic actors. The nefarious use of deepfake technology could instigate war, affect election outcomes, or stir up animosity toward a nation’s enemy or ally alike.
This is no longer a hypothetical situation. Generative Adversarial Networks (GANs) have advanced rapidly since their inception in 2014.
Internationally, there have already been cases of AI-generated deepfakes being used to influence political discourse, as well as claims that authentic footage was a deepfake, made in order to deny its authenticity.
When content is stored on the IPFS network, it is hashed, and that hash uniquely identifies whatever content is stored there, along with its associated metadata. Any change to the data produces a different hash. IPFS is a peer-to-peer network designed as a distributed, decentralized, and secure file system.
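The core idea can be sketched in a few lines. This is a simplified illustration using a plain SHA-256 digest; real IPFS addresses are multihash-encoded CIDs, but the principle is the same: the address is derived from the bytes themselves, so any tampering changes the address.

```python
import hashlib

def content_address(data: bytes) -> str:
    # Simplified stand-in for an IPFS CID: the address is a
    # cryptographic hash of the content itself.
    return hashlib.sha256(data).hexdigest()

original = b"Official statement, October 5, 2020."
tampered = b"Official statement, October 6, 2020."

# Even a one-byte change yields a completely different address.
print(content_address(original))
print(content_address(tampered))
print(content_address(original) == content_address(tampered))  # False
```

This property, content addressing, is what makes the hash usable as a label: it is not assigned by an authority, it is computed from the media itself.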
A verified source would publish, in online and print outlets, the hash of the content it has stored on IPFS. This could also serve as the basis for a reputation system for that content.