The other day I was looking through the news and stumbled across an article about a Ukrainian YouTuber whose face and voice are being used to sell products on some Chinese social media platforms. While current laws don’t allow you to trademark your face (copyright only applies to man-made creative works), that might change in the near future as a result of situations like this.

This conversation has been ongoing for a while, but the focus has historically been on notable political figures and large-scale influencers. Typically, these people already have a verified presence on some platform where people can interact with them, which gives them the opportunity to call out deepfake content as fake. Sure, damage can still be done, but at least they have access to a large audience and can dismiss such content quickly. Small creators don’t have that luxury. They might not have a large personal-brand presence, but they do have a personal identity that could be put in jeopardy.

For the most part, deepfakes still have some tells that make them relatively easy to spot (though, who really knows if some of the content I’ve deemed real was deepfaked). Time will only continue to reduce the imperfections people can spot, until it’s virtually impossible to tell the difference between deepfaked content and real content. I just hope that we get some kind of countermeasures in place before we reach that point. There is already work on embedding “AI watermarks” in AI-generated content, and I’m curious to see how it develops.