First, I'd like to thank all of my readers -- existing and new ones. Some have shared insightful comments on blog posts. Second, the last post of 2018 features a topic we will probably hear plenty about during 2019: artificial intelligence (AI) technologies.
"... retailers seem much more bullish on artificial intelligence, with 7% already using some form of AI in digital assistants or chatbots, and most (64%) planning to have implemented AI within the next three years, 21% of those within the next 12 months. The top reason for using AI in retail is personalization (42%), followed by pricing and promotions (31%), landing page optimization (15%) and fraud detection (21%)."
Like any other online (or offline) technology, AI can be used for good and for bad. The good guys and bad actors both have access to AI technologies. MotherBoard reported:
"There’s a video of Gal Gadot having sex with her stepbrother on the internet. But it’s not really Gadot’s body, and it’s barely her own face. It’s an approximation... The video was created with a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together."
You may remember Gadot from the 2017 film, "Wonder Woman." Other actors have been victims, too. Where do bad actors get tools to make AI-assisted fake porn? The fake porn with Gadot was:
"... allegedly the work of one person—a Redditor who goes by the name 'deepfakes'—not a big special effects studio... deepfakes uses open-source machine learning tools like TensorFlow, which Google makes freely available to researchers, graduate students, and anyone with an interest in machine learning. Like the Adobe tool that can make people say anything, and the Face2Face algorithm that can swap a recorded video with real-time face tracking, this new type of fake porn shows that we're on the verge of living in a world where it's trivially easy to fabricate believable videos of people doing and saying things they never did... the software is based on multiple open-source libraries, like Keras with TensorFlow backend. To compile the celebrities’ faces, deepfakes said he used Google image search, stock photos, and YouTube videos..."
"... an anonymous online community of creators has in recent months removed many of the hurdles for interested beginners, crafting how-to guides, offering tips and troubleshooting advice — and fulfilling fake-porn requests on their own. To simplify the task, deepfake creators often compile vast bundles of facial images, called “facesets,” and sex-scene videos of women they call “donor bodies.” Some creators use software to automatically extract a woman’s face from her videos and social-media posts. Others have experimented with voice-cloning software to generate potentially convincing audio..."
This is beyond bad. It is terrifying.
The implications: many. Video, including speeches, can easily be faked. Fake porn can be used as a weapon to harass women and/or to discredit accusers of sexual abuse or battery. Today's fake porn could be tomorrow's fake videos and fake news used to discredit others: politicians, business executives, government officials (e.g., judges, military officers), members of minority groups, or activists. This places a premium upon mainstream news outlets to provide reliable, trustworthy news. This places a premium upon fact-checking sites.
The consequences: several. Social media users must first understand that they have made themselves vulnerable to these threats. Parents have made both themselves and their children vulnerable, too. How? The photographs and videos you've already uploaded to Facebook, Instagram, dating apps, and other social sites are source content for bad actors. So, parents must not only teach teenagers how to read terms-of-service agreements and privacy policies, but also how to fact-check content to avoid being tricked by fake videos.
This means all online users must become skilled consumers of information and news: read several news sources, verify claims, and fact-check items. Otherwise, you are likely to be fooled... duped into joining or contributing to a bogus cause... tricked into voting for someone you otherwise wouldn't. It also means social media users must carefully consider their photographs before posting them online, and whether the social app or service truly provides effective privacy.
It also means that social media users should NOT retweet or re-post every sensational item in their feeds and inboxes without fact-checking it first. Otherwise, you are part of the problem. Be part of the solution.
Video advertisements can easily be faked, too. So, it is in the interest of consumers, companies, and government agencies both to find solutions and to upgrade online privacy and digital laws -- which seem to constantly lag behind new technologies. There probably need to be stronger consequences for offenders.
"In order to maximize positive outcomes [from AI], organizations should hire ethicists who work with corporate decision-makers and software developers, have a code of AI ethics that lays out how various issues will be handled, organize an AI review board that regularly addresses corporate ethical questions, have AI audit trails that show how various coding decisions have been made, implement AI training programs so staff operationalizes ethical considerations in their daily work, and provide a means for remediation when AI solutions inflict harm or damages on people or organizations."
These recommendations seem to apply to social media sites, which are high-value targets for bad actors wanting to post fake porn or other fake videos. This raises the question: which social sites have AI ethics policies, and/or have hired ethicists and related staff to enforce such policies?
To do nothing seems unwise. Sticking our collective heads in the sand regarding new threats seems unwise, too. What issues concern you about AI-assisted fake porn or fake videos? What solutions do you want?