Nick Nigam

7 November 2020

10 min read

When it comes to content in today’s world, what you see is not necessarily what you get. The widespread availability of affordable and powerful computing, coupled with the democratization of artificial intelligence (AI), has led to the rapid growth of synthetic media — that is, media created or modified using AI.

Like any new technology, synthetic media can be used either beneficially or maliciously.

While synthetic media has tremendous positive potential for commercial and social applications, it is important to understand its benefits, mitigate its dangers, and consider the ethical implications of its use.

Contrary to much of the media narrative to date, synthetic media has numerous positive commercial applications, whether it be to instantly create corporate or educational videos, edit the audio of a recorded podcast, provide realistic artificial voices for game studios, have an embodied AI avatar watch and coach you through a workout session, or create simulated datasets for training healthcare applications to analyze MRIs for cancers. It has added frames and color to allow us to relive a French snowball fight filmed in 1896, and it is now also being used for social causes, such as protecting the privacy of vulnerable or oppressed individuals.

A recent HBO documentary, Welcome to Chechnya, for example, explores the persecution of LGBTQ people in Russia. The filmmakers use synthetic media technology to protect the identities of victims willing to appear on camera by overlaying their actual faces with those of actors.

Considering the risks

But in order for the full potential of synthetic media as a positive force to be realized, the avenues for its misuse need to be understood and, to the extent possible, neutralized.

The greatest inherent danger in the use of synthetic media is its potential to deceive and, by weaponizing deception, to target vulnerable groups and individuals with schemes to influence, extort, or publicly damage them. In particular, deepfakes, a subset of synthetic media generally referring to videos or images in which the person depicted has been replaced with another person’s likeness for misinformation or harm, are increasingly being deployed.

The explosion in such videos underscores the scope of the problem. From 2019 to 2020, the number of deepfake videos available online increased by 900 percent. Since 2017, videos identified as digitally altered have been viewed 5.4 billion times on YouTube alone.

The most common weaponization of deepfakes involves pornographic images in which real faces are transposed onto other people’s bodies. But while “revenge porn” examples tend to lurk somewhat below the surface, deepfakes are also appearing in the mainstream media.

For example, the faces and voices of politicians and world leaders are being manipulated with the intent of destabilizing governments or extorting corporations. In 2019, bad actors used AI-based software to synthesize the voice of a German company’s CEO in a successful effort to have $244,000 wired to them.

Future deepfake scenarios might use a synthesized executive’s voice to deliver a fabricated quarterly earnings report or announce the false result of a drug trial, which could destabilize financial markets.

Misinformation and disinformation have existed for a long time. Most recently, for example, we’ve been battling ‘cheap’ or ‘shallow’ fakes that use simple but devastatingly effective tricks like mislabeling content to discredit activists or spreading false information via social media.

“We’re now at a new tipping point,” argues Nina Schick, author of Deep Fakes and the Infocalypse. “If you think about big technological inventions that have transformed our information ecosystem, broadly speaking society has had a little more time to get used to them and catch up. Between the invention of the printing press and modern photography, there were 400 years. But if we look at the last 30 years, it's been so quick, we haven’t been able to understand or catch up with the internet, smartphones, social media, and now increasingly synthetic media. We are facing an unparalleled crisis of bad information. Not because technology is inherently bad, but because it is an amplifier of human intention.”

Once a fake has been seen or heard, even with subsequent corrections and retractions, it becomes difficult to mitigate its influence or erase the damage, given the many polarized information channels in the world today. The proliferation of deepfakes also lends credence to those attempting to discredit the truth, a phenomenon known as the Liar’s Dividend: in the age of “fake news,” reality itself becomes fungible.

Less talked about are the dangers synthetic media poses to online security, such as its ability to circumvent authentication safeguards and facilitate phishing attacks. These types of assault are increasingly effective and can result in costly security breaches.

"The number of people working to develop convincing fakes outnumbers those working to detect them by a factor of 100 to 1." - Hany Fareed, computer science professor, University of California Berkeley School of Information
Mitigating the risks

Much research and thought have been devoted to developing methods for countering synthetic media’s potential to do damage. And whilst there is no silver bullet, a combination of technological, regulatory, educational, and network solutions is emerging.

Technology

Major players in information technology — such as the Defense Advanced Research Projects Agency (DARPA), Google, and Facebook, as well as numerous startups — are already working on technology to incorporate provenance into media assets, identify synthetically created content, and mitigate potential harmful effects.
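
To make the idea of provenance concrete: at its simplest, a publisher can cryptographically hash a media file at export time and sign that digest, so anyone holding the publisher’s public key can later check whether the asset has been altered. The sketch below is a minimal illustration of that general principle, not a description of any of the above organizations’ actual systems (standards such as C2PA embed far richer signed metadata); the function names and the choice of Python’s cryptography library are illustrative assumptions.

```python
# Minimal illustrative sketch of media provenance (not any vendor's actual
# scheme): hash the media bytes and sign the digest, so a viewer holding the
# publisher's public key can verify the asset was not altered after export.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(media_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Sign the SHA-256 digest of the media content."""
    digest = hashlib.sha256(media_bytes).digest()
    return private_key.sign(digest)


def verify_media(media_bytes: bytes, signature: bytes,
                 public_key: Ed25519PublicKey) -> bool:
    """Return True if the content still matches the publisher's signed digest."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


# Hypothetical usage: sign at export time, verify at consumption time.
publisher_key = Ed25519PrivateKey.generate()
clip = b"...rendered video bytes..."
signature = sign_media(clip, publisher_key)

print(verify_media(clip, signature, publisher_key.public_key()))                 # True
print(verify_media(clip + b"tampered", signature, publisher_key.public_key()))   # False
```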

However, as important as they are, defensive strategies are usually reactive rather than proactive, and can often lag behind new technological methods of creating or altering content. According to Hany Farid, a computer science professor at the University of California Berkeley School of Information, the number of people working to develop convincing fakes outnumbers those working to detect them by a factor of 100 to 1.

Relying on technology as the sole problem-solving infrastructure might also constitute "technological solutionism", an ideology endemic to Silicon Valley that, according to Evgeny Morozov, reframes complex social issues as "neatly defined problems with definite, computable solutions … if only the right algorithms are in place!" It might therefore also distract from the more fundamental problems we have with the platforms on which deepfakes propagate or the algorithms that surface them in our feeds.

Regulation

On the regulatory side, the National Defense Authorization Act of 2020 constitutes the first law in the U.S. with provisions related to “machine manipulated media.” The new law requires the Director of National Intelligence to submit an unclassified report to Congress on the foreign weaponization of deepfakes and to notify Congress of foreign disinformation activities targeting U.S. elections.

The U.S. Congress is also considering legislation to support more research into “synthesized content.” Meanwhile, Virginia recently became the first state in the U.S. to impose criminal penalties on the distribution of non-consensual deepfake pornography.


While many technology professionals concede that the government will need to play some role in protecting the public from unethically employed synthetic content, there is also a widespread belief that legislators and regulators lack the expertise and up-to-date information necessary to craft effective safeguards without hindering innovation. Government action also tends to lag behind technological realities.

Education

Due to the challenges of using technology or regulation to counter the destructive effects of synthetic media, education may be the most readily effective tool at hand. The more aware the public is of the technology and its uses, the better equipped people will be to think critically about the media they consume and to apply caution where needed.

Finland offers a case example. After seeing the damage done by fake news spread from neighboring Russia, the country instituted multi-platform information literacy and strong critical thinking as a core, cross-subject component of a national curriculum introduced in 2016.

Students are taught how statistics can be used to deceive, how images can be manipulated, the history of propaganda campaigns, and the importance of universal values upheld by Finnish society: fairness, the rule of law, respect for others’ differences, openness, and freedom. This last point helps ensure that students feel part of a broader community rather than ostracized into the extreme groups in society where conspiracy theories tend to propagate. As a result, a 2019 study found that Finnish pupils are much better at identifying fake news than their US counterparts.

Education is also important for combating other uses of deepfakes, such as phishing, so that individuals know to defend themselves with more robust authentication measures such as two-factor authentication.

Networks

The development and deployment of an effective combination of technological, regulatory, and educational solutions can be aided by alliances and a networked approach, with relevant stakeholders developing a conceptual framework around the problem and implementing agreed-upon solutions. Such alliances might include established and startup technology firms, academia, and government.

For example, The New York Times, Twitter, and Adobe are spearheading an effort to confirm the authenticity and provenance of online content. Microsoft has teamed up with the BBC and an international coalition of other media organizations to support Project Origin, an initiative to place digital watermarks on media originating from authentic content creators. And the DeepTrust Alliance has convened multi-industry stakeholders to fight digital disinformation and deepfakes from an industry and business perspective.

A proactive approach by startups is encouraged

Given the lack of comprehensive solutions to date, it is important for founders, business managers, and investors to proceed with caution and take an honest and proactive approach to addressing the potential for misuse.


One such method would be to emphasize factors such as provenance and detection. For example, the startup Modulate, which creates voice avatars, integrates inaudible watermarks into all the audio generated by its software for identification, tracking, and tracing purposes.
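
Modulate has not published the details of its watermarking scheme, but the underlying idea of an inaudible, machine-readable identifier can be illustrated with a toy example: hiding an ID string in the least significant bits of 16-bit audio samples. The sketch below (in Python with NumPy, with invented function names) is purely illustrative; production watermarks are designed to survive compression, resampling, and deliberate removal, which this toy version would not.

```python
# Toy illustration of an inaudible audio watermark: hide an identifier in the
# least significant bit of 16-bit PCM samples. Real systems are far more
# robust; this only conveys the concept of a traceable tag inside the signal.
import numpy as np


def embed_watermark(samples: np.ndarray, payload: bytes) -> np.ndarray:
    """Write the payload bits into the LSBs of the first len(payload)*8 samples."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    marked = samples.astype(np.int16)          # work on a copy of the audio
    marked[: bits.size] = (marked[: bits.size] & ~1) | bits
    return marked


def extract_watermark(samples: np.ndarray, n_bytes: int) -> bytes:
    """Read the payload back out of the LSBs."""
    bits = (samples[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()


# Hypothetical usage: tag one second of generated speech with a traceable ID.
payload = b"voice-id-042"
audio = (np.random.randn(16000) * 1000).astype(np.int16)  # stand-in for synthetic audio
tagged = embed_watermark(audio, payload)
print(extract_watermark(tagged, len(payload)))  # b'voice-id-042'
```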

Another possibility is to limit the usage of the technology itself. Reface, a leading face-swapping app, for instance, purposely restricts the use of its technology in order to prevent the creation of problematic or malicious content. Reface’s founder Dima Shvets believes the company bears responsibility for the technology it brings into the world. It has therefore instituted limitations in the app such that “users are not able to upload their own videos but must currently work with a selection of preloaded celebrity clips and GIFs curated so that no reasonable person can mistake them as the real thing. Any future user-generated content would go through a moderation process.”

Companies should also be hypersensitive to the data used to train their algorithms, lest they find themselves embroiled in digital rights litigation. Kathryn Harrison, founder and executive director of the DeepTrust Alliance, advocates for “rules about what images go into the datasets and creation of deepfakes, and what consents and monetization models are out there.”

Harrison believes developers should build systems and platforms to understand what data goes into the trained algorithms and should be transparent in those efforts.

One example of transparency comes from Synthesia, an AI video generation platform that publicly shares its own code of conduct, an approach many others in the space have also adopted.

Developing responsible relationships with customers around how their technology is used is another way for companies to control its application.

DeepZen, a developer of lifelike synthetic audio, incorporates clauses in its customer contracts in an effort to ensure that its tech won’t be misused for political or other causes, and asks its partners to agree to the same ethical code. Hour One, which provides synthetic avatars based on real-life people, stipulates in its contracts that customers must indicate somewhere in the video frame that the content is computer generated.

As stated above, cooperation can also play a role in mitigating risk. Responsible producers will take a networked approach, working together with other stakeholders to define and adhere to standards for content and authenticity.

Looking forward

The malicious use of emerging technologies is nothing new. The printing press greatly expanded people’s ability to circulate information and ideas, acting as an agent of change in the societies it reached and earning partial credit for the spread of democracy and the Enlightenment. But it also put the power to disseminate misinformation in the hands of more people than ever before, contributing to chaos and harm.

The advent of photography provided a way to document our world. The power of photography awakened the world to the horrors of the Holocaust and brought the Vietnam War into people’s homes, helping to foment the political dissent that ended that conflict. But the camera also provided the illusion of quantifiable benchmarks, an irresistible proposition for the advocates of eugenics. And today the term “photoshopping” is common parlance, indicating a near-universal recognition that truth is elusive and uncertain.

Synthetic media has advanced rapidly in recent years. Its widespread use and potential to deceive are reasons for concern. But synthetic media is here to stay, and the real question now is how best to positively harness the power of this technology, while mitigating the risks it poses.

Nick Nigam
Principal, Samsung Next Ventures

In his role as Principal at Samsung NEXT Ventures Europe, Nick focuses on early-stage investments in and acquisitions of startups. Whilst heavily involved in establishing and building out the Samsung NEXT Europe team, Nick is keen to change the image of CVCs and has already worked on investments in and acquisitions of startups such as LiquidSky, Spatial, AdGear, Unbabel, and Grover.