Nick Nigam

6 November 2020

3 min read

Consumer protection in synthetic media shields people from threats like deepfakes, identity theft, and unauthorized biometric data collection. Synthetic generation of photorealistic avatars and actors can deliver substantial savings in cost, labor, and time, but it carries considerable risks, notably the creation of fraudulent content such as deepfakes.

Deepfakes are a subset of synthetic media, generally videos or images in which the depicted person has been replaced with another person’s likeness for misinformation or harm. Many consumer protection companies are automating the detection and removal of harmful deepfake content, as well as authenticating photos and videos at the time of capture.

As part of its focus on the synthetic media landscape, Samsung Next identified five leading companies developing video authentication or identity protection software to safeguard consumers.

Brighter AI's mission is to “protect every identity in public” through image and video anonymization. One of its products, Precision Blur, automates the blurring of human faces. Meanwhile, its Deep Natural Anonymization product is a generative-AI privacy solution that produces new images or videos from existing ones, replacing identifying elements, such as faces and license plates, with synthetic, natural-looking overlays.

Customer Profile:
B2B, Automotive, Smart City, Data Labeling, Retail

D-ID’s AI technology protects photos from identification by facial recognition systems by altering key human characteristics. The software adjusts the size, shape, or color of recognizable features so that a person remains familiar but is no longer identifiable. The company’s smart video anonymization product anonymizes faces while preserving key attributes, enabling the collection of anonymized data that complies with privacy regulations. D-ID’s software has been used for face de-identification in the production of commercials and documentaries.

Customer Profile:
B2B, Automotive, Healthcare, Smart Cities, CCTV, Media, Entertainment

Sensity is a visual threat intelligence company. It provides individuals and organizations with solutions for detecting, monitoring, and countering threats posed by deepfakes. The company's technology detects visual threats targeting individuals and organizations in real time, and provides customers with threat analysis and severity assessment alerts. Its real-time API combines computer vision and video forensics to deliver accurate, scalable deepfake detection that can be integrated with applications like YouTube, dating apps, and social media.

Customer Profile:
B2B, B2C, Identity Protection, Threat Detection, Fraud Detection, Individual Fraud Detection, Enterprise

Sentinel has developed an AI detection and authentication software platform for government agencies, media outlets, and defense organizations. The software analyzes video footage to determine whether the media was generated with AI or authentically sourced. If the software determines that a video contains deepfake content, it visualizes the AI manipulations through linear graphs marking the degree of certainty about the video's authenticity. It does this by running the media in question through four different detection methods, each producing its own certainty rating, with the final score being the average of the four. Sentinel was founded by ex-NATO and cybersecurity experts and is focused on developing technology for internet authentication and security.
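Sentinel's actual scoring pipeline is proprietary; purely as an illustration of the averaging step described above, a minimal Python sketch might look like this (the function name, the 0-to-1 certainty scale, and the example scores are all assumptions, not details from Sentinel):

```python
def combine_detector_scores(scores):
    """Average per-detector authenticity certainties (assumed 0-1 scale)."""
    # The article describes four detection methods, each producing its
    # own certainty rating; the final number is their average.
    if len(scores) != 4:
        raise ValueError("expected one score per detection method (4 total)")
    return sum(scores) / len(scores)

# Hypothetical outputs from four independent detectors:
ratings = [0.92, 0.85, 0.78, 0.95]
overall = combine_detector_scores(ratings)
print(f"overall authenticity certainty: {overall:.3f}")  # 0.875
```

Averaging independent detectors is a simple ensemble technique: a single method can be fooled by a particular manipulation, but agreement across several methods gives a more robust overall rating.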

Customer Profile:
B2B, Defense, Security, Government Agencies, Media Outlets

Cyabra is a deep learning and natural language processing (NLP) company that monitors media, detects and identifies the use of deepfake technology, and removes deepfake threats. Cyabra's process identifies patterns of fake profiles, commonly known as bots or trolls. Data collected with Cyabra is transformed into visual metrics, such as charts, graphs, and trend lines, that reveal the components of a fake campaign: the number of fake users, the amount of money invested in the threat, and the likely source of the fake activity. The data can also indicate whether a false campaign is influencing users on social media, whether the information is defamatory, and whether the effort was sponsored by a company or agency or was independent.

Customer Profile:
B2B, Enterprise, Marketing, Security, Threat Detection

What it all means

Synthetic media consumer protection companies address the threat of deepfakes and manipulated media, which can be weaponized to harm vulnerable individuals, public figures, companies, or the public at large. These companies are using AI to identify synthesized content and to anonymize biometric data, safeguarding consumers while helping to protect major technological advancements in synthetic media.

To learn more, visit