AI-Powered Kissing Apps Raise Concerns About Consent and Deepfake Normalization

A new wave of AI-powered apps is flooding social media platforms like TikTok and Instagram, letting users create fake videos of people kissing without their consent. Marketed as tools to "kiss anyone you want," these apps raise serious ethical and safety concerns about the normalization of deepfakes and the potential for misuse.

The apps work by letting users upload photos of any two individuals, which the AI then uses to generate a video of them kissing. While the resulting videos may not be explicitly sexual, experts warn they can be just as harmful as AI-generated pornography, because they depict people engaging in intimate acts without their knowledge or permission. Haley McNamara, an executive at the National Center on Sexual Exploitation, emphasizes that "it does not have to be explicit to be exploitative." Creating non-consensual intimate content, whether it shows kissing or undressing, is a clear boundary violation.

A Forbes investigation revealed that Meta ran over 2,500 ads for these "AI kissing" apps across Facebook and Instagram, with about 1,000 still active. TikTok also displayed around 1,000 such ads to European users. Many of the ads featured celebrities like Scarlett Johansson, Emma Watson, and Gal Gadot, raising concerns about the unauthorized use of their likenesses. While Meta maintains that the ads do not violate its policies, TikTok removed them after being contacted by Forbes, citing its policy requiring consent for ads featuring public figures.

Beyond kissing apps, Meta also promoted "AI hugging" apps, showing AI-generated videos of children hugging cartoon characters and even deceased relatives. These ads, while seemingly innocuous, contribute to the normalization of AI-generated intimate content and raise questions about the potential for manipulation and exploitation of vulnerable populations, including children.

The widespread availability of these apps, coupled with social media’s viral nature, is rapidly mainstreaming deepfake technology. Experts fear that this trend could pave the way for more harmful applications, such as the creation of deepfake pornography and other forms of image-based sexual abuse. McNamara describes the situation as “an absolute Pandora’s box,” highlighting the potential for escalating misuse.

The ease with which these apps can create realistic fake videos raises serious concerns about the erosion of trust in online content. As deepfake technology becomes more sophisticated and accessible, distinguishing real from fake grows increasingly difficult, potentially enabling widespread misinformation and the manipulation of public opinion.

The proliferation of AI-generated child sexual abuse material (CSAM) further underscores the dangers of this technology. The National Center for Missing and Exploited Children (NCMEC) has reported a significant increase in reports of AI-generated child exploitation material in recent years. This alarming trend highlights the urgent need for stronger regulations and safeguards to prevent the misuse of AI technologies for harmful purposes.

The normalization of non-consensual intimate imagery through these apps trivializes serious violations of privacy and consent. While some may dismiss these apps as harmless fun, it is crucial to recognize the potential for harm and the broader implications for online safety and trust. The widespread availability and promotion of such apps contribute to a culture where the manipulation and exploitation of individuals through AI-generated content becomes increasingly acceptable.

The escalating prevalence of AI-generated intimate content, fueled by social media platforms and readily available apps, demands immediate attention. Stronger policies and regulations are needed to protect individuals from non-consensual exploitation and to prevent the further normalization of deepfakes. The potential for misuse of this technology is vast, and mitigating the risks requires a proactive approach before they escalate further. Responsibility lies with both social media companies and app developers to prioritize ethical considerations and implement robust safeguards against the creation and dissemination of non-consensual intimate content. Greater public awareness and education are also crucial to combating this trend and promoting responsible use of AI technologies.
