The Deep Impact of Fake Media on Women’s Cybersafety

By Rakesh Maheshwari and Atulya Gupta

Early last year, the digital world was taken aback when sexually explicit deepfakes of American pop star Taylor Swift went viral on X (formerly known as Twitter). Within just 19 hours, these manipulated images amassed over 27 million views and more than 260,000 likes before the account responsible was suspended. The incident underscores the evolving technology landscape and the online risks that come with it, particularly for women.

Artificial Intelligence (AI) has brought about remarkable advancements across various sectors such as education and healthcare. However, as is true for every new technology, AI comes with its own set of challenges, deepfakes being one such example. Deepfakes are AI-generated media in which a person in an existing image or video is replaced with someone else’s likeness. These have not only facilitated the spread of misinformation but have also led to breaches of privacy and harassment, particularly targeting women. 

Gender-based Targeting and Sexual Exploitation: Deepfakes are more than just digital trickery; they have real-world consequences. A 2019 study by Deeptrace, an Amsterdam-based cybersecurity company, highlighted the gendered nature of this threat. Analysing content from the top five deepfake pornography websites, the study found the following results:

  • 100% of the deepfake pornography targeted women. 

  • 99% of these women were actresses and musicians, while the remaining 1% were from the news and media sector. 

  • The largest share of targets was from the USA (41%), with a small percentage from India (3%).

The following incidents evince the multi-faceted impact of deepfakes on women. 

  • Celebrities: Indian celebrities have also fallen prey to such exploitation. Just last year, a video of actor Rashmika Mandanna entering a lift in a swimsuit went viral. However, this was a deepfake, with the original video featuring British actress Zara Patel.

  • Pornography Industry: Deepfake technology has been used in the pornography industry to create explicit content involving celebrities, further exacerbating issues of consent and exploitation.

In the recent past, an account on a major deepfake sexual abuse website posted 12 celebrity videos based on footage from GirlsDoPorn, a defunct site involved in sex trafficking. These videos, which gained up to 15,000 views each before being taken down following a WIRED magazine inquiry, used a face-swapping tool to add celebrity faces to the original footage. 

From 2012 to 2019, the creators of GirlsDoPorn ran a sex trafficking operation, recruiting young women through Craigslist advertisements for supposed modelling photoshoots. Once the women responded, they were coerced into making pornographic videos. The perpetrators falsely assured them that the videos would only be sold on DVDs outside the U.S. but instead posted clips online, including on Pornhub, and full-length videos on their website. 

Revenge Pornography: While deepfakes are artificially generated, a disturbing trend in recent times is revenge pornography. It involves sharing intimate images or videos, originally exchanged in private, without the consent of the person depicted, often by disgruntled ex-partners or by malicious individuals seeking to damage the victim’s reputation. In 2021, a man from Kochi was arrested for posting revenge pornography on Facebook, seeking to “avenge” the end of an extramarital affair.

Revenge pornography has also been weaponized to target women from specific communities and suppress their voices in public fora. In 2021-22, two apps called “Sulli Deals” and “Bulli Bai” (Sulli and Bulli are slurs for Muslim women) surfaced in India and were used to “auction” Muslim women online, using personal photos taken from their social media profiles without consent. These apps targeted vocal female Muslim activists, actors, journalists, and politicians, leading some of them to delete their social media accounts altogether.

Psychological Impact: The psychological impact of deepfakes and revenge pornography on women, particularly within the conservative fabric of Indian society, can be devastating. The invasive and violating nature of such media triggers intense anger and distress, and victims experience a significant drop in self-confidence. The trauma inflicted can lead to a pervasive sense of fear, as women worry about continued exploitation and its repercussions.

In a society where sex remains a taboo subject, victims frequently lack parental support, exacerbating their despair. This isolation, compounded by the relentless public scrutiny and judgement, can push victims into severe depression. In the absence of effective recourse and support, some women, feeling utterly hopeless and overwhelmed, tragically resort to suicide as a final escape from their suffering.

A 2020 survey by The Economist Intelligence Unit across 51 countries reported the following findings: 

  • 92% of women reported that online violence harmed their sense of well-being. 

  • 50% of women felt that the internet is not a safe space to share their thoughts, leading to self-censorship and reduced participation in public discourse. 

  • The emotional and mental health effects were severe: 43% of women felt unsafe and 35% experienced emotional harm.

  • The impact was not limited to the digital space: 10% of women experienced physical harm and 7% faced job loss.

Safeguards and Other Measures

  • Legal Safeguards: To combat these perils, various legal safeguards have been put in place:

  1. The Information Technology Act, 2000 addresses issues like cheating by impersonation (Section 66D), capturing or publishing private images of a person without consent (Section 66E), and circulating obscene or sexually explicit material online (Sections 67 and 67A).

  2. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 require intermediaries to appoint Grievance Officers [Rule 3(2)(a)] and take down reported deepfakes or intimate images and videos within 24 hours [Rule 3(2)(b)].

  3. The Digital Personal Data Protection Act, 2023 imposes an obligation on tech platforms to secure personal data [Section 8(5)].

  • Helplines: In case of distress or an emergency, women can call 112 (police, ambulance, fire), 181 (the women’s helpline) or 1930 (the cybercrime helpline).

  • Awareness: The Ministry of Electronics and Information Technology conducts various training and awareness programmes under the Information Security Education and Awareness (ISEA) initiative. 

  • StopNCII: Stop Non-Consensual Intimate Image Abuse (StopNCII) is a platform for people who are concerned that their intimate images may be leaked, or who have already been victims of such leaks. The platform generates digital fingerprints (hashes) of the images on the user’s own device, so the images themselves are never uploaded; participating companies use these hashes to identify and remove matching content from their platforms.
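
The matching behind such systems relies on perceptual hashing: only a short digital fingerprint of the image is shared, never the image itself. Below is a minimal, hypothetical sketch of hash-based matching using the open-source Python library imagehash; it is not StopNCII’s actual pipeline, and the file paths and distance threshold are illustrative assumptions.

    # Illustrative sketch of hash-based image matching, in the spirit of
    # StopNCII-style systems. NOT StopNCII's actual pipeline; the paths and
    # the distance threshold below are hypothetical.
    import imagehash            # pip install ImageHash
    from PIL import Image       # pip install Pillow

    def fingerprint(path: str) -> imagehash.ImageHash:
        """Compute a perceptual hash of an image. Only this short fingerprint,
        not the image itself, would need to leave the user's device."""
        return imagehash.phash(Image.open(path))

    # The person at risk generates a hash of their private image on their own device.
    reported_hash = fingerprint("my_private_photo.jpg")    # hypothetical path

    # A participating platform compares hashes of uploaded content against the
    # reported hash; a small Hamming distance indicates a likely match.
    uploaded_hash = fingerprint("uploaded_content.jpg")    # hypothetical path
    if reported_hash - uploaded_hash <= 5:                  # threshold is illustrative
        print("Likely match: route for review and removal")

Matching on hashes rather than on images also lets platforms catch re-uploads and lightly edited copies without ever receiving the original image.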

Staying abreast of the relevant laws, helplines and reporting mechanisms is crucial!

Combating Deepfakes

  • AI Tools: Interestingly, AI itself offers solutions for countering deepfakes. Tools like Intel’s FakeCatcher, Sentinel, Deepware AI, Sensity AI, and Microsoft’s Video Authenticator are designed to detect manipulated media with high accuracy. From picking up blood-flow signals in the pixels of a video to spotting grayscale artefacts invisible to the human eye and tell-tale blending boundaries, these tools employ varied techniques to flag deepfakes (a simple structural sketch of such a detection pipeline appears after this list).

  • Initiatives by Tech Platforms: Social media platforms have also taken steps to address the issue. Users can report deepfakes as sexually explicit content; the reported material is then reviewed and removed. Platforms such as Facebook, Instagram, and YouTube have also put in place policies and deployed tools to curb the spread of deepfakes and to prevent removed content from being republished.
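
To make the structure of such detectors concrete, the following is a minimal, hypothetical sketch of a frame-sampling video pipeline in Python. It is not the method of FakeCatcher, Sentinel, or any other named tool; the score_frame function is a placeholder where a real system would run its trained model (for example, one trained on blood-flow or blending-boundary cues).

    # Hypothetical sketch of a frame-level deepfake screening pipeline.
    # Not the method of any named tool; score_frame is a placeholder.
    import cv2  # pip install opencv-python

    def score_frame(frame) -> float:
        """Placeholder detector: should return a probability in [0, 1] that the
        frame is synthetic. A real tool would run its trained model here."""
        raise NotImplementedError("plug in a trained detection model")

    def screen_video(path: str, sample_every: int = 10, threshold: float = 0.5) -> bool:
        """Sample frames from the video, score each sampled frame, and flag the
        video if the average score exceeds the threshold."""
        capture = cv2.VideoCapture(path)
        scores, index = [], 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            if index % sample_every == 0:
                scores.append(score_frame(frame))
            index += 1
        capture.release()
        return bool(scores) and (sum(scores) / len(scores)) > threshold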

About Rakesh Maheshwari

Mr. Maheshwari is a former government officer who worked for more than 35 years in the Ministry of Electronics and Information Technology (“MeitY”). He holds a degree in Electronics and Communication Engineering from the Delhi College of Engineering.

During his tenure at the Ministry, he handled matters related to the Information Technology (IT) Act, Personal Data Protection Bill, regulation of social media, cyber security policies, Aadhaar Act, among others. He has also served as the Group Coordinator for Cyber Law, Cyber Security, CERT-In and UIDAI. 

With MeitY being the custodian of the IT Act, Mr. Maheshwari was instrumental in the development of the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 and their subsequent amendments. He is well known for his work on online safety and played a key role in the launch of the National Cyber Crime Reporting Portal, among other initiatives of the Government of India.


About Atulya Gupta

Atulya Gupta is the Head of Public Policy and Advocacy at Kaio, a public policy and strategic communications firm. A graduate of National Law University Delhi, he began his career in the business and legal affairs team at Clean Slate Filmz, where he worked on critically acclaimed projects such as Kohrra and Paatal Lok. Passionate about the intersection of cinema, law, and policy discourse, Atulya is particularly committed to crafting culturally rooted and creative approaches to policy campaigns that drive social impact.