
New Delhi, April 16, 2026 – In the early months of 2026, the digital storefronts that power the world’s smartphones are facing a moral and regulatory reckoning. Despite years of “strict” policies prohibiting sexually explicit content, a series of explosive investigations has revealed that Apple’s App Store and the Google Play Store have hosted over 100 “nudify” apps—AI-powered tools designed to digitally “undress” people without their consent.
The scale of the problem is staggering. According to a landmark report released by the Tech Transparency Project (TTP) in early 2026, these apps have been downloaded more than 705 million times and have generated an estimated $117 million in lifetime revenue. Most damning of all, Apple and Google, which take a standard 15% to 30% commission on in-app purchases, have effectively profited from the proliferation of these “digital abuse” tools.
Both tech giants have long maintained clear guidelines against harmful AI. Google’s policy explicitly bars apps that “degrade or objectify people,” while Apple prohibits content that is “offensive, insensitive, or just plain creepy.” Yet, the TTP investigation found 55 such apps on Google Play and 47 on the Apple App Store.
These apps often hide in plain sight, relying on clever marketing; rather than advertise their true function, many brand themselves as benign photo tools such as “bikini editors.”
As the revenue figures—reaching upward of $122 million by mid-April 2026—continue to climb, critics are asking whether the financial benefit to the platforms has slowed their response.
“When a platform takes a 30% cut of a subscription for an app designed to violate someone’s dignity, they aren’t just a host; they are a business partner,” says Katie Paul, Director of the Tech Transparency Project.
Following the initial outcry in January 2026, Apple and Google performed a “mass purge,” removing dozens of flagged apps. However, researchers noted that new versions often reappeared within days, slightly rebranded but featuring the same underlying AI models. This “Whac-A-Mole” cycle suggests that the current review processes—largely reliant on human reports rather than proactive AI detection—are insufficient to stem the tide.
The rise of these apps is not a mere moderation lapse; it has fueled a global epidemic of non-consensual intimate imagery (NCII).
Governments are no longer waiting for Silicon Valley to self-regulate; 2026 has brought a wave of aggressive new laws targeting non-consensual AI imagery.
| Metric | Estimated Impact (as of April 2026) |
| --- | --- |
| Total Identified Apps | 102+ |
| Total Downloads | 705 Million+ |
| Total Revenue Generated | $122 Million |
| Apple/Google Commission | ~$35 Million |
| Reported Teen Exposure | 11% of U.S. Teens (via Thorn Research) |
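The commission estimate in the table is consistent with simple arithmetic. As a rough illustration, assuming the platforms’ standard 30% rate applied to the roughly $117 million in lifetime revenue identified in the TTP report (the actual blend of 15% and 30% rates across these apps is not public):

$$0.30 \times \$117\ \text{million} \approx \$35\ \text{million}$$

The true figure would be somewhat lower if a meaningful share of that revenue flowed through developers on the 15% small-business tier.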
As of April 16, 2026, the battle continues. While Apple and Google have expanded their use of automated “safety classifiers” to flag nudify functionality during app review, the “arms race” between developers and moderators shows no sign of slowing.
Experts suggest that true change will only come when platforms are held legally liable for the content they monetize. Until then, the apps that “strip the clothes off” innocent victims remain just a search away, tucked behind a “bikini editor” icon and a 4.5-star rating.
The message from advocates is clear: Policy without proactive enforcement is not a safeguard—it is a suggestion. And for the millions of victims of AI deepfakes, that suggestion has come far too late.