The iPhone maker has removed generative artificial intelligence apps with "undressing" capabilities
Apple has removed three AI apps from the App Store, with more removals possibly to come, after certain developers promoted their products as able to create deepfake pictures that show people undressed.
Developers have been using social media to promote these products
Developers selling these AI apps have been promoting them in social media ads, particularly on Instagram. The ads reference the ability to “delete” clothing, among other similar features. When Apple spotted these generative artificial intelligence applications and the claims they were making, it removed them from the App Store.
That said, the apps weren’t spotted until after a 404 Media investigation discovered that they could indeed be used to generate fake nude pictures of real people. The investigators contacted Apple with the information, but reports suggest the iPhone maker was slow to take action. The investigators had to provide Apple with direct links to the applications on the App Store, because Apple “was not able to find the apps that violated its policy itself,” according to a statement 404 Media released on the subject.
AI apps have made deepfake images an increasingly common problem
Deepfake pictures – including nudes – aren’t anything new, particularly on social media. However, as generative artificial intelligence tools have become readily available, the problem has accelerated, affecting public figures and ordinary people alike, depending on the goal of the user.
To support victims of these images, US Rep. Alexandria Ocasio-Cortez introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act.
The DEFIANCE Act is intended to provide a “federal civil remedy” for victims of deepfakes. A companion bill was also introduced in the US Senate, but it appears to be stalled for the moment.
Apple isn’t the only company struggling to keep explicit deepfake AI apps and products under control. Google Play, Meta, and X have all attempted to curb the production and distribution of this type of image on their platforms, particularly when it involves real people. That said, X and Meta have drawn particular criticism for inconsistency in both their rules and their enforcement.