Data privacy risks behind AI art

The charm of Studio Ghibli’s dreamy animation style has found a new digital canvas: AI-generated illustrations that transform ordinary photographs into whimsical portraits. It’s a trend that has swept across the internet, delighting users with pastel filters and fantastical backdrops. But as a legal practitioner deeply involved in data privacy and emerging technologies, I must caution that this playful aesthetic conceals a far more serious legal undercurrent.

AI tools that create Ghibli-style art by processing user-submitted images are not merely engaging in creativity; they are operating at the heart of legal fault lines concerning biometric data, informed consent, identity misuse, and cross-border data transfers.

Beneath the aesthetic: What users don’t see

These AI platforms use neural style transfer algorithms and diffusion models to extract content from uploaded photos and blend it with anime-inspired styles. But a photograph is not just a visual; it is a repository of biometric identifiers, facial structure, and often hidden metadata such as GPS coordinates and timestamps.
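To appreciate just how much a single upload can reveal, consider a minimal sketch in Python using the Pillow imaging library (an illustrative assumption; the file name is a placeholder) that reads the EXIF metadata most phone cameras embed by default:

  # Minimal sketch: inspecting the hidden metadata inside an ordinary photo.
  # Assumes the Pillow library is installed; "photo.jpg" is a placeholder.
  from PIL import Image, ExifTags

  img = Image.open("photo.jpg")
  exif = img.getexif()

  # Top-level EXIF tags: camera make and model, capture timestamp, software.
  for tag_id, value in exif.items():
      print(f"{ExifTags.TAGS.get(tag_id, tag_id)}: {value}")

  # The GPS sub-directory (tag 0x8825) can disclose where the photo was taken.
  for tag_id, value in exif.get_ifd(0x8825).items():
      print(f"GPS {ExifTags.GPSTAGS.get(tag_id, tag_id)}: {value}")

On a typical smartphone photo, this prints the device details, the exact capture time, and latitude/longitude readings, none of which the user consciously chose to share.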

Despite reassuring disclaimers, most platforms remain opaque about what actually happens to the data. Is it permanently deleted? Used to train future models? Stored on third-party servers abroad? These questions remain unanswered, and they are legally consequential.

Where the law draws the line: A legal analysis

  1. Consent that isn’t informed is no consent at all

Under the European Union’s General Data Protection Regulation (GDPR) and India’s Digital Personal Data Protection Act, 2023 (DPDPA), where consent is the lawful basis for processing personal data, that consent must be explicit, informed, and freely given.

Unfortunately, most of these platforms fail to meet this threshold. Their terms of service are often vague, buried in dense legalese, or rely on “default consent” models—invalid under both GDPR Article 6 and DPDPA Sections 4 and 7.

The absence of clear opt-outs or granular control mechanisms—especially when biometric data is involved—constitutes a blatant disregard for established privacy norms.

  2. Biometric data: high-risk and heavily regulated

Facial images, when processed through technical means to extract or transform identity features, are treated as biometric and hence sensitive personal data. Processing such data without adequate safeguards invites liability under:

  • GDPR Article 9 – which prohibits biometric data processing except under strict conditions.
  • California’s CCPA/CPRA – where biometric data is recognised as “sensitive personal information”, necessitating disclosures and opt-out rights.
  • India’s DPDPA Sections 28 & 33 – which penalise unauthorised processing of sensitive personal data.

Let us be clear: transforming a selfie or picture into a Ghibli-style avatar is not a harmless act of artistic expression. It is the extraction and transformation of biometric identity, often without a sufficient legal basis.

  3. The fiction of deletion: Why the “right to be forgotten” remains elusive

Under GDPR Article 17 and DPDPA Section 12(3), individuals possess the right to erasure, the so-called “right to be forgotten”. Yet once an AI model has been trained on user images, technical erasure becomes a near impossibility unless the model is retrained, which is an impractical expectation.

Moreover, most platforms lack mechanisms for users to retract their data, delete outputs, or even verify how and where their information is stored.

  4. Identity misappropriation and personality rights

India’s constitutional jurisprudence has firmly recognised personality rights under Article 21. In Justice K.S. Puttaswamy v. Union of India, the Supreme Court upheld the right to control one’s image and digital presence. What begins as a stylised portrait may end up as marketing material, merchandise, or even manipulated content. In such cases, Sections 66C and 66D of the IT Act (identity theft and impersonation) may also be attracted.

The problem of opacity and cross-border transfers

Many of these AI platforms operate from the US or EU, while serving global users. This raises concerns under GDPR’s Chapter V and India’s DPDPA Section 16 regarding unlawful cross-border data transfers without adequate safeguards.

Users are rarely informed where their data is going, who has access, or whether third-party vendors are involved. Such systemic opacity undermines the core tenets of modern data protection regimes.

Recommendations: Towards responsible AI and legal accountability

As someone deeply engaged in litigation and policy surrounding data rights, I believe the time has come for both legislative clarity and corporate responsibility. Here’s what must be done:

  • Mandatory transparency: Platforms must disclose training data sources and whether personal content will be reused.
  • Informed, specific consent: Broad or bundled consents should be outlawed. Consent must be tailored, limited, and revocable.
  • Data minimisation and metadata scrubbing: Platforms should be required to strip metadata and store only the minimum information necessary (a sketch after this list shows how little code scrubbing takes).
  • Right to erasure enforcement: Regulators must ensure that users can meaningfully retract their data.
  • Model registries: Authorities should maintain registries of AI systems, including their training inputs and intended uses.
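
On the metadata-scrubbing point above, a minimal sketch (again in Python with Pillow, as an illustrative assumption; file names are placeholders) shows that stripping EXIF data before an image ever leaves the user’s device takes only a few lines:

  # Minimal sketch of metadata scrubbing: re-encode only the pixel data so
  # that EXIF tags (including GPS coordinates) are dropped before upload.
  # Assumes Pillow is installed; file names are placeholders.
  from PIL import Image

  img = Image.open("upload.jpg")
  clean = Image.new(img.mode, img.size)   # a fresh image carries no metadata
  clean.putdata(list(img.getdata()))      # copy the pixels only
  clean.save("upload_clean.jpg", quality=90)

That scrubbing takes a handful of lines underscores the point: retaining GPS coordinates and device identifiers is a design choice, not a technical necessity.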

Final word: The cost of convenience

The viral popularity of Ghibli-style AI art tools reveals a cultural blind spot—where convenience, creativity and virality consistently overshadow legal awareness. We must not allow the spectacle of visual delight to eclipse the sanctity of individual privacy and consent.

While AI innovation is here to stay, it must not be allowed to grow in a regulatory vacuum. If unchecked, today’s filters may become tomorrow’s surveillance tools. And what begins as digital whimsy may soon turn into legal dystopia.

Let us not forget: in a democracy governed by law, creativity is welcome—but not at the cost of rights.

Disclaimer

Views expressed above are the author's own.
