Understanding Google Play's AI-Generated Content policy

Google Play’s AI-Generated Content policy aims to ensure that AI-generated content is safe for all users and that developers incorporate user feedback to enable responsible innovation.

Overview

Developers are responsible for ensuring that their generative AI apps do not generate offensive content, including prohibited content listed under Google Play’s Inappropriate Content policies, content that may exploit or abuse children, and content that can deceive users or enable dishonest behaviors. Generative AI apps should also comply with all other policies in our Developer Policy Center.

The AI-Generated Content policy covers content generated by AI from any combination of text, voice, and image prompt inputs, including but not limited to the following types of generative AI apps:

  • Text-to-text AI chatbot apps, in which the AI-generated chatbot interaction is a central feature of the app.
  • Text-to-image, voice-to-image, and image-to-image apps that use AI to generate images.
  • Apps that use AI to create voice and/or video recordings of real-life individuals.

The policy is not intended to cover the following types of limited-scope AI apps at this time:

  • Apps that merely host AI-generated content and are unable to create content using AI, such as social media apps that do not contain AI content generation features.
  • Apps that summarize non-AI-generated content, such as search result summarization and document summarization (for example, summarizing a book), if the summarization feature is the only feature of the app.
  • Productivity apps that use AI to improve an existing feature, such as email apps with AI-suggested email drafts.

Examples of violative AI-generated content include but are not limited to the following:

  • AI-generated non-consensual deepfake sexual material.
  • Voice or video recordings of real-life individuals that facilitate scams.
  • Content generated to encourage harmful behavior (for example, dangerous activities, self-harm).
  • Election-related content that is demonstrably deceptive or false.
  • Content generated to facilitate bullying and harassment.
  • Generative AI applications primarily intended to be sexually gratifying.
  • AI-generated official documentation that enables dishonest behavior.
  • Malicious code creation.
