Google Play Restricts Inappropriate Content in Generative AI Apps


Google Play has updated its guidelines for AI applications to reduce “inappropriate” and “prohibited” content. The new policy aims to ensure that generative AI capabilities are used responsibly and do not generate content that is harmful or illegal. This update is significant as it outlines specific restrictions on the types of content that can be generated by AI applications.


Overview of the New Policy

In the new policy, Google says developers are responsible for preventing their generative AI apps from generating offensive content, including the prohibited content listed under Google Play’s Inappropriate Content policies. Content that may exploit or abuse children, and content that can deceive users or enable dishonest behaviour, is not allowed. Examples of violative AI-generated content include non-consensual deepfake sexual material, content generated to encourage harmful behaviour, election-related content that is demonstrably deceptive or false, and content generated to facilitate bullying and harassment.

The policy covers a variety of generative AI apps, including text-to-text AI chatbot apps, text-to-image, voice-to-image, and image-to-image apps that use AI to generate images, and apps that create voice and/or video recordings of real-life individuals using AI. However, apps that merely host AI-generated content and are unable to create content using AI, such as social media apps that do not contain AI content generation features, are not covered by this policy.

Developers must implement in-app user reporting or flagging features that allow users to report or flag offensive content. They are also required to conduct “rigorous testing” of their AI models to prevent the generation of restricted content. Google Play will add new application listing features in the future to make the process of submitting generative AI applications to the store more open, transparent, and simplified.

The updated policy follows a White House AI directive issued to tech companies last month, urging industry leaders to take action against the dissemination of deepfakes. Google has also committed to curbing apps that “create, facilitate, monetize, or disseminate image-based sexual abuse”.

Full List of Restricted Content

The policy specifically prohibits the generation of restricted content, including:

  • Pornographic Content: AI-generated content primarily intended for sexual gratification is prohibited.
  • Violence: Content that encourages harmful behaviour, such as dangerous activities or self-harm, is prohibited.
  • Bullying and Harassment: Content that facilitates bullying and harassment is prohibited.
  • Fraud: Real-person audio or video recordings that facilitate fraud are prohibited.
  • Malicious Code: AI that enables the creation of malicious code is prohibited.
  • Official Documents: AI that generates “official” documents to facilitate dishonest behaviour is prohibited.

Additional Requirements

The policy also requires developers to conduct “rigorous testing” of their AI models to ensure that they do not generate prohibited content. This includes testing for content that is offensive, illegal, or harmful.


Impact on Developers

The new policy has significant implications for developers who create AI applications. They must ensure that their applications comply with the policy and do not generate prohibited content. Failure to comply can result in the removal of the application from the Google Play store.

To implement these changes, developers must integrate user reporting and flagging features into their apps. These allow users to report offensive content without leaving the app, and give developers signals to inform their content filtering and moderation strategies. Developers must also ensure that their apps adhere to Google Play’s Inappropriate Content policies.
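As a rough sketch of such a reporting flow, the model below captures a user report and routes high-risk categories to the front of a review queue. All names here (ModerationQueue, ContentReport, ReportReason) are hypothetical illustrations, not part of any Google Play or Android API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical in-app reporting model; names are illustrative only.
public class ModerationQueue {
    public enum ReportReason { SEXUAL_CONTENT, HARASSMENT, DECEPTION, SELF_HARM, OTHER }

    // One user-submitted flag against a piece of AI-generated content.
    public record ContentReport(String contentId, ReportReason reason, String details) {}

    private final List<ContentReport> pending = new ArrayList<>();

    // Called when the user taps "Report" without leaving the app.
    public void submit(ContentReport report) {
        pending.add(report);
    }

    // Reports in high-risk categories that should be reviewed first.
    public List<ContentReport> highPriority() {
        return pending.stream()
                .filter(r -> r.reason() != ReportReason.OTHER)
                .collect(Collectors.toList());
    }

    public int count() {
        return pending.size();
    }
}
```

In a real app, submitted reports would be sent to a backend for human or automated review; the point of the policy is that this loop exists in-app rather than forcing users out to a web form.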

The policy updates also impact the way apps handle user data and permissions. For instance, apps requesting broad photo and video permissions must now use a system picker, such as the new Android photo picker, to ensure that users are aware of the data being accessed. Furthermore, the policy limits full-screen notifications to high-priority situations, requiring apps to obtain special permission for full-screen notifications.
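In concrete terms, an app with only occasional photo needs can drop broad media permissions in favour of the system photo picker, while full-screen notifications need an explicit permission declaration. A minimal AndroidManifest.xml sketch, assuming the app targets Android 14 (API 34):

```xml
<!-- Sketch of the relevant manifest entries, assuming Android 14 (API 34). -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android">

    <!-- Full-screen notifications require this permission; on Android 14
         it is granted by default only to calling/alarm apps, so other
         apps must direct users to system settings to enable it. -->
    <uses-permission android:name="android.permission.USE_FULL_SCREEN_INTENT" />

    <!-- Note the absence of READ_MEDIA_IMAGES / READ_MEDIA_VIDEO:
         apps with occasional photo needs should launch the system
         photo picker (ActivityResultContracts.PickVisualMedia)
         instead of requesting broad media access. -->

</manifest>
```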

Future Developments

Google Play also plans new app listing features to make submitting generative AI applications more transparent and streamlined, which should help developers better understand the requirements and guidelines for their applications. Looking further ahead, we can expect even more stringent requirements for developers of generative AI apps. These may include:

  • Mandatory AI model testing and auditing by independent third parties to verify safety and accuracy
  • Proactive content moderation using advanced AI and human review to identify and remove violative content
  • Strict age restrictions and parental controls for apps that generate content potentially unsuitable for children
  • Transparency requirements for apps to disclose when content is AI-generated to avoid deception
  • Interoperability standards to enable cross-platform reporting and enforcement of content violations

Google and other tech companies will likely collaborate with policymakers, academics, and civil society to develop comprehensive frameworks for responsible AI development and deployment. The goal will be to foster innovation while mitigating risks to user safety, privacy, and well-being.

Conclusion

The updated policy by Google Play is a significant step towards ensuring that AI applications are used responsibly and do not generate harmful or illegal content. The policy provides clear guidelines for developers and helps to maintain a safe and trustworthy environment for users. As generative AI becomes more ubiquitous, the public will demand robust guardrails, and developers who proactively prioritize safety and ethics will be best positioned to succeed in this rapidly evolving landscape. What do you think about the new policy update from Google? Is this the way to go? Will it make AI content safer to consume? Let us know your thoughts in the comment section below.
