Google Messages Now Shields Users From Nude Images With AI Blur Feature

Google has started rolling out a new feature in its Messages app that automatically blurs images suspected of containing nudity. The tool, powered by on-device artificial intelligence, is designed to protect users — especially younger ones — from unsolicited explicit content while maintaining user privacy.

Blur First, Decide Later

The AI-driven feature scans incoming images directly on the user’s device, without sharing any data with Google’s servers.
If nudity is detected, the image is blurred and a warning appears, offering options like “Learn why nude images can be harmful,” “Block this number,” and a choice between “No, don’t view” and “Yes, view.”

The feature also includes a post-view option called “Remove preview” that allows users to re-blur an image after seeing it — adding another layer of control. Importantly, this system is built into Android’s SafetyCore framework, ensuring that content never leaves the device.
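SafetyCore’s classification API is not publicly documented, so the following is only an illustrative sketch of the receive-side flow described above. Every name here — NudityClassifier, IncomingImageHandler, and the rest — is a hypothetical stand-in, not Google’s implementation:

```kotlin
// Illustrative sketch only: SafetyCore's real API is not public,
// so NudityClassifier and everything below are hypothetical stand-ins.
enum class ViewerChoice { LEARN_MORE, BLOCK_SENDER, DONT_VIEW, VIEW }

interface NudityClassifier {
    // Runs entirely on-device; the image bytes never leave the phone.
    fun isLikelyNude(image: ByteArray): Boolean
}

class IncomingImageHandler(private val classifier: NudityClassifier) {

    // Blur first: a flagged image is rendered obscured, with the warning shown.
    fun onImageReceived(image: ByteArray, render: (blurred: Boolean) -> Unit) {
        render(classifier.isLikelyNude(image))
    }

    // Decide later: the warning's buttons map onto these choices.
    fun onWarningChoice(choice: ViewerChoice, render: (blurred: Boolean) -> Unit) {
        when (choice) {
            ViewerChoice.VIEW -> render(false)         // "Yes, view"
            ViewerChoice.DONT_VIEW -> render(true)     // "No, don't view"
            ViewerChoice.LEARN_MORE -> openHelpPage()  // "Learn why nude images can be harmful"
            ViewerChoice.BLOCK_SENDER -> blockNumber() // "Block this number"
        }
    }

    // "Remove preview": re-blur the image after it has been viewed.
    fun onRemovePreview(render: (blurred: Boolean) -> Unit) = render(true)

    private fun openHelpPage() { /* open the in-app explainer */ }
    private fun blockNumber() { /* block the sender's number */ }
}
```

The key design point the sketch captures is that classification and rendering both happen locally, so the privacy guarantee does not depend on any server round trip.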
Tailored Safety for Different Age Groups

Google has made sure the feature responds differently based on user age. For users under 18, the sensitive content warning is enabled by default. For supervised children — those whose devices are monitored through Google’s Family Link — the setting is locked in and cannot be disabled.
Unsupervised teens aged 13 to 17 also have the blur feature turned on by default, but they can choose to disable it within the app’s settings. For adult users, the feature is off by default and can be enabled manually.
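Those defaults amount to a small policy table. A minimal sketch of how it could be encoded, with hypothetical names — the real settings logic is internal to Google Messages and Family Link:

```kotlin
// Hypothetical encoding of the defaults described above; not Google's actual code.
data class WarningPolicy(val onByDefault: Boolean, val userCanDisable: Boolean)

fun sensitiveContentPolicy(age: Int, supervisedViaFamilyLink: Boolean): WarningPolicy = when {
    // Supervised children: enabled and locked; only the Family Link parent controls it.
    supervisedViaFamilyLink -> WarningPolicy(onByDefault = true, userCanDisable = false)
    // Unsupervised teens (13-17): enabled by default, but they may turn it off.
    age in 13..17 -> WarningPolicy(onByDefault = true, userCanDisable = true)
    // Adults: off by default, opt-in from the app's settings.
    else -> WarningPolicy(onByDefault = false, userCanDisable = true)
}
```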
Not Just Receiving — Sending Gets a Pause Too

In addition to screening received content, Google Messages also places a checkpoint before users send or forward nudity. If an image flagged as explicit is about to be shared, the app prompts the sender with a confirmation: “Yes, send” or “No, don’t send.” This isn’t a full block but a nudge — a deliberate pause aimed at reducing impulsive sharing. Google describes this as a “speed bump” that encourages users to reflect before engaging in potentially risky interactions.
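A rough sketch of that pre-send checkpoint, reusing the hypothetical classifier from the earlier snippet — again, assumed names rather than Google’s internals:

```kotlin
// Hypothetical send-side "speed bump": a nudge, not a block.
class OutgoingImageGate(private val classifier: NudityClassifier) {

    fun send(image: ByteArray, confirmWithUser: () -> Boolean, transmit: (ByteArray) -> Unit) {
        if (classifier.isLikelyNude(image) && !confirmWithUser()) {
            return // "No, don't send": the user backed out at the speed bump.
        }
        transmit(image) // Not flagged, or the user chose "Yes, send".
    }
}
```

Note that the gate never refuses outright: a flagged image still goes through if the user confirms, which matches the “pause, don’t prohibit” framing.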
Rollout Still in Early Stages

Google first announced the feature in October last year, and a phased rollout began in February, but it’s still far from widespread. The setting — tucked away under Messages > Protection & Safety > Manage sensitive content warnings — has so far been spotted only on a limited number of beta devices, suggesting broader availability is still on the horizon. For now, the feature only applies to still images and is exclusive to Google Messages.
Other apps would need to integrate with SafetyCore to offer similar protections. With privacy and user control at the heart of this new tool, Google appears to be positioning its Messages app as a safer, more thoughtful space for communication — especially in an era where unsolicited and explicit content is just a tap away.