On Facial Recognition of Toddlers in Preschools


The use of facial recognition for toddlers in preschools raises concerns about how children's digital documentation is shared and processed on online platforms. The post On Facial Recognition of Toddlers in Preschools appeared first on MEDIANAMA.

You remember that old saying that “if something is available for free, you’re the product”? In the age of AI, even if you’re paying for something, you’re still the product.

A few weeks ago, I got a cheerful but worrying email from an AI app on behalf of a great preschool that my child was about to join. The app, it seems, serves as a documentation platform that lets us keep track of our child’s activities in school, including viewing photos and getting notifications from the school.

What got me concerned was a line in the email that said that we had already been added to the portal.

The lack of agency

When we first signed the admission form, there was a mandatory consent clause that caught my attention. It said:

Your child’s digital documentation would be used for educational and informational purposes on online platforms, teacher training, research, and documentation of teaching practices.



We assure the use of your child’s photos, videos, and other media for pedagogical, teaching, training, and information documentation with integrity and sensitivity.

Given that my child is a toddler and lacks the agency to determine where his data is shared and processed, we’ve been careful as guardians about sharing his photos: we typically send photos and videos to relatives with limited, one-time access on WhatsApp. None of the photos online capture his face.

I don’t want a situation where, 10 years down the line, I’m told that I didn’t guard his privacy when he didn’t have the agency to protect himself. This differs from my position on allowing children access to technology and the Internet: I’m acutely aware of dopamine addiction and the risk of myopia, and we want to avoid devices as long as possible. As a parent, I need to exercise my guardianship and gradually enable agency, which is why I’m also opposed to the remarkably ignorant and extreme idea of verifiable parental consent in the Digital Personal Data Protection Act, which prevents parents from gradually granting their child agency and decision-making.

The app in question, I noticed, is pitched as an AI app, and has facial recognition. For toddlers.

The consent notice made no mention of AI and facial recognition, but because I spend most of my days at the intersection of technology, business, and public policy, I did something that probably no one has done to a preschool before: I sent them a withdrawal-of-consent notice (which, ironically, I drafted with the help of AI).

I wrote:

“I am writing to formally withdraw my consent regarding the use of my child, [Child’s Full Name], currently enrolled at [School], in any publicly accessible media, online platforms, or AI-powered platforms, as per the Consent for Media Usage clause in the admission agreement.”

“...we do not consent to the public dissemination of our child’s digital documentation, including but not limited to photos, videos, and other media, on any online platforms, social media, AI-powered platforms, or platforms that use AI for processing or dissemination. This withdrawal of consent extends to any potential data scraping, AI training, or profiling use cases.

We are, however, comfortable with the internal use of our child’s digital documentation strictly within the school premises, the internal school app for [School Name] parents and teachers, and for internal teacher training and documentation purposes.”

In public-policy lingo, this is called “Purpose Limitation”, i.e., you limit the purposes for which you allow the data to be used. Even though the withdrawal of consent was acknowledged, we still got this email from the AI app: the school probably didn’t understand what the withdrawal of consent meant, and this points towards a clear disconnect between policy and implementation. How many people actually send a withdrawal-of-consent notice anyway, let alone to a preschool? What I did here was an anomaly.
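To make "Purpose Limitation" concrete, here is a minimal sketch of how a system could enforce it: processing is permitted only for purposes the guardian has consented to, and explicitly withdrawn purposes are always refused. The purpose names and function are my own illustration, not anything from the app or the law.

```python
# Hypothetical sketch of purpose limitation: data may be processed only for
# purposes the guardian consented to; withdrawn purposes are always denied.
CONSENTED_PURPOSES = {
    "internal_teacher_training",
    "internal_school_app",
    "internal_documentation",
}

WITHDRAWN_PURPOSES = {
    "public_online_platforms",
    "social_media",
    "ai_training",
    "profiling",
    "data_scraping",
}

def may_process(purpose: str) -> bool:
    """Return True only if the stated purpose was explicitly consented to."""
    if purpose in WITHDRAWN_PURPOSES:
        return False
    return purpose in CONSENTED_PURPOSES
```

Note that an unknown purpose is denied by default: under purpose limitation, anything not explicitly consented to is off-limits.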

Schools most likely see this technology as a way to reassure and engage parents — not as a surveillance tool. They probably don’t realise, just as most people who fed their photos to ChatGPT’s Ghibli generator didn’t, that this is potentially training data for an AI. They’re just trying to provide parents with a mechanism to get inputs on their child (in a world of helicopter parenting and the need to constantly observe the child), and having an AI app probably makes them appear more modern to well-heeled parents, apart from reducing the documentation work they need to do.

I don’t really blame the school here: all of these things (concerns regarding AI and privacy, the processing of children’s data, and consent and its withdrawal) are relatively new, and we haven’t done enough to diffuse understanding of them. Most schools would have taken a take-it-or-leave-it approach. In the conversations with the school that have followed, they have been attentive, understanding, and keen to do the right thing.

I was worried about a take-it-or-leave-it approach, which thankfully didn’t happen.

However, at the time we received the sign-up link, I felt a distinct lack of agency: the nature of consent here is procedural and often involuntary (Aadhaar and the APAAR ID are examples), especially given the limited alternatives (and hence leverage) available to parents. We invested a lot of time and effort in selecting a preschool for our child, so there’s a significant sunk cost.

Admission deadlines were upon us, so the lack of an adequate substitute, and hence fewer choices, coupled with time pressure, weighed us down. Additionally, if every preschool and school starts using AI services, what choice do parents have, especially in a city like Delhi, where competition for admissions is intense? I don’t want to have to make these choices, and it’s somewhat debilitating to think that this is only the beginning. When my child was born, I had to argue with the hospital to avoid an Aadhaar linkage in his birth certificate, because I’m opposed to anyone being tracked via forced consent.

As I mentioned earlier, he doesn’t have agency.

“What all will you fight?” a very privacy-minded lawyer friend, whose child had been to the same preschool, asked me when I sought her advice. After a while, especially once the child started going to the preschool independently, getting photos and videos of the activities the child engaged in was useful for her, even though she didn’t sign up for the app.

She added, for good measure, that she credits this particular preschool for the excellent development of her child.

After a while, she said, you just focus on minimising risk and controlling what you can control. An example she gave was of parents sharing group photos of children on Instagram.

“You can probably control it in a small group, but at large parties, where there are group photos, or where people are uploading photos of their children with mine also in the same frame, all I can hope for is that they don’t tag me or name my child on Instagram.” In my case, we’ve asked relatives to remove photos of our child from social media, and they’ve obliged, without complaint. My friend has held out on getting an APAAR ID for her child, which the government has said isn’t mandatory, but schools are still being pushed to enroll students for it.

They send notice after notice, nudging parents repeatedly to sign up, to fulfill the vision that those who created Aadhaar had, of tracking people from the cradle to the grave. It’s no surprise that this post was among the most read on MediaNama last year. What we’re seeing here, with the APAAR ID, with live streaming from CCTVs in classrooms, and with apps like these, is the normalisation of surveillance.

How long can we keep this up? Why does everything have to be a battle? I’ve already been warned that schools in Delhi won’t allow admission without an Aadhaar for the child, even though it’s not mandatory.

How does India’s Data Protection Act apply here?

Withdrawal of consent for an AI app? The application’s page on the Google Play Store says very clearly that it uses facial recognition on children, allowing the school to, among other things:

“Bulk upload classroom photos and leverage AI for quick facial recognition and auto-captioning, saving time while organizing memories and milestones.”

A screenshot on the Play Store indicates that the app takes a Google Photos-like approach to captioning photos: above the photo gallery are thumbnails with faces of children that one can probably click on to view photos featuring that child.

It has thus collected and processed children’s data in a manner that allows it to identify them, and hence tag them in bulk uploads of photos. All of this personalisation comes at the cost of individual autonomy, and in this case, at the earliest stage of life. My worry: each child can be a data point, and every new child means more data for the app.

Thankfully, in conversations with the founder of the app (the school put us in touch), I learnt that they’re merely matching, based on a confidence score, the photo we provided against the photos they upload, and are not training a Large Language Model (LLM); I cross-checked this with my friend Chaitanya C (CTO of Ozonetel). How exactly would withdrawal of consent and a demand for data erasure work with an LLM? AI typically trains on the data provided to it and identifies patterns so as to recognise information. Deleting surface-level access (like removing media from the app) does not equate to deletion from model training data, which could be replicated across systems.
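For readers unfamiliar with confidence-score matching: face-recognition systems typically reduce each face to an embedding vector and tag a photo only when the similarity between the reference embedding and the photo's embedding clears a threshold. This is a generic sketch of that idea, not the app's actual code; the threshold value and function names are my assumptions.

```python
import math

# Assumed cutoff for illustration; real systems tune this per model.
MATCH_THRESHOLD = 0.8

def cosine_similarity(a, b):
    """Confidence score in [-1, 1]; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_match(reference_embedding, photo_embedding, threshold=MATCH_THRESHOLD):
    """Tag a photo with a child's identity only if the score clears the threshold."""
    return cosine_similarity(reference_embedding, photo_embedding) >= threshold
```

The important point for privacy is that the reference embedding is itself derived personal data: deleting the photos without deleting the stored embeddings would still leave the child identifiable.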

LLMs are, so far, not able to delete personal data once it has been tokenised, because of the exorbitant cost of retraining. The permanence of AI data, and the persistent identifiers built into AI, means that once processed, he is likely to be tagged for life: aspects of his behaviour now may impact how he is perceived in an automated decision-making scenario by an LLM in the future. The absence of granular, revocable controls in such AI systems compounds the power asymmetry between AI apps and parents.

Thankfully, they’re not using LLMs, and in addition, the school and the app have told us that with the withdrawal of consent, they can delete individual photos of our child. But what about group photos? The child’s photo, I was told, can be untagged, and they’re working on a mechanism for blurring the child’s face if they don’t have consent. Maybe Instagram should work on something like this too, where if an unidentified child’s photo is uploaded without consent, it is blurred by default.
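The blur-by-default idea can be sketched in a few lines: given a detected face's bounding box, flatten that region of the image unless consent is on record. This is a toy illustration on a grayscale pixel grid; the box format (top, left, height, width) and function name are my assumptions, and a real implementation would use a proper blur filter on the decoded image.

```python
# Hedged sketch of "blur by default": without consent, the face region is
# replaced with its average value; with consent, the photo is untouched.
def blur_without_consent(image, box, has_consent):
    top, left, h, w = box
    out = [row[:] for row in image]  # copy so the original stays intact
    if has_consent:
        return out
    region = [out[r][c] for r in range(top, top + h) for c in range(left, left + w)]
    avg = sum(region) / len(region)
    for r in range(top, top + h):
        for c in range(left, left + w):
            out[r][c] = avg  # crude mean "blur" of the face region
    return out
```

The key design point is the default: the destructive path runs unless consent is explicitly recorded, inverting the usual upload-first, object-later pattern.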

If nothing else, the need for consent will mean that people will stop posting their child’s photo on social media without the child having the agency to decline a violation of their privacy.

Who is accountable for my child’s data?

This is a strange one... as per India’s flawed Digital Personal Data Protection Act, the ultimate responsibility lies with the data fiduciary. In the case of this app, I can’t quite figure out who the data fiduciary is.

Logically, the responsibility for my child’s data lies with the school, because they’re collecting it. As far as I, a guardian, am concerned, they should be the data fiduciary. However, in order to access the app and view photographs of my child, I’ll have to sign in and, it says, abide by the app’s privacy policy, terms of use, and cookie policy.

This means there’s a direct contractual arrangement between parents and the app, and under law, the app becomes a data fiduciary and is no longer just a data processor for the school. However, the app’s privacy policy also states that:

“We may use Your information to evaluate or conduct a merger, divestiture, restructuring, reorganization, dissolution, or other sale or transfer of some or all of Our assets, whether as a going concern or as part of bankruptcy, liquidation, or similar proceeding, in which Personal Data held by Us about our Service users is among the assets transferred.”

That basically means that the app doesn’t merely operate as a data processor, but claims ownership of the data of children (as assets) that it collects.

This sounded like a classic B2B2C model, like so many apps, where the school, and not the parent, is the primary customer for the app, but the app can still claim rights over data that parents never directly entrusted it with. What caught my attention was that the Privacy Policy stated that “This Privacy Policy has been created with the help of the Privacy Policy Generator”, and pointed to a domain that is no longer live. There were no terms and conditions for the app, despite there being a link for them.

This suggested that the app’s founders hadn’t paid much attention to the legal aspects of the app. The founder acknowledged that this was indeed the case when we spoke. I was shown a draft privacy policy that appeared to address these issues, and which we expect will go live this week.

What about data security?

One key challenge emerging with AI is the usage of children’s data in deepfake child pornography. The app store listing suggested that the data for this app isn’t encrypted in transit. The founder said that it is, but mentioned that it isn’t encrypted at rest yet.

With children’s data, that is really worrying, and this is something he said they’re working on.

What about behavioural monitoring?

It’s also worth noting that India’s Digital Personal Data Protection Act (DPDPA) says that platforms cannot carry out tracking and behavioural monitoring of children. The draft rules suggest that educational institutions can track and monitor the behaviour of children when it comes to educational activities, and in the interest of the safety of a child enrolled with them.

Educational institutions here are defined as “an institution of learning that imparts education, including vocational education”. Thus, while the preschool will probably be allowed to monitor and track behavioural data, the platform can only do so on behalf of the preschool as a data processor, not as a data fiduciary. It could thus be in violation of the DPDPA once the rules are finalised.

Children’s faces and behavioural trails should be treated as highly sensitive personal data, not benign media, and it’s a failure of India’s Digital Personal Data Protection Act that it has no classification for sensitive personal data, which should be afforded greater protection under law. The app’s founder acknowledged this, and they’re reworking their privacy policy to position the app as a data processor rather than a data fiduciary. What I’m particularly thankful for here is the understanding of the school and the app developer, and I’ve gone from feeling helpless to being hopeful.

Our laws might not be of much help, but we need more empathy towards parents and children from our educational system. It’s also clear that we need a separate law that engages with how children use the Internet: not rooted in the paternalism of the Digital Personal Data Protection Bill or Australia’s ban on social media for children, but one that provides both parents and children with agency, and acknowledges that we have to accommodate the evolution of a child’s agency. We have to ensure that we, including our children, do not become mere feeders for AI, with little or no leverage in matters concerning our data online.

PS. If you’ve read this far, what do you think? Let me know at [email protected], and also if I can add your views to this post for others to read.

Feel free to counter me, offer solutions, or share your story. It might be useful for someone and help a school change their approach. Who knows? This is how the Internet works.

It’s wonderful. Something I wrote once apparently helped a government change their plans.

Also Read:

Government Claims APAAR ID is Voluntary, But Parents Face Pressure to Consent

Challenges With “Best Interests Of The Child” Principle In Data Protection: Notes from Rightscon 2025

Can APAAR ID Be Used to Verify Parent-Child Relationships Under India’s Data Protection Law? #NAMA