Does your phone eavesdrop to target ads? A Samsung engineer and Korean regulators weigh in

You casually mention needing new shoes. Hours later, you’re served a sneaker ad on Instagram. Coincidence? Maybe. Creepy? Absolutely.

Moments like this have fueled one of the most persistent digital-age suspicions: that your smartphone is secretly listening to your conversations to deliver eerily relevant ads. It feels too accurate to be anything else.



But is that really what’s happening? According to interviews conducted by The Korea Herald with a current Samsung AI engineer, an information security professor and South Korea’s privacy regulators, the answer is: not exactly. There’s no solid evidence that tech companies are covertly recording your private conversations for advertising. And from a technical, legal and reputational standpoint, doing so would be risky, unnecessary — and likely illegal.

Yet your instincts aren’t entirely wrong. Because while your phone probably isn’t recording you, it doesn’t have to. It already knows enough.

You are not being heard — you're being modeled

“Even if it feels like the device heard you, it’s likely just a result of combining subtle data points — search history, location, time of day, device activity, even nearby people’s behavior,” said Park Ki-woong, a professor of information security at Sejong University. In some ways, that kind of inference is less transparent — and more disturbing — than direct surveillance, Park suggested.

In short, you're not being listened to. You're being predicted.

Apps and ad platforms track your digital breadcrumbs — what you click, where you go, who you talk to and when you do it. These fragmented signals are then processed by complex models that guess what you might be interested in next.

Sometimes, those guesses feel eerily accurate. Park explained that the systems can even infer who your friends are by observing which devices share locations or Wi-Fi networks. And this modeling can work even when location services are off.
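The kind of signal-combining Park describes can be illustrated with a toy scoring model. The signal names, weights and threshold below are invented for illustration; real ad platforms combine thousands of such features with machine-learned weights rather than a hand-written table.

```python
# Hypothetical signal names and weights, invented for illustration.
SIGNAL_WEIGHTS = {
    "searched_related_term": 0.4,     # e.g. a recent query about running gear
    "visited_shoe_store_area": 0.3,   # location or IP puts you near a retailer
    "friend_bought_sneakers": 0.2,    # behavior of devices you co-locate with
    "evening_browsing_session": 0.1,  # time-of-day activity pattern
}

def interest_score(observed_signals):
    """Sum the weights of whichever weak signals were observed.

    No single signal proves intent; combined, they can make an ad
    look as if the phone overheard a conversation.
    """
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in observed_signals)

def should_serve_sneaker_ad(observed_signals, threshold=0.5):
    """Serve the ad once the combined evidence crosses a threshold."""
    return interest_score(observed_signals) >= threshold
```

A search plus a store visit (0.4 + 0.3) clears the threshold on its own; no microphone is involved at any step.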

Meta’s public privacy policy confirms this approach, stating it uses IP addresses to estimate your location “even if location services is turned off.”

“It’s not just about what you say or do,” Park said. “It’s about how machines connect the dots — faster than you can.”

What the law sees — and misses

Last November, Korea’s Personal Information Protection Commission fined Meta 21.6 billion won (about $15.2 million) for using behavioral data to infer what the law defines as sensitive information — including users’ religious views, political opinions and sexual orientation — without explicitly obtaining user consent.

“What Meta was fined for was inferring legally sensitive information from user behavior, without first asking for clear consent,” said PIPC spokesperson Seo Jeong-ah. “Now, I personally know the microphone theory is out there — and I understand why people believe it — but we at the PIPC can’t confirm anything related to voice recording.”

Seo clarified that while the fine does not prove Meta wasn’t using voice data, voice data simply wasn’t necessary.

“What I can tell you is this: Meta already had access to vast behavioral data and extremely advanced inference models. That alone was enough to infer sensitive traits — and that’s exactly what they were penalized for.”

According to Seo, these models are essentially “a black box,” making it extremely difficult for outsiders to fully understand how they work.

“That’s why people feel uneasy. These systems are powerful, opaque and mostly invisible.”

Another PIPC official, Go Myeong-seok, who was directly involved in a separate Meta enforcement case, told The Korea Herald that if Meta were using voice recordings for profiling or ad targeting, it would be subject to the same regulatory obligations.

And currently, Meta’s privacy policy does not ask for consent to collect ambient voice data — only voice commands used in clearly disclosed contexts, such as smart glasses or in-app voice features. Meta’s policy does mention that when a user activates a voice-enabled assistant — such as saying a command to take a photo — voice interactions and any background sounds during that command are collected, but only “to support or improve the Assistant feature.”

“This is clearly separate from using voice data to build ad profiles,” Go said.

“And if they were doing that without consent, it would be a violation.”

Always-on — but not always recording

But what about digital assistants like Siri, Alexa or Google Assistant, which are “always listening” for wake words? This is where many users confuse passive wake-word detection with actual surveillance. According to a current Samsung AI engineer, who helped develop Bixby’s on-device natural language processing systems, these assistants don’t record your conversations by default.

“To detect a wake word like ‘Hi Bixby,’ the device must be listening at a low level — but that doesn’t mean it’s recording or storing everything you say,” the engineer said. “It’s a rolling buffer — just one or two seconds long. If the wake word isn’t detected, that data is discarded.”

Only after hearing the wake word does the system start recording and processing your voice — and only if you’ve granted the necessary permissions. The process is designed to be local, minimal and privacy-compliant. Could a system like this be re-engineered to listen for ad-relevant phrases like “I’m hungry” or “I need a new phone”? Technically, yes.
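The legitimate rolling-buffer design the engineer describes can be sketched in a few lines. Everything here is a simplified stand-in, not Bixby's actual implementation: the sample rate, buffer length, frame format and detector are assumptions made for illustration.

```python
from collections import deque

SAMPLE_RATE = 16_000   # assumed audio sample rate (Hz)
BUFFER_SECONDS = 2     # the engineer described a one- to two-second buffer

class WakeWordListener:
    """Toy model of low-level wake-word listening: audio frames pass
    through a short rolling buffer and are discarded unless the wake
    word is detected."""

    def __init__(self, detect):
        # detect: callable inspecting the buffered frames, returning True
        # only when the wake word is present (a stand-in for the
        # on-device acoustic model).
        self.detect = detect
        max_frames = SAMPLE_RATE * BUFFER_SECONDS
        # deque with maxlen silently drops the oldest frames: nothing
        # older than the buffer window ever persists.
        self.buffer = deque(maxlen=max_frames)
        self.recording = False
        self.recorded = []

    def feed(self, frame):
        if self.recording:
            self.recorded.append(frame)   # audio is kept only after the wake word
        else:
            self.buffer.append(frame)     # transient; overwritten, never stored
            if self.detect(self.buffer):
                self.recording = True     # real capture starts from the next frame
```

The key property is that the `deque(maxlen=...)` discards old audio automatically, which mirrors the engineer's claim that undetected audio is simply thrown away rather than stored.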

“You could run continuous speech-to-text, do keyword extraction, then feed it into an ad engine,” the engineer admitted. “But doing that in secret would be a massive legal risk — and likely detected quickly in markets with strict privacy laws, like Korea or the EU.”

Again, though, Professor Park emphasized that the real concern is not hidden microphones but invisible modeling.

“Platforms like Meta and Google probably don’t need to hear your voice to know what you want. They can guess. And often, they guess right,” he stressed.

The real privacy problem

That’s why the phone-listening myth, while mostly inaccurate, distracts from a more pressing reality: tech companies can infer a disturbing amount of personal information about you through legal, behind-the-scenes data analysis — and you might never know how or why it was done.

“People might be worried about the wrong thing,” Park said. “What we should be asking is: What kinds of predictions are being made about us? What rights do we have to challenge them? And how much control do we really have?”

Until those questions are addressed, the suspicion probably won’t fade.

Because whether or not your phone is listening, it already knows more than enough.