The internet is drowning in bots. They already make up about 50% of all internet traffic, a number that’s poised to skyrocket to 90% by 2030 as people start using so-called AI agents to do mundane internet tasks on their behalf, like opening accounts, shopping for shoes, buying tickets and ordering food. It’s a huge problem for companies.
Online retailers don’t want bots to scoop up coveted products faster than any human could, banks need to make sure fraudsters can’t take over accounts, and social networks want to keep out impersonators. Nor do they want to prevent the growing number of good bots from doing what their owners intend. Throw artificial intelligence into the mix, and it gets worse.
Old-school bot detection techniques like CAPTCHAs, where users are asked to select squares that contain a motorcycle or decipher a scrambled word to prove they’re human, are simply no match for sophisticated models. And AI has made it orders of magnitude easier to mimic a person’s voice, spoof their face and create fake IDs, according to Rick Song, cofounder and CEO of identity verification platform Persona. “Maybe this constant distinguishing of ‘is this a bot or not’ is kind of a pointless distinction that may not make sense anymore,” Song, 34, told Forbes.
“The real question is just who's behind the AI and what’s their intent.” Making sure that a user is who they say they are is at the core of San Francisco-based Persona, which offers ID authentication software to 3,000 companies including OpenAI, LinkedIn, Etsy, Reddit, DoorDash and Robinhood. It uses a blend of methods to verify a person’s identity, including asking them to upload a photo of their government ID, take a selfie or record a video while turning their head or blinking, or scan the NFC chip embedded in a passport.
If a person poses significant risk, they may be asked to take a “liveness test,” holding up their ID and moving their face to prove they are alive and the real owner of that ID. The company’s machine learning models can also pick up risk signals such as the network a person is using, how far they are from the location on their ID, the way they’re interacting with a device and their online presence. “Even today, a lot of bot activity is very mechanical in nature,” Song says.
“A lot of bots tend to use copy paste way more, or there's more hesitations between inputs or just a very rhythmic kind of behavior.” Founded in 2018, Persona announced on Wednesday that it has raised $200 million in Series D funding led by Ribbit Capital and Founders Fund, with participation from existing investors Coatue, Index Ventures, First Round Capital and Bond. Song said his company, now valued at $2 billion with $417 million in total funding, signed $100 million worth of annual contracts last year.
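The behavioral signals Song describes, heavy copy-paste use and unnaturally rhythmic input timing, can be illustrated with a toy heuristic. This is purely a sketch: the function name, inputs and thresholds are invented for illustration and have nothing to do with Persona's actual models.

```python
import statistics

def bot_likelihood_signals(event_times, pasted_chars, typed_chars):
    """Toy heuristic inspired by the behaviors Song describes:
    heavy copy-paste use and rhythmic, machine-like input timing.
    Illustrative only -- not Persona's actual model."""
    # Gaps between consecutive input events (e.g. keystrokes), in seconds.
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    # Near-zero variance in inter-keystroke gaps suggests scripted input;
    # humans hesitate irregularly between inputs.
    rhythmic = statistics.pstdev(gaps) < 0.01 if len(gaps) > 1 else False
    # A high share of pasted (vs. typed) characters is another mechanical signal.
    paste_heavy = pasted_chars / max(1, pasted_chars + typed_chars) > 0.8
    return {"rhythmic": rhythmic, "paste_heavy": paste_heavy}
```

A perfectly metronomic typist pasting 90% of their text would trip both signals; real systems would combine many more features, but the intuition is the same.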
Persona’s success stems from building customized verification processes, called “flows,” that cater to its customers’ specific needs and are adjusted based on the precise use case and the risk level a person presents. For instance, Persona creates a different flow for a person trying to verify their age to order alcohol on a food delivery app than for someone applying for a loan. “There's no one-size-fits-all for identity,” Song said.
“Companies treated identity as a silver bullet. But what I'd seen was that there wasn't a single way to solve all this.” OpenAI uses Persona to screen hundreds of millions of users across 225 countries signing up for ChatGPT and its API, ensuring that people on international watchlists and sanctions lists don’t slip through and use its models in harmful and illegal ways.
People who aren’t flagged as risky don’t need to go through a more extensive verification process or have their accounts manually reviewed, reducing friction during signups. Online learning platform Coursera turned to Persona to verify its 168 million users in 200 countries based on the class they’re taking: someone accessing a university-accredited course is vetted differently than a casual learner, keeping onboarding smooth. DoorDash started using Persona during the pandemic to run background checks on the deluge of delivery workers joining its platform, requiring them to submit selfies and match them against a government ID.
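The risk-tiered “flows” described above can be sketched as a toy configuration. Every flow name, step and threshold here is invented for illustration; Persona’s actual flow engine is not public.

```python
# Hypothetical sketch of risk-tiered verification "flows".
# All names and thresholds are invented; Persona's real configuration is not public.
FLOWS = {
    "age_check": ["government_id"],                        # e.g. alcohol on a delivery app
    "loan":      ["government_id", "selfie", "liveness"],  # higher-stakes, more steps
    "low_risk":  ["email"],                                # casual signup
}

def select_flow(use_case: str, risk_score: float) -> list[str]:
    """Pick a verification flow for a use case, escalating steps as risk rises."""
    steps = list(FLOWS.get(use_case, FLOWS["low_risk"]))
    # Risky users get escalated to a liveness test, mirroring the article's
    # description of extra checks for people who present significant risk.
    if risk_score > 0.7 and "liveness" not in steps:
        steps.append("liveness")
    return steps
```

The point of the design is that the same user can face very different friction depending on what they are trying to do and how risky they look, rather than one fixed checklist for everyone.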
But while bots and fraudsters have always been part of the internet, AI presents a staggering new challenge. U.S. companies lose anywhere from $18 billion to $31 billion each year to AI-based attacks, according to a report by cybersecurity firm Imperva. Global losses from bot attacks, where scammers run up server costs, range from $68 billion to $116 billion. One way bots attack businesses is by creating fake accounts that cash in on incentives like referral credits, discounts and promotions.
People may decide to use bots or AI-generated content for valid reasons, such as a language barrier or a disability. Good bots made up 14% of automated traffic in 2024, while malicious bots were responsible for 37% of internet activity, according to the Imperva study. In scenarios where bot usage can be traced back to a human and their reasons for using AI can be verified, “we want to make sure that we're not taking a sledgehammer approach,” said a senior director of regulatory affairs at one company that uses Persona to verify new users’ identities.
Outsourcing ID verification to Persona tackles multiple challenges at once. One company that struggles with fake profiles on its platform opted for Persona to verify real people because doing it in-house would require expensive engineering resources. “We just want somebody to solve the problem for us,” an engineer at the company told Forbes.
Another customer said they chose to use Persona because they didn’t want to store people’s personally identifiable information. Persona stores the data it collects from its customers’ users on AWS and GCP servers, but that data is owned and managed by the companies themselves. Dealing with troves of sensitive data comes with its own issues: In October 2024, Persona faced a class action lawsuit alleging that it collected some Illinois-based users’ personally identifiable information from driver’s licenses and selfies and used it to train its AI models.
The lawsuit was voluntarily dismissed by the plaintiffs. A similar lawsuit filed in February 2024 alleged that the company collected and stored biometric data such as facial geometry scans of DoorDash delivery drivers in Illinois without their consent. Persona spokesperson Evelyn Ju said the claims are baseless and that the company has “always taken a privacy and compliance-first approach.”
Persona is a newer player in the identity verification market, said Akif Khan, a VP analyst at Gartner. It’s up against firms like airport biometric identity verification platform Clear Secure, which went public in 2021 and recorded $770 million in revenue in 2024, and Sunnyvale-based Jumio, which provides AI-based tools for ID verification and claims to have processed more than $1 billion in transactions. Then there are even newer companies like Sam Altman’s Worldcoin, which uses a spherical biometric device dubbed an “orb” to scan people’s irises in exchange for crypto tokens, a rather bizarre way to prove a person is real and human.
Khan noted, though, that businesses are wary of the increasing threat of deepfakes, which could translate to more business opportunities for Persona given its use of online risk signals.
Persona CEO Rick Song (left) and CTO Charles Yeh (right) started the company after Song realized the need to use software and automation to verify millions of users signing up to platforms.
Back when Song started Persona in 2018, identity verification was mostly done by human teams based in Eastern Europe, said Thomas Laffont, cofounder of tech investment firm Coatue Management, who wrote a $750,000 check in the company’s seed round that year and has invested in every round since.
Song’s idea was to use software instead. “That enabled them to be priced more competitively and to allow customers to use the product more. Because if every time a human was in the loop, it's kind of a bottleneck,” Laffont tells Forbes.
Song had previously spent five years working on identity fraud solutions as an engineer at point-of-sale software giant Square, which in the early 2010s launched products like digital payment app Cash App and loan provider Square Capital, both of which required different ways to verify someone. Song realized that a flexible, automated software solution could be customized to fit a company’s verification needs. He teamed up with then-roommate Charles Yeh, now Persona’s CTO, to build it.
“The fundamental challenge for identity online is that it's this uncertain thing and it's used for all sorts of things, from compliance to trust and safety, from fraud prevention to security,” Song said.
AI agents will require yet another solution. Song said he envisions that companies building agents will need to register the bots so that it’s easier to spot them. As for humans, Persona hopes to build identity profiles for each person using the hundreds of activities they do online over time, like ordering food or scrolling social media, so Persona can more quickly verify they’re human.
Song said these profiles would be tamper-resistant and reusable, so they could be submitted on any site during the verification process, and he thinks people will want to use them despite the obvious privacy tradeoffs. “Today you're disclosing more and more about yourself. You're just giving up so much information,” Song said.
“Our dream is this idea of a self-sovereign, personally owned, portable identity.”
AI Is Making The Internet’s Bot Problem Worse. This $2 Billion Startup Is On The Front Lines

Persona helps companies like OpenAI, LinkedIn and Reddit verify the identities of millions of users at a time when AI agents have made it increasingly difficult to do so. Now, it has $200 million in fresh funding from top VC firms.