In the past year, Microsoft says it blocked more than $4 billion in fraud attempts, as cybercriminals increasingly turn to AI-powered tools to run sophisticated scams. From impersonating tech support staff to creating convincing fake e-commerce websites, fraudsters are leveraging AI to make their attacks faster, cheaper, and harder to detect. Microsoft’s latest Cyber Signals report outlines how the company is using machine learning and AI-driven tools to stay a step ahead of these bad actors.
AI is making scams smarter, and scarier

Microsoft says it thwarted 1.6 million bot-based signups per hour and rejected nearly 49,000 fake partnership attempts between April 2024 and April 2025. And it's not just spam: the nature of online scams is changing.
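Blocking bot-based signups at that scale typically starts with rate analysis: no human registers accounts hundreds of times a minute. As a rough illustration (not Microsoft's actual mechanism, and with made-up thresholds), a sliding-window monitor for signup bursts might look like this:

```python
from collections import deque

# Illustrative sketch only: flag signup bursts that exceed a human-plausible
# rate. The window size and limit below are invented for this example.
class SignupRateMonitor:
    def __init__(self, max_per_window: int, window_seconds: float):
        self.max_per_window = max_per_window
        self.window_seconds = window_seconds
        self.events: deque = deque()

    def record(self, timestamp: float) -> bool:
        """Record a signup attempt; return True if it should be blocked."""
        # Evict attempts that have fallen outside the sliding window.
        while self.events and timestamp - self.events[0] > self.window_seconds:
            self.events.popleft()
        self.events.append(timestamp)
        return len(self.events) > self.max_per_window
```

Real defences layer many more signals (IP reputation, device fingerprints, behavioural telemetry) on top of simple rate limits, but the sliding-window idea is the usual first gate.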
“Cybercrime is a trillion-dollar problem,” said Kelly Bissell, Corporate Vice-President of Anti-Fraud and Product Abuse at Microsoft Security. “AI now gives us the ability to detect and close the gap of exposure much faster.” With AI, scammers can scrape company data, generate fake product reviews, clone voices, and even create entire businesses from scratch—with logos, websites, and phony testimonials.
E-commerce scams are getting easier to execute, and job seekers are becoming new targets. AI-generated interviews and automated phishing emails are being used to trick candidates into sharing personal data. While these attacks are happening worldwide, Microsoft's anti-fraud team has traced significant activity to China and Germany, countries with large digital economies and e-commerce footprints.
Fake storefronts, customer service chatbots, and deepfake influencers are being used to push counterfeit goods or non-existent services. AI-generated chatbots have also been deployed to stall chargebacks and fool customers into believing they're dealing with legitimate sellers.

Fake tech support and job scams on the rise

Among the fastest-growing threats are fraudulent e-commerce websites.
AI allows scammers to spin up professional-looking stores within minutes—far faster than before—using auto-generated product descriptions, reviews, and even customer service chatbots to manipulate victims and stall chargebacks. Employment fraud is also evolving. Scammers are impersonating recruiters and companies on job platforms using AI-generated profiles and interview responses.
Microsoft advises job platforms to implement multi-factor authentication for employers and adopt deepfake detection tools to counter this trend. On the tech support front, Microsoft’s security team recently uncovered a campaign by the cybercriminal group Storm-1811, which exploited the legitimate Windows Quick Assist tool to gain remote access to victims’ devices through social engineering. Though AI wasn’t used in these attacks, the campaign highlights how threat actors continue to exploit user trust.
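The multi-factor authentication Microsoft recommends for employer accounts commonly relies on time-based one-time passwords. As a minimal illustration of how such a check works (following RFC 6238, with an illustrative secret and the common 30-second step), verification can be sketched as:

```python
import base64
import hmac
import struct

# Minimal TOTP sketch (RFC 6238). The secret and parameters are illustrative;
# a production MFA system would also handle enrolment, replay, and lockout.
def totp(secret_b32: str, timestamp: int, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", timestamp // step)          # time-step counter
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, code: str, timestamp: int, step: int = 30) -> bool:
    # Accept the adjacent time steps to tolerate small clock drift.
    return any(totp(secret_b32, timestamp + d, step) == code
               for d in (-step, 0, step))
```

With the RFC 6238 test secret (ASCII "12345678901234567890" in base32), the code at timestamp 59 is "287082", matching the specification's test vectors.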
Fighting back with machine learning and design

Microsoft says it's working to embed fraud resistance directly into product development. Under its Secure Future Initiative, product teams must now assess and implement fraud controls as part of the design process. Some of the company's defences include:

- Microsoft Edge now features domain impersonation protection and a scareware blocker.
- Quick Assist alerts users with warning messages before allowing remote access.
- Remote Help, designed for enterprise environments, adds more secure alternatives for IT support.
- Digital Fingerprinting, an AI-driven tool that detects and blocks suspicious Quick Assist sessions, now stops over 4,400 such attempts daily.
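Microsoft hasn't published how Digital Fingerprinting scores sessions, but tools in this class typically combine weighted risk signals and block a session when the total crosses a threshold. A toy version, with invented signals and weights, might look like:

```python
# Hypothetical sketch of risk-scoring a remote-assistance session before it
# starts. These signals, weights, and threshold are invented for illustration;
# they are not Microsoft's actual model.
RISK_WEIGHTS = {
    "unsolicited_request": 0.45,   # the user did not initiate the contact
    "caller_claims_vendor": 0.25,  # "helper" claims to be from a vendor/bank
    "new_helper_account": 0.20,    # helper identity has little or no history
    "urgency_pressure": 0.10,      # user is pressured to act immediately
}

def session_risk(signals: dict) -> float:
    """Sum the weights of every signal that fired, clamped to [0, 1]."""
    score = sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))
    return min(score, 1.0)

def should_block(signals: dict, threshold: float = 0.6) -> bool:
    return session_risk(signals) >= threshold
```

An unsolicited request from a caller claiming to be a vendor already scores 0.70 here and would be blocked, which mirrors the social-engineering pattern Storm-1811 exploited.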
Microsoft's fraud protection systems also extend to LinkedIn, where it is rolling out fake job detection algorithms to flag suspect listings. Microsoft is also partnering with the Global Anti-Scam Alliance (GASA), which unites governments, law enforcement, and tech companies to share data and strategies.

What you can do to stay safe

Microsoft urged consumers to remain vigilant against increasingly sophisticated scams.
Some key red flags include:

- Unsolicited tech support requests, especially via phone or email.
- Job offers that require payment or originate from free email domains.
- E-commerce sites with deals that seem too good to be true.
- Deepfake interviews or chatbots with unnatural responses.
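Some of these red flags are mechanical enough to check automatically. As a toy illustration, a rule-based screen for a suspicious job offer, using invented example domains and phrases rather than any vetted blocklist, could be:

```python
import re

# Toy rule-based checker for red flags like those listed above. The domains
# and phrases are illustrative examples, not a real blocklist.
FREE_EMAIL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}
PAYMENT_PHRASES = ("processing fee", "pay to apply", "training deposit")

def job_offer_red_flags(sender_email: str, body: str) -> list:
    """Return a list of human-readable red flags found in a job offer."""
    flags = []
    domain = sender_email.rsplit("@", 1)[-1].lower()
    if domain in FREE_EMAIL_DOMAINS:
        flags.append("recruiter uses a free email domain")
    lowered = body.lower()
    if any(phrase in lowered for phrase in PAYMENT_PHRASES):
        flags.append("offer asks for an upfront payment")
    if re.search(r"guaranteed (income|job)", lowered):
        flags.append("too-good-to-be-true guarantee")
    return flags
```

Simple rules like these catch only the clumsiest scams; the AI-generated kind the report describes is exactly why platforms are moving to learned detectors instead.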
Microsoft blocks $4 billion in fraud as AI scams rise globally

Microsoft blocked over $4 billion in fraud attempts last year. Cybercriminals are using AI to run sophisticated scams; this is how Microsoft is protecting users, and how you can protect yourself too.