Canadians who follow AI health advice face five times more harms

Surveys show Canadians face higher harm from AI health advice and rising AI-powered fraud losses
Canadians who follow health advice from AI are five times more likely to experience harms than those who do not, while AI‑enabled fraud is already cutting into business profits – a dual risk to both health and financial outcomes.

According to the Canadian Medical Association (CMA), most Canadians (89 percent) now go online for health information because it is faster and more convenient than trying to access care through the health system.  

The CMA says that while only 27 percent of Canadians trust AI to provide accurate health information, about half are using AI tools to diagnose or treat their health issues.  

The same survey finds that Canadians who followed health advice from AI were five times more likely to experience harms than those who did not. 

CMA president Margot Burnell says that “too many Canadians struggle to access health care when they need it.”  

She said this lack of access forces people to rely on dubious information that is already harming them, and she warned of the consequences for patients if governments fail to act now. 

The CMA’s third Health and Media Tracking Survey, conducted by Abacus Data, shows that the increase in false health information online has made 69 percent of Canadians skeptical of any health information they find online, even from sources they think they should trust.  

The CMA also reports that 85 percent of Canadians trust physicians to help them navigate health information, and that 77 percent are concerned about an increase in false health information coming from the United States.  

Nearly all Canadians surveyed believe the responsibility to curb the spread of false health information falls to social media platforms (87 percent) and governments (90 percent). 

CTV News, citing the same CMA survey, reports that nearly nine in 10 Canadians turn to the internet for health information, but many encounter false or misleading claims.  

As AI‑generated images and influencers become more sophisticated, experts warn this problem is likely to grow.  

“It’s really remarkable. They look incredibly real,” said Timothy Caulfield, a professor of law and public health at the University of Alberta, in an interview with CTV News Channel. “And keep in mind, it’s only going to get better. These images are only going to get more realistic.” 

Caulfield told CTV News Channel that research consistently shows people struggle to distinguish authentic content from AI-generated fakes, even when they believe they can.  

“We think we can tell the difference. We can’t,” he said. “And very recent studies tell us that when AI images are included in content, we’re more likely to fall for misinformation.”  

He said this is especially concerning in health care, where misleading claims can influence real‑world decisions, and noted that AI‑generated “influencers” are increasingly styled as wellness coaches, spiritual figures or medical professionals while promoting questionable supplements or treatments.

“There’s a real irony,” he said. “It’s a robot trying to look folksy and authentic in order to sell you stuff.” 

His core advice remains: “Go see a licensed health professional.” 

On the corporate side, KPMG Canada reports that AI fraud is quickly emerging as a major threat to Canadian organizations.  

Nearly three‑quarters (72 percent) of surveyed business leaders say they lost between one and five percent of their annual profits to AI‑powered fraud attacks in the past 12 months.  

KPMG says 81 percent of businesses that experienced fraud in the past year faced an AI‑enabled attack, and 72 percent of those organizations were targeted more than once.  

As a result, 94 percent of business leaders say they are concerned about AI‑powered attacks in the year ahead, yet only 26 percent say they have a formal, comprehensive and tested fraud incident response plan that explicitly covers AI‑enabled attacks such as deepfakes and voice clones. 

“AI‑powered fraud is changing the ground rules,” says Myriam Duguay, partner and national leader of Forensic Investigation, Integrity and Dispute Services at KPMG Canada.  

She warns that beyond the financial hit, “the reputational fallout from a fraud attack can be devastating.” 

KPMG notes that the most common AI fraud schemes are AI‑generated phishing emails or chats (60 percent), deepfake or manipulated documents (39 percent) and voice‑clone executive impersonation calls (24 percent).  

In response, more than half (52 percent) of organizations say they are “fighting AI with AI” by using the technology to identify anomalies, authenticate users and detect manipulated content, while most plan to increase budgets for detection technology, employee training and transaction controls.