How Deepfake Videos Are Eroding Trust in Health Advice
Technology is advancing so quickly that it's becoming easier for scammers to deceive us. Recently, scammers have begun using AI-generated deepfake videos of the late Dr. Michael Mosley, a well-known health expert, to promote fake health products. These videos appear on social media and show Mosley giving advice on topics such as menopause and inflammation, yet he never actually endorsed these claims.
The Rise of Deepfake Health Scams
The deepfakes are created by stitching together clips from Mosley's podcasts and media appearances, mimicking his voice, facial expressions, and mannerisms. They look convincingly real, leading viewers to believe they're watching the actual doctor. In reality, Mosley died last year, and these videos are entirely fabricated. Experts warn that as AI technology improves, it will become even harder to distinguish real footage from fake by sight alone.
The Danger of Misinformation
This isn't just about trickery. These fake videos push false health claims, such as beetroot gummies curing aneurysms or herbs balancing hormones, that could be dangerous if taken seriously. A dietitian pointed out that such exaggerated claims confuse people and undermine sound nutrition advice. Relying on these AI videos can lead to poor health choices, especially when they promote supplements or remedies with no scientific backing.
Regulatory and Platform Challenges
Health authorities like the UK’s MHRA are investigating these claims. Meanwhile, social media platforms are under pressure to control the spread of fake content. Despite rules against deceptive videos, platforms like Meta struggle to keep up with the volume and speed at which these deepfakes go viral. The UK’s Online Safety Act now requires platforms to actively remove illegal content, including scams and impersonations. But even with enforcement efforts, fake videos often reappear quickly after being taken down.
Widespread Impersonation and Growing Concerns
This issue isn't isolated. A recent CBS report found dozens of deepfake videos impersonating real doctors giving medical advice worldwide. In one case, a doctor discovered a deepfake promoting a product he had never endorsed. The fake video looked so authentic that viewers left comments praising the doctor, completely fooled by the deception. This shows how convincing these AI fabrications are and how easily they can spread misinformation to millions of people.
What’s most concerning is that many people trust these AI-generated videos because they look and sound real. We’ve always relied on trusted voices, like doctors and health experts, for advice. When bad actors use AI to create convincing fakes, it chips away at that trust and makes it harder for people to know what’s true. The challenge isn’t just about catching deepfakes but also about restoring confidence in reliable sources of health information.
Platforms need better tools to identify and label AI-generated content. Users should also be more cautious before sharing or believing what they see online. At the end of the day, the fight against misinformation requires both technological solutions and a more skeptical, discerning audience. Only then can we protect ourselves from falling for fake health advice that could do more harm than good.