
All posts tagged Deepfakes

  • As technology continues to evolve rapidly, businesses face an increasing array of cyber threats. From artificial intelligence to quantum computing, new risks are emerging that could threaten data security and operational integrity. Staying ahead of these dangers requires understanding how these technologies develop and how they can be exploited by malicious actors.

  • Resemble AI has secured $13 million in a new round of funding to boost its AI-powered deepfake detection tools. The investment increases its total funding to $25 million, with notable participation from firms like Berkeley CalFund, Comcast Ventures, Google’s AI Futures Fund, Sony Ventures, and Okta. This comes at a time when organizations are under

  • A recent incident in Lawrence, Kansas, highlights the growing dangers of AI-generated voices and their potential to deceive. A woman received a voicemail that sounded exactly like her mother, claiming she was in trouble. Believing it was real, she called 911, prompting police to respond swiftly. However, it was later revealed that the voice was

  • Netarx, a leader in digital trust and enterprise security, has announced a strategic partnership with People Driven Technology (PDT), a Michigan-based organization focused on delivering measurable technology outcomes. This collaboration aims to strengthen enterprise defenses against the rising threats of AI-driven disinformation and deepfake attacks, ensuring secure and trustworthy communication channels for clients.

  • It’s astonishing how a seemingly normal day can suddenly turn alarming with just a phone call. Imagine answering your phone to hear your loved one’s voice, but something feels off. Before you realize it, your stomach tightens with worry. This is the new reality fueled by AI technology—where scammers use advanced voice cloning to imitate

  • Cyber threats are becoming more advanced and harder to spot. The 2026 Entrust Identity Fraud Report shows a sharp increase in the use of deepfakes and social engineering tricks. Fraudsters are using artificial intelligence to create convincing fake images, videos, and messages, making it tougher for organizations to protect identities and avoid financial losses.

  • Biometric identity verification is evolving fast, especially as AI-generated deepfakes become more sophisticated. A leading provider, iProov, has proven its technology can meet the latest U.S. standards for secure digital identity. Their solutions focus on making sure that the person behind the screen is real and present, not a fake or a spoof.

  • Hungary is facing a new kind of political challenge that could change how we see truth online. A fake video, made with artificial intelligence, has caused a stir in Budapest. The opposition leader, Peter Magyar, is claiming it’s a complete fabrication and has announced plans to file a criminal complaint. This incident highlights how deepfake

  • YouTube has introduced a new tool to combat the rise of deepfakes. These are videos where AI makes someone’s face or voice look incredibly real, often without permission. The platform’s latest feature aims to help creators identify when their likeness is being used without their consent. It’s a step toward making online videos more trustworthy,

  • Recently, a simple social media scroll revealed how realistic AI-generated videos have become. Someone posted a clip of a friend speaking fluent Japanese at an airport. The catch? The friend doesn’t speak a word of Japanese. That’s when it became clear: the video was made using AI technology, specifically an app called Sora.
