All posts tagged in AI Safety

  • As organizations race to adopt generative AI (genAI) technologies, many are facing unexpected financial and operational challenges due to failed projects. These failures often leave behind a trail of problematic code, unused applications, and security vulnerabilities—issues that are not always immediately visible to IT leadership. Industry experts warn that the fallout from abandoned genAI initiatives…

  • Recent research has raised concerns about the safety and robustness of popular AI chatbots, including OpenAI’s ChatGPT and Google’s Gemini. Despite ongoing safety measures, these models can still be manipulated into generating restricted or harmful responses more frequently than intended. The study highlights that cleverly crafted prompts, such as poetic verses, can bypass existing safeguards…

  • As businesses accelerate their digital transformation to foster growth and meet evolving customer expectations, cybersecurity must adapt accordingly. While technological innovations like AI offer significant benefits, they also expand potential attack surfaces, demanding a proactive and strategic security approach. Ensuring robust defenses in this rapidly changing landscape is more critical than ever.

  • Security researchers have cautioned app developers about potential vulnerabilities in Google’s newly released Antigravity tool for creating artificial intelligence agents. Despite being available for less than two weeks, the platform has already prompted updates to its known issues page following the discovery of security concerns.

  • Recent developments in the European Union’s approach to online communication monitoring have raised concerns among data privacy advocates. While the EU has announced it is abandoning its initial plans to break end-to-end encryption in messaging apps, experts warn that this shift may not fully address underlying risks for organizations operating within Europe and beyond.

  • Netarx, a leader in digital trust and enterprise security, has announced a strategic partnership with People Driven Technology (PDT), a Michigan-based organization focused on delivering measurable technology outcomes. This collaboration aims to strengthen enterprise defenses against the rising threats of AI-driven disinformation and deepfake attacks, ensuring secure and trustworthy communication channels for clients.

  • It’s astonishing how a seemingly normal day can suddenly turn alarming with just a phone call. Imagine answering your phone to hear your loved one’s voice, but something feels off. Before you realize it, your stomach tightens with worry. This is the new reality fueled by AI technology—where scammers use advanced voice cloning to imitate…

  • Recent breakthroughs in adversarial learning are transforming AI security, enabling systems to defend themselves in real time rather than relying on static measures. As AI-driven cyber threats evolve—leveraging reinforcement learning and large language models—traditional defenses struggle to keep pace. This creates significant operational and governance challenges for enterprises, as attackers employ multi-step reasoning and automated code…

  • As AI agents become increasingly autonomous in performing tasks for users, ensuring they respect privacy norms is more important than ever. Central to this effort is the concept of contextual integrity, which views privacy as the appropriateness of information flow within specific social settings. For AI systems, this means sharing only relevant information based on…

  • Cyber threats are becoming more advanced and harder to spot. The 2026 Entrust Identity Fraud Report shows a sharp increase in the use of deepfakes and social engineering tricks. Fraudsters are using artificial intelligence to create convincing fake images, videos, and messages, making it tougher for organizations to protect identities and avoid financial losses.
