Recent research reveals that clever use of poetic prompts can sometimes bypass AI safety measures, raising concerns about potential misuse. Researchers from Icaro Lab, Sapienza University of Rome, and the Sant'Anna School of Advanced Studies tested whether poetic language could trick AI models into revealing sensitive or dangerous information. Their findings demonstrate that, under certain conditions, it can: requests reframed as poetry sometimes drew out responses that the same models refused when the request was phrased in plain prose.
