LLMs easily exploited using run-on sentences, bad grammar, image scaling
A series of vulnerabilities recently revealed by several research labs indicates that, despite rigorous training, high benchmark scores, and claims that artificial general intelligence (AGI) is right around the corner, large language models (LLMs) remain quite naïve and easily confused in situations where human common sense and healthy suspicion would prevail.
For example, new research has shown that LLMs can be persuaded to reveal sensitive information through prompts written as run-on sentences with little or no punctuation, like this: The trick is to give a really long set of instructions without punctuation or most especially not a period or full stop that might imply the end of a sentence because by this point in the text the AI safety rules and other governance systems have lost their way and given up
Models are also easily tricked by images containing embedded messages that go completely unnoticed by human eyes: a payload can be invisible in a full-resolution image yet become legible to the model once the image is automatically downscaled.
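The image-scaling trick can be illustrated with a toy sketch (a minimal, hypothetical demo in Python with NumPy; the function names and noise values are illustrative, not from any of the cited research): only the pixels that survive nearest-neighbor downsampling carry the hidden pattern, so the full-size image looks like benign noise while the downscaled copy reveals the payload.

```python
import numpy as np

def embed_hidden(big_shape, small_pattern):
    """Toy image-scaling attack: plant a payload in the pixels that
    a nearest-neighbor downscaler will sample, and fill the rest of
    the image with mid-gray noise that looks benign at full size."""
    H, W = big_shape
    h, w = small_pattern.shape
    rng = np.random.default_rng(0)
    big = rng.integers(100, 156, size=(H, W))  # innocuous-looking noise
    # The sampling grid a nearest-neighbor resize would use:
    ys = (np.arange(h) * H) // h
    xs = (np.arange(w) * W) // w
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            big[y, x] = small_pattern[i, j]    # plant one payload pixel
    return big

def downscale_nn(big, h, w):
    """Nearest-neighbor downscale, as an image-ingestion pipeline might do."""
    H, W = big.shape
    ys = (np.arange(h) * H) // h
    xs = (np.arange(w) * W) // w
    return big[np.ix_(ys, xs)]

pattern = np.array([[0, 255], [255, 0]])       # the "hidden message"
big = embed_hidden((64, 64), pattern)          # looks like noise to a human
recovered = downscale_nn(big, 2, 2)
assert np.array_equal(recovered, pattern)      # payload reappears after resizing
```

Real attacks are subtler (the full-size image shows a plausible picture rather than noise, and the payload is rendered text), but the mechanism is the same: the model sees the resized image, the human reviews the original.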
Original Link: https://www.csoonline.com/article/4046511/llms-easily-exploited-using-run-on-sentences-bad-grammar-image-scaling.html
Originally Posted: Wed, 27 Aug 2025 03:25:02 +0000
What do you think?
We'd like to hear your opinion. Leave a comment.