Title:AI Agents


Authors:Aabhas Nograiya, Saksham Jain, Pratham Sahu, Paras Bhanopiya


Published in: Volume 3 Issue 1 Jan June 2026, Page No.258-261


Keywords:AI Agents, LLM Priming, AI Baiting, Invisible Text Manipulation, DOM Sanitization, CSS-Based Hidden Instructions, Web Parsing Vulnerabilities, Ethical AI Usage.


Abstract:Artificial Intelligence agents are increasingly being used to automate tasks and make independent decisions. However, this research shows that these systems can be influenced through hidden instructions that are not visible to human users. AI agents read the underlying HTML structure of a webpage, which allows attackers to embed invisible text that can redirect the AI’s reasoning. This method, known as LLM Priming or AI Baiting, can subtly change the AI’s output without the user realizing it. To test this, hidden text was inserted into a sample product webpage. The AI consistently recommended the product containing the invisible message, proving that its judgment can be manipulated through non-visible data. To reduce this vulnerability, a practical defense approach called DOM Sanitization with Basic CSS Filtering is proposed, which removes hidden elements before the AI reads the page. This study highlights the need for awareness and responsible implementation when deploying AI agents in real-world systems


Download PDF