“Reflected XSS”-like attack on chatbot AI

This is a theoretical question. I just watched a video in which the author apparently unmasks a chatbot on a social network, one that seems to be harvesting data and spreading influence in a cult-like manner. The video is hosted on YouTube: Who or What is Stephanie Lawson Stevens?

What really caught my attention is that at some point one of the investigators, who has an IT background, starts asking the user questions (up to that point the account was assumed to be a person, not an AI) and infers from the answers that it might actually be a chatbot. The video linked above shows the process.

I’m no AI specialist, but it made me wonder: the AI reads information from a text input (the chat box). Would it be theoretically possible for a user to craft a message that, once read by the AI, acts like a reflected XSS attack on the AI itself (the same way an attacker can reflect code that ends up being executed by a victim’s browser)?
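For reference, this is the kind of reflected XSS I have in mind on the web side. It's a minimal sketch assuming Flask; the endpoint and parameter names are made up for illustration:

```python
# Minimal reflected-XSS demo (hypothetical endpoint; requires Flask).
from flask import Flask, request

app = Flask(__name__)

@app.route("/greet")
def greet():
    # UNSAFE: the query parameter is echoed into the HTML response
    # without escaping, so a crafted link like
    #   /greet?name=<script>alert(1)</script>
    # runs the attacker's script in the victim's browser.
    name = request.args.get("name", "stranger")
    return f"<h1>Hello, {name}!</h1>"

# Safe variant: escape user input before reflecting it, e.g.
#   from markupsafe import escape
#   return f"<h1>Hello, {escape(name)}!</h1>"
```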

In this scenario, if the AI’s programming doesn’t take the necessary precautions to sanitize the text input, the AI might read a snippet of code from the chat box and end up executing it.
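To make the scenario concrete, here is a deliberately insecure toy bot in plain Python. The `[[run: ...]]` marker syntax is something I invented for illustration, not anything a real chatbot uses:

```python
import re

def insecure_bot_reply(message: str) -> str:
    """Toy bot that UNSAFELY executes code embedded in chat text.

    If a message contains [[run: ...]], the snippet is passed to
    eval() -- the chat equivalent of reflecting unsanitized input.
    """
    match = re.search(r"\[\[run:(.*?)\]\]", message)
    if match:
        # UNSAFE: user-controlled text crosses from the data channel
        # into the code channel, just like reflected XSS in a browser.
        return str(eval(match.group(1)))
    return f"You said: {message}"

def safer_bot_reply(message: str) -> str:
    """Treats the entire message as inert data; nothing is executed."""
    return f"You said: {message}"

if __name__ == "__main__":
    crafted = "hi [[run:__import__('os').getcwd()]]"
    print(insecure_bot_reply(crafted))  # executes the attacker's snippet
    print(safer_bot_reply(crafted))     # just echoes the text back
```

Obviously no sensible bot calls `eval()` on chat text; my question is whether a system that "interprets" incoming messages could suffer an analogous, less obvious confusion between data and instructions.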

Is this an actual possibility?