Bing Chatbot Could Be a Convincing Scammer

Click the tags above for all related articles

Security researchers have noticed that by using text prompts embedded in web pages, hackers can force Bing’s AI chatbot to ask for personal information from users, turning the bot into a convincing scammer.

Let me remind you that we also recently wrote that Bing’s Built-In AI Chatbot Misinforms Users and Sometimes Goes Crazy, and also that ChatGPT Became a Source of Phishing.

Also it became known that Microsoft to Limit Chatbot Bing to 50 Messages a Day.

According to the researchers, hackers can place a prompt for a bot on a web page in zero font, and when someone asks the chatbot a question that makes it “read” the page, the hidden prompt will be activated.

The researchers call this attack an indirect prompt injection and cite the compromise of Albert Einstein’s Wikipedia page as an example. When a user asks a chatbot about Einstein, it can “read” this page and become a victim of a prompt injection, which, for example, will try to extract personal information from the user.

Click here for more information

Share
Scroll to Top