Jailbreaking AI-based chatbots
SlashNext, the cybersecurity company that exposed WormGPT, has published a blog post describing the strategies cybercriminals use to "jailbreak" popular AI chatbots such as ChatGPT. Jailbreaking refers to tactics that circumvent the safety limits that companies impose on their chatbots. SlashNext researchers found that cybercriminals do more than share successful jailbreaks on discussion forums to make them accessible to others: developers are also promoting AI bots built for criminal purposes, claiming these are custom large language models (LLMs). SlashNext has confirmed that this is in…