News


How ChatGPT will support cybercriminals

The OpenAI chatbot ChatGPT shows how directly artificial intelligence and machine learning can shape life and everyday routines. Experienced IT users will put such tools to work for their own purposes, and unfortunately that includes cybercriminals. OpenAI's ChatGPT model is based on unsupervised learning, an ML approach in which an AI model is fed a large set of unlabeled data. Its vast corpus of books, articles and websites draws on pre-2021 sources and has no connection to the live internet. But that is already enough to learn the structures of natural language and to deceptively…
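The core idea in this teaser is that a language model learns structure from unlabeled text alone. A minimal, illustrative sketch of that principle in Python, using a toy next-word bigram model on an invented three-sentence corpus (this is not OpenAI's training code, just the unsupervised-learning idea in miniature):

    from collections import defaultdict, Counter
    import random

    # Tiny unlabeled "corpus" - a stand-in for the books, articles and
    # websites a large model is trained on (invented for illustration).
    corpus = (
        "security teams analyse phishing mails. "
        "phishing mails imitate trusted senders. "
        "trusted senders rarely ask for passwords."
    )

    # Unsupervised step: count which word follows which. No labels are
    # involved; the structure is learned from the raw text itself.
    tokens = corpus.lower().replace(".", " .").split()
    bigrams = defaultdict(Counter)
    for current_word, next_word in zip(tokens, tokens[1:]):
        bigrams[current_word][next_word] += 1

    def generate(start: str, length: int = 8) -> str:
        """Sample a short continuation from the learned statistics."""
        word, output = start, [start]
        for _ in range(length):
            followers = bigrams.get(word)
            if not followers:
                break
            word = random.choices(list(followers), weights=list(followers.values()))[0]
            output.append(word)
        return " ".join(output)

    print(generate("phishing"))

Large language models such as ChatGPT replace these simple counts with billions of learned parameters and predict tokens rather than whole words, but the principle the teaser describes, learning the structure of language from a large unlabeled corpus, is the same.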

Read more

Fake ChatGPT apps as an attack vector
B2B Cyber Security ShortNews

The hype surrounding ChatGPT is gigantic, and the online app is seeing a rapid increase in user numbers. So it is no big surprise that hackers are exploiting this hype for their malicious campaigns. As with the hype phenomenon Pokémon Go, cybercriminals are now using fake versions of ChatGPT to spread malware for Android and Windows. Consumers should therefore be very careful and avoid any offer that sounds too good to be true. ChatGPT is an online-only tool and is available solely at chat.openai.com. Currently there are no mobile or desktop apps for any...

Read more

Is ChatGPT changing the future of IT security?

OpenAI's ChatGPT software is making waves. With the help of AI, the chatbot answers a wide variety of questions very eloquently. So it should come as no surprise that, as with every new technology, criminals are already thinking about how to use these capabilities for their own ends. A joint study by Europol, UNICRI and Trend Micro investigated exactly that. According to its findings, the OpenAI software could be used to generate even more convincing social engineering lures such as phishing or BEC messages. Other security researchers have meanwhile examined fraudulent emails generated with the currently hyped AI, and…

Read more

Romance scams abuse trust

Romance scams are on the rise, and Valentine's Day is just one occasion on which these scams can increase significantly. The reality is that scammers prey on people's search for real connections and abuse the currency of trust. Reports from around the world show a similar trend: a rise in romance scams resulting in losses of millions of dollars. The Federal Trade Commission reports that individuals have lost a staggering $1.3 billion to romance scams over the last five years. There is some positive news, as the National Police are involved in one of the largest...

Read more

Russian hackers want to use ChatGPT for attacks

Customer access is required for full use of OpenAI's AI system ChatGPT. Russian hackers are currently looking for ways to bypass these access controls in order to use ChatGPT for their malicious goals, and they are far from alone in that, as conversation notes from the dark web show. Check Point Research (CPR) is monitoring attempts by Russian hackers to bypass OpenAI's restrictions and use ChatGPT for malicious purposes. In underground forums, hackers are discussing how to circumvent controls on IP addresses, payment cards and phone numbers - all necessary to access ChatGPT from Russia...

Read more

AI ChatGPT as a cybercriminal

Since ChatGPT's furious launch, millions of people have been using the artificial intelligence to get travel tips or have scientific connections explained, and they are not the only ones: security researchers and cybercriminals are also trying to figure out how the tool can be used for cyber attacks. In principle, the software should not recommend criminal acts. White hat hacker Kody Kinzie tested how this can nevertheless be done and where the limits of the AI lie. Illegal and unethical: the test starts with a simple question, "How can I hack a certain company?" The chatbot seems to be trained for requests of this kind, because in...

Read more

Chatbots: Only machines can help against machines

Chatbots like ChatGPT are on the rise: artificial intelligence has to cope with natural ignorance, and increasingly, intelligent machines are needed to detect when other machines are trying to deceive users. A comment from Chester Wisniewski, cybersecurity expert at Sophos. The AI-based chatbot ChatGPT is making headlines worldwide, and alongside reports from the stock market and copyright spheres, IT security is also a focus of the discussion. Because the recently achieved broader availability of the tool, despite all the manufacturer's security efforts, brings with it new challenges when it comes to phishing bait or…

Read more

ChatGPT: AI-designed malicious emails and code
B2B Cyber Security ShortNews

Check Point's security research department warns that hackers could use OpenAI's ChatGPT and Codex to launch targeted and efficient cyberattacks. The AI can create phishing emails and generate dangerous VBA code for Excel files. In experimental correspondence, Check Point Research (CPR) tested whether the chatbot could be used to create malicious code for initiating cyber attacks. ChatGPT (Generative Pre-trained Transformer) is a free-to-use AI chatbot that can provide its users with contextual answers based on data found on the internet. Codex, on the other hand, is an OpenAI…
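The teaser describes CPR probing the chatbot through "experimental correspondence". As a hedged illustration of what such an API round trip looks like, here is a benign sketch assuming the pre-1.0 openai Python package and a generic chat model; the article names neither, and no malicious prompt is reproduced:

    # Sketch of an "experimental correspondence" request to the OpenAI API.
    # Assumptions: the pre-1.0 `openai` package (pip install "openai<1.0"),
    # an assumed generic chat model, and an OPENAI_API_KEY environment variable.
    # The harmless prompt stands in for CPR's tests, which are not shown here.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model; the article does not name one
        messages=[
            {"role": "system", "content": "You are a helpful writing assistant."},
            {"role": "user", "content": "Draft a short, friendly reminder email "
                                        "about an upcoming team meeting."},
        ],
    )

    # Researchers repeat this request/response loop with varied prompts to see
    # whether the guardrails block requests for phishing text or harmful code.
    print(response["choices"][0]["message"]["content"])

The point of the sketch is only that the same programmatic interface serves any caller; whether the output is a harmless reminder or a phishing lure depends entirely on the prompt and on how well the provider's filters hold.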

Read more