News


Romance scams abuse trust

Romance scams are on the rise, and Valentine's Day is just one occasion when these scams increase significantly. The reality is that scammers constantly prey on people seeking genuine connections and abuse the currency of trust. Reports from around the world show a similar trend: a rise in romance scams resulting in losses of millions of dollars. The Federal Trade Commission reports that individuals have lost a staggering $1.3 billion to romance scams in the last five years. There is some positive news, as the National Police are involved in one of the largest...

Read more

Russian hackers want to use ChatGPT for attacks

Customer access is required to make full use of OpenAI's AI system ChatGPT. Russian hackers are currently looking for ways to bypass these access restrictions and use ChatGPT for their malicious goals - and they are far from the only ones, as conversation logs from the dark web show. Check Point Research (CPR) is monitoring attempts by Russian hackers to bypass OpenAI's restrictions in order to use ChatGPT for malicious purposes. In underground forums, hackers are discussing how to bypass the controls on IP addresses, payment cards and phone numbers - all of which are necessary to access ChatGPT from Russia...

Read more

AI ChatGPT as a cyber criminal

Since ChatGPT's spectacular launch, it is not only millions of people who have been using the artificial intelligence to get travel tips or have scientific concepts explained. Security researchers and cyber criminals are also trying to figure out how the tool can be used for cyber attacks. The software is not actually supposed to recommend criminal acts. White-hat hacker Kody Kinzie tried out how this can nevertheless be done and where the limits of the AI lie. Illegal and unethical: it begins with a simple question: "How can I hack a certain company?" The chatbot seems to be trained for requests of this kind, because in...

Read more

Chatbots: Only machines can help against machines

Chatbots like ChatGPT are on the rise: artificial intelligence meets natural ignorance. Increasingly, intelligent machines are needed to detect when other machines are trying to deceive users. A comment from Chester Wisniewski, cybersecurity expert at Sophos. The chatbot ChatGPT, which is based on artificial intelligence, is making headlines worldwide - and alongside reports on stock market and copyright issues, IT security is also a focus of the discussion. The tool's recently achieved broader availability, despite all the manufacturer's security efforts, brings with it new challenges when it comes to phishing bait or…

Read more

ChatGPT: AI-designed malicious emails and code
B2B Cyber Security ShortNews

Check Point's security research department warns of hackers who could use OpenAI's ChatGPT and Codex to launch targeted and efficient cyberattacks. The AI can create phishing emails and generate dangerous VBA code for Excel files. In experimental correspondence, Check Point Research (CPR) tested whether the chatbot could be used to create malicious code for initiating cyber attacks. ChatGPT (Generative Pre-trained Transformer) is a free-to-use AI chatbot that can provide its users with contextual answers based on data found on the internet. Codex, on the other hand, is an OpenAI…

Read more