News


Risks from increasing use of artificial intelligence

A report shows that 569 TB of corporate data is being shared with AI tools, underlining the importance of better data security. Enterprise AI/ML transactions increased from 521 million per month in April 2023 to 3.1 billion in January 2024. Manufacturing generates the most AI traffic in the Zscaler Cloud, at 21 percent of all AI transactions, followed by finance and insurance (14 percent) and services (13 percent). The most popular enterprise AI/ML applications by transaction volume are ChatGPT, Drift, OpenAI, Writer, and LivePerson. The five countries with the most enterprise AI transactions are the USA, India, Great Britain,…


One year of ChatGPT
B2B Cyber Security ShortNews

The current boom in artificial intelligence is driving a sharp increase in demand in the German market. Spending on AI software, services and related hardware is expected to rise to 6.3 billion euros this year. That corresponds to an increase of 32 percent over 2022, when 4.8 billion euros were spent on artificial intelligence. The digital association Bitkom reports this based on data from the market research company IDC. “The launch of ChatGPT a year ago was an initial spark for the use of AI. ChatGPT showed many people for the first time what AI can do today...


ChatGPT: Risks of professional use
Kaspersky_news

Many Germans use ChatGPT in their everyday professional lives, which can jeopardize the security of sensitive data. According to a representative survey, almost half (46 percent) of working people in Germany use ChatGPT in their everyday work. The popularity of generative AI services and large language models (LLMs) raises the question for companies of how far they can trust language models with sensitive company data. Kaspersky experts have identified the following data protection risks of professional ChatGPT use. Data leak or hack at the provider: although LLM-based chatbots are operated by large tech companies, they are not immune to hacking attacks or accidental data leaks. There was already one…


Jailbreaking AI-based chatbots
B2B Cyber Security ShortNews

The cybersecurity company behind the exposure of WormGPT has published a blog post describing the strategies cybercriminals use to “jailbreak” popular AI chatbots like ChatGPT — that is, tactics that circumvent the security limits companies impose on their chatbots. SlashNext researchers have found that cybercriminals don't just share their successful jailbreaks on discussion forums to make them accessible to others. Developers are also promoting AI bots that can be used for criminal purposes, claiming these are custom large language models (LLMs). SlashNext has confirmed that this is in…


Artificial intelligence in IT
B2B Cyber Security ShortNews

The year 2023 could go down in history as the year of artificial intelligence (AI), or at least the year that businesses and consumers alike raved about generative AI tools like ChatGPT. IT security solution providers are not immune to this enthusiasm. At the RSA Conference 2023, one of the leading international conferences in the field of IT security, AI was addressed in almost every presentation, and for good reason: AI has enormous potential to transform the industry. Our security researchers have already identified the use of AI by hackers...


Good AI, bad AI
B2B Cyber Security ShortNews

According to security researchers in Israel, hacker gangs are already using artificial intelligence (AI) to train their new members. AI also helps them improve malware, automate attacks and generally run their criminal operations smoothly. On the other side, the management consultancy McKinsey found in a 2022 survey that 50 percent of the companies surveyed use AI to support work in at least one business area, up from only 20 percent in 2017. According to Forbes Magazine, 60 percent of the entrepreneurs surveyed believe…


Compliance with AI in the company

The almost limitless possibilities of LLMs (large language models) are incredibly exciting. Every month brings new uses for these tools, and they are not always compatible with corporate compliance policies. The possibilities range from quickly and inexpensively creating thumbnails for blog posts, to simulating readers who fundamentally oppose and criticize a blog post, to coherently translating blog posts into other languages while fully understanding the content. Even in the world beyond blog posts, this nascent tool holds tremendous…


EU Parliament: New AI law should regulate its use

The EU's new AI law is intended to ensure the controlled use of the all-purpose tool of artificial intelligence (AI). The technology has great uses, of course, but there have been numerous examples of unethical applications, including the misuse of deepfake technology, as well as other dangerous AI-related incidents involving privacy, fraud and information manipulation. These cases have shown that AI is not a technology that can be regulated by law retrospectively. The approval of this draft by the EU Parliament creates a solid basis for the future…


Accomplice AI: Theft of identity data

Identity data has always been among cybercriminals' favorite loot. It can be used to initiate account compromises and commit identity fraud. Now ChatGPT & Co. are also helping craft perfect phishing emails. A statement from Dirk Decker, Regional Sales Director DACH & EMEA South at Ping Identity. Attackers usually rely on social engineering and phishing. The success rate of such attacks, mostly based on sheer mass, is limited. Individualized emails and messages tailored to a victim offer significantly higher success rates, but also require significantly more work...
