Risks from generative AI

Risks from generative AI

Share post

Generative AI systems like ChatGPT and Copilot have started their triumphant advance and will no longer be able to be stopped. Technologies that can independently create text and images using simple prompts and generate ideas have significantly changed the way we think creatively and solve problems in a short period of time.

Although (generative) AI was initially met with some reservations - particularly when it comes to job losses - it is becoming increasingly clear that this technology can complement, rather than replace, human skills. Generative AI is evolving and becoming more prevalent in the business world. Technology is increasingly being used to increase productivity, drive innovation and optimize decision-making. But as with any new technology, there are not only positive sides. The use of generative AI presents a number of challenges that companies need to address.

Impact on jobs

Without question, the use of artificial intelligence will be able to completely take over repetitive, mechanical and less creative tasks. This makes jobs that always involve the same tasks obsolete. This includes, for example, recurring analyzes and booking processes.

However, new jobs will also be created: the demand for workers who can work specifically with AI systems, control them and use them for special cases will increase. In this context, Sam Altman, CEO of OpenAI, always emphasizes how important human interaction is. Altman says that while generative AI does “some” of the work well today, it ultimately always requires a human. Companies are therefore required to address workplace issues openly. The use of AI offers the opportunity to effectively address the shortage of skilled workers without having to lay off employees. However, it is necessary to actively support changes through further training.

Hallucinated “facts”

An AI knows neither good nor evil. Ethical questions are foreign to her; she regurgitates the data with which she was trained. It can therefore pick up on and even reinforce unintentional biases – making public use a risk. The extent of bias in AI depends on the data used to train the AI ​​model. It is also difficult to use data that is sensitive under data protection law, to obtain permission for data use (copyright) and to create content that violates applicable law (such as generated revealing images of prominent people). In general, making up facts (“hallucinating”) is also a critical problem that current AI implementations have not yet solved.

To address these types of problems, it is important to train AI systems with carefully selected, reliable and diverse data. Additionally, these systems should be regularly monitored and evaluated to identify and eliminate potential biases and misinformation and to ensure that AI systems provide reliable and impartial information. In addition, the transparency of the training data and open communication about possible AI tendencies and errors are crucial - not only for the people using the AI, but for everyone involved.

Legal requirements

Even if the introduction of laws and regulations such as the recently passed EU AI Act is viewed ambivalently by the public, they will come into force and lead to the ethical and responsible development of AI technologies and their implementation. This may slow down some developments - but it is better than if this area remained unregulated. Many companies have not yet dealt with the legal regulations at all, which can lead to unpleasant surprises if they want to bring an AI solution onto the market and only then consider the legal framework.

A solution to this problem is an open discussion between the European Union and the interest groups. Companies developing, implementing or using AI technologies should be able to assess how the adopted regulations would affect their operations. Academic institutions and experts in AI and ethics can also provide valuable insights into the technical and ethical aspects of AI regulations. This transparent, two-way communication also gives stakeholders the opportunity to contribute their expertise and insights to improve the effectiveness of upcoming laws and regulations.

Sustainability and energy consumption

Large AI models require a lot of computing power and therefore a lot of energy. Therefore, AI is often prematurely labeled as an environmentally harmful technology. However, AI also has the potential to make a positive environmental contribution. Generative AI can optimize processes, improve products and enable companies to proactively and efficiently deal with environmental issues. In this way, companies can reduce their ecological footprint and save costs at the same time.

Furthermore, assessing the sustainability of AI should depend not only on energy consumption, but also on how that energy is produced. Companies should prioritize green, renewable energy sources and smart applications to limit environmental impact. If AI only uses electricity generated from renewable sources, the efficiency gains achieved by AI and the reduction in resource use lead to a positive overall balance.

Conclusion

Like any new technology, the use of generative artificial intelligence also requires a thorough risk analysis. In the four areas mentioned: “Impact on jobs”, “Ethical implications”, “Legal framework” and “Environmental aspects”, companies are required to develop and implement strategies. You can follow pioneers in the industry or work with a consulting company that specializes in this area. Then nothing stands in the way of productive and efficient use of AI. (Ulrich Faisst, CTO for the Central Europe region at Cognizant)

More at Cognizant.com

 


About Cognizant

Cognizant develops modern companies. We help our customers modernize technologies, redesign processes and transform experiences – so they stay at the forefront of our rapidly changing world. Together we improve everyday life.


 

Matching articles on the topic

IT security: NIS-2 makes it a top priority

Only in a quarter of German companies do management take responsibility for IT security. Especially in smaller companies ➡ Read more

Cyber ​​attacks increase by 104 percent in 2023

A cybersecurity company has taken a look at last year's threat landscape. The results provide crucial insights into ➡ Read more

Mobile spyware poses a threat to businesses

More and more people are using mobile devices both in everyday life and in companies. This also reduces the risk of “mobile ➡ Read more

Crowdsourced security pinpoints many vulnerabilities

Crowdsourced security has increased significantly in the last year. In the public sector, 151 percent more vulnerabilities were reported than in the previous year. ➡ Read more

Digital Security: Consumers trust banks the most

A digital trust survey showed that banks, healthcare and government are the most trusted by consumers. The media- ➡ Read more

Darknet job exchange: Hackers are looking for renegade insiders

The Darknet is not only an exchange for illegal goods, but also a place where hackers look for new accomplices ➡ Read more

Solar energy systems – how safe are they?

A study examined the IT security of solar energy systems. Problems include a lack of encryption during data transfer, standard passwords and insecure firmware updates. trend ➡ Read more

Shadow IT becomes Shadow AI

The path for users to AI is very short; entry is gentle, easy and often free of charge. And that has ➡ Read more