Generative AI systems such as ChatGPT and Copilot are on an unstoppable rise. In a short time, technologies that can create text and images from simple prompts and generate ideas have significantly changed how we think creatively and solve problems.
Although generative AI was initially met with some reservations, particularly over fears of job losses, it is becoming increasingly clear that the technology can complement rather than replace human skills. Generative AI is maturing and becoming more prevalent in the business world, where it is increasingly used to raise productivity, drive innovation and improve decision-making. But as with any new technology, there is a downside: the use of generative AI presents a number of challenges that companies need to address.
Impact on jobs
Without question, artificial intelligence will be able to take over repetitive, mechanical and less creative tasks completely, making jobs that consist of the same recurring tasks obsolete. Examples include recurring analyses and booking processes.
However, new jobs will also be created: demand will grow for workers who can work with AI systems, supervise them and apply them to specialized use cases. In this context, Sam Altman, CEO of OpenAI, has repeatedly emphasized the importance of human involvement: while generative AI already does "some" of the work well, a human is ultimately always required. Companies are therefore called upon to address workplace concerns openly. The use of AI offers an opportunity to counter the shortage of skilled workers effectively without laying off employees, but the resulting changes must be actively supported through further training.
Hallucinated “facts”
An AI knows neither good nor evil. Ethical questions are foreign to it; it reproduces the data on which it was trained. It can therefore pick up and even reinforce unintended biases, which makes public-facing use a risk. The extent of bias depends on the data used to train the model. Further difficulties include handling data that is sensitive under data protection law, obtaining permission for data use (copyright) and preventing content that violates applicable law (such as generated revealing images of prominent people). More generally, the invention of facts ("hallucination") remains a critical problem that current AI implementations have not yet solved.
To address these problems, it is important to train AI systems with carefully selected, reliable and diverse data. These systems should also be monitored and evaluated regularly to identify and eliminate potential biases and misinformation and to ensure that they provide reliable, impartial information. Transparency about the training data and open communication about possible AI tendencies and errors are also crucial, not only for the people using the AI but for everyone involved.
Legal requirements
Even if laws and regulations such as the recently passed EU AI Act are viewed ambivalently by the public, they will come into force and steer AI technologies toward ethical, responsible development and deployment. This may slow some developments, but that is better than leaving the field unregulated. Many companies have not yet engaged with the legal requirements at all, which can lead to unpleasant surprises if they bring an AI solution to market and only then consider the legal framework.
One solution to this problem is an open dialogue between the European Union and the relevant interest groups. Companies that develop, implement or use AI technologies should be able to assess how the adopted regulations will affect their operations. Academic institutions and experts in AI and ethics can also provide valuable insights into the technical and ethical aspects of AI regulation. Such transparent, two-way communication gives stakeholders the opportunity to contribute their expertise and improve the effectiveness of upcoming laws and regulations.
Sustainability and energy consumption
Large AI models require a lot of computing power and therefore a lot of energy. As a result, AI is often prematurely labeled an environmentally harmful technology. However, AI also has the potential to make a positive environmental contribution: generative AI can optimize processes, improve products and enable companies to deal with environmental issues proactively and efficiently. In this way, companies can reduce their ecological footprint and save costs at the same time.
Furthermore, assessing the sustainability of AI should depend not only on energy consumption, but also on how that energy is produced. Companies should prioritize green, renewable energy sources and smart applications to limit environmental impact. If AI only uses electricity generated from renewable sources, the efficiency gains achieved by AI and the reduction in resource use lead to a positive overall balance.
Conclusion
Like any new technology, the use of generative artificial intelligence requires a thorough risk analysis. In the four areas discussed (impact on jobs, ethical implications, legal requirements and environmental aspects), companies are called upon to develop and implement strategies. They can follow industry pioneers or work with a consulting company that specializes in this area. Then nothing stands in the way of productive, efficient use of AI. (Ulrich Faisst, CTO for the Central Europe region at Cognizant)
More at Cognizant.com
About Cognizant
Cognizant engineers modern businesses. We help our clients modernize technology, reimagine processes and transform experiences so they stay at the forefront of our rapidly changing world. Together, we improve everyday life.