PR and AI: between technology hype and ethics debate

Self-driving cars, personalised recommendation systems, ChatGPT, Microsoft Copilot: artificial intelligence (AI) has long since established itself as a versatile solution across industries and areas of life. The technology is far more than a trend; it is considered an important innovation with great potential.

Faster, more efficient, more data-driven work: AI brings undeniable benefits to the communications and public relations (PR) profession. But it also carries risks, especially around security and ethics.

The potential dangers around privacy and data protection have already been widely discussed. My colleague Daniel Junglas summarised how we handle them when testing AI tools for copywriting:

"If I use GenAI tools professionally, I first assess how confidential the data and information to be entered is. Sensitive data has no place in such an input field, because every query usually trains the tools further - the data could therefore reappear at any time in another dialogue with third parties. In addition, the legal situation regarding copyright has still not been conclusively clarified. This is another reason not to use AI output thoughtlessly and unchanged."

Daniel Junglas, Associate Director

What applies to data protection also applies to ethics: it is essential to consider both when using AI for communication and PR. However human some AI systems may already appear, they have no human morals or ethics. They make decisions solely on the basis of data and algorithms. And because these systems are trained by humans, unconscious biases and discrimination cannot be ruled out.

So how confidently can we really use AI in PR and corporate communications? How do we deal with the ethical challenges?

The path to trust: Ethical principles in PR 

In addition to legal requirements, communication work in PR is subject to ethical standards. The communication code of the DRPR (German Council for Public Relations), which sets out the guidelines for PR, offers orientation: public relations work is based on principles such as transparency, fairness, truthfulness and a sense of responsibility.

By aligning ourselves with these principles, we as an agency ensure that communication between our clients and the public takes place in a trustworthy manner. This credibility underpins long-term relationships with target groups, media and stakeholders.

There is a diverse team behind every fair AI

An example from the field of facial recognition illustrates this: while working with facial recognition software, MIT master's student Joy Buolamwini discovered that it had trouble recognising her face. Her tests revealed that the software works well for white men but considerably less well for Black women. The reason is that an above-average number of white men are involved in developing such AI, resulting in algorithmic bias: certain characteristics, such as skin tones or facial features, are not adequately taken into account because they do not appear often enough in the training data the development teams use.

Facial recognition using AI can be a problem if development teams and training data are not diverse. 

We can learn from such scenarios and use more diverse data sets and teams. However, we cannot guarantee that the AI will then work entirely without prejudice. After all, there are always people behind every AI who train it - and people are never completely unbiased.
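The mechanism behind such bias can be sketched in a toy simulation. The numbers and the single-feature "match score" below are entirely hypothetical; the point is only that a decision threshold tuned on data dominated by one group performs noticeably worse on an underrepresented group whose scores are distributed differently:

```python
import random

random.seed(42)

def make_samples(n_pos, n_neg, pos_mean, neg_mean, sd=0.1):
    """Generate (score, label) pairs: label 1 = correct match, 0 = non-match."""
    pos = [(random.gauss(pos_mean, sd), 1) for _ in range(n_pos)]
    neg = [(random.gauss(neg_mean, sd), 0) for _ in range(n_neg)]
    return pos + neg

# Hypothetical assumption: the feature extractor yields lower match scores
# for group B's true matches (0.55) than for group A's (0.8), and group B
# is heavily underrepresented in the training set.
group_a_train = make_samples(500, 500, pos_mean=0.8, neg_mean=0.3)
group_b_train = make_samples(10, 10, pos_mean=0.55, neg_mean=0.3)
train = group_a_train + group_b_train

# "Training" here is just picking the threshold halfway between the mean
# positive and mean negative score - dominated by group A's distribution.
pos_scores = [s for s, y in train if y == 1]
neg_scores = [s for s, y in train if y == 0]
threshold = (sum(pos_scores) / len(pos_scores)
             + sum(neg_scores) / len(neg_scores)) / 2

def accuracy(samples):
    correct = sum(1 for s, y in samples if (s > threshold) == (y == 1))
    return correct / len(samples)

group_a_test = make_samples(200, 200, pos_mean=0.8, neg_mean=0.3)
group_b_test = make_samples(200, 200, pos_mean=0.55, neg_mean=0.3)

print(f"accuracy on group A: {accuracy(group_a_test):.2f}")
print(f"accuracy on group B: {accuracy(group_b_test):.2f}")
```

Because the threshold lands close to group B's positive-score mean, roughly half of group B's true matches fall below it, while group A is classified almost perfectly. Adding more group B training data pulls the threshold down and narrows the gap - the simulated counterpart of more diverse training sets.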

Trust is good, thinking for yourself is better

It is therefore important that we do not trust AI blindly - and we must keep reminding ourselves of this. A major risk when using AI is automation bias: the tendency of people to over-trust the recommendations of automated systems and to overlook their errors, even obvious ones.

An example from everyday life: the GPS in our car tells us to turn left - and we usually do so without thinking about it. If we stop paying attention to the road, we may end up on a dangerous route or in the nearest lake. Even though the system works from data, it is better to look at the road and assess the situation ourselves. When using AI, we must likewise remember to question the output and, if necessary, correct it.

The use of AI in PR: humans must have the final say

For us as communication professionals, this means we have to keep reminding ourselves how to handle AI systems in an ethically sound manner. When evaluating results, we must always bear in mind that they may rest on biased data and may contain errors. It helps to think for yourself, rely on your own experience and knowledge, consult experts and check sources.

This is the value of our work: it makes the people in our agency indispensable and AI a useful tool. As the Oseon management's credo puts it:

“For us, AI is a toolbox, not an independent employee. The success of our service still depends on our industry expertise, the experience of our employees and the agency's specific subject knowledge.” 

Tapio Liller, founder and managing partner

Ultimately, it is up to us how we use this technology and how we ensure that it aligns with ethical principles. In the future, we will therefore have to engage more deeply with questions of ethics and the correct use of AI. Only then can we ensure the positive development of the PR industry in an increasingly digital world.

Images: Canva