Disinformation in Media
In the final section of our Co-Creation series, we look at another important topic: how Artificial Intelligence will impact the level of disinformation in the media. Analysing this topic was uniquely challenging for the Co-Creation workshops, as the media ecosystem hosts a wide range of individuals in very different roles (mainly journalists and fact-checkers), yet all shared a common concern about the risks of AI in their sector. The general perception of AI was more sceptical in the media than in healthcare and port management, not only because of the risks posed, but also because there was less confidence in the beneficial potential of the new technology.
Unlike in other sectors, participants from the media did not think it likely that they could be replaced by AI, as they doubted how widely the technology could be applied to their field, thereby removing a very common concern. Nevertheless, there was hope that AI could function as a vital aid in their day-to-day work. As AI in the field is currently only capable of processing and analysing data, and not of collecting it, it cannot yet replicate the functions of a journalist or fact-checker. This means that journalists will still be needed to obtain information from the real world, but Artificial Intelligence could assist in transforming unorganised information into a coherent story, or provide relevant information to fact-checkers during investigations. This reduces the likelihood that human agency will be thoughtlessly abandoned as a result of the incorporation of AI.
Responsibility
Trust in journalism and fact-checking is only as strong as the integrity of the professionals, who need to maintain a track record of reliability, objectivity and the utmost consideration for the truth at all times. This also requires professionals to take responsibility for any work produced under their name; were that responsibility simply offloaded onto a machine, the integrity of their profession would be seriously undermined. AI cannot be successfully implemented in the media environment without adequate consideration of the appropriate delegation of responsibility between the human and the machine, and complete oversight of all AI activity.
As in healthcare and port management, participants in the media sector insisted that AI should support their work and not replace it. Not only may its analysis be incorrect, but participants observed that current AI systems can fabricate information or rely on a narrow pool of knowledge, which would be disastrous for their sector if it continued. However, given the nature of their work, there was less fear that this outcome could actually materialise in their sector.
Participants also expressed concern about the responsibility of developers with regard to AI, in particular how they could fail, deliberately or not, to remove biases from their tools, and how this could impact the objectivity of journalism. To mitigate this problem, it was suggested that there be full transparency surrounding the development and manufacturing of AI, though more action would still be required to properly counteract the risk.
Regardless of its function, it was highlighted that the effectiveness of an AI tool depends on the individual user, who must have adequate training and expertise to use it correctly.
Transparency and Accuracy
Just as transparency is vital for journalists and fact-checkers, so too must it be for AI tools used in the media, in order to maintain the integrity of the profession. AI systems must be transparent about, first, where they derive their information from and, second, the lines of reasoning that guide their conclusions. Additionally, AI systems could not be fact-checked if they were not transparent about the justification for what they distribute, calling into question the accuracy of their output. There is a fear that disinformation in the media sector could rise as a result of AI if human actors abdicated their responsibility to oversee its reliability.
As mentioned when discussing responsibility, transparency by developers and manufacturers could significantly increase trust in AI tools. However, participants also expressed a need for transparency with regard to the data used to train those tools.
Contextual Nuances
Several participants across multiple workshops pointed out that AI generally operates on a large pool of data, whereas journalism and fact-checking generally operate within a specific context. This gave reason to believe that AI may not be especially helpful within the media sector. Openness to AI was contingent on it not impacting the accuracy of the information presented.
Participants feared that AI systems may not be able to account for the current state of the world when analysing and presenting data, or for how this may affect the context of information. This tied into the concern that AI may not be capable of considering the role that morality plays in the dissemination of media. While it should not undermine the commitment to objectivity, participants noted that morality does and should inform the presentation of information, something a value-less system may not appreciate. Nevertheless, it was acknowledged that the objectivity-focused nature of AI could help compensate for the frequent subjectivity of human actors.
Attitudes towards AI
The general consensus was that AI should function as support for journalists and fact-checkers, and should not independently generate content. As the technology continues to improve, AI could save time, evaluate the reliability of sources, aggregate information from a wider reach, and analyse the similarities and differences within samples of data. As things stand, however, there is still a fear that AI tools are not advanced enough to be employed frequently and risk undermining the quality of journalism and fact-checking.
For more information about the THEMIS 5.0 use case in this sector please subscribe for news updates on our home page.