AI is, in many ways, a revolutionary event, similar to what Gutenberg’s printing press was in the 15th century, or radio, TV, and even the internet in the 20th. It is a technology that will permanently reshape society. For many of us, AI means everything we learned from dystopian movies (2001: A Space Odyssey, The Terminator, The Matrix, Blade Runner, …). But can AI be used for our own sake? Can we ride the wave of this technology and use AI in our jobs?
We are certainly witnessing the first baby steps through ChatGPT, which, as mentioned previously, is a conversational AI language model developed by OpenAI. As media professionals, we should accept that the change is coming and get ourselves ready. Here’s how.
The first step is to learn how to ask questions: how to use the right words and how to structure them. ChatGPT gives different answers to the same question when it is asked in different ways. The only way to learn how to do it is to try, test, and experiment.
ChatGPT is good at simplifying complex topics, academic works, and long opinion pieces for a general audience. This does not come from being ‘smart’: it simply applies mathematical models to words, extracts the most important ones, and puts them in a humanly comprehensible order.
It is useful for explaining a topic we need to research before writing an article, or for preparing questions for an interview. You can, for example, list the questions you already have, and it will suggest more.
You can ask it for a summary or an abstract, a meta description, or an extract for posting on social media, and it is very good at offering different versions of the same text that you can use for headlines. We all know how time-consuming testing headlines on social media can be; ChatGPT can help. It is not good at counting, though, as funny as that sounds. But you will notice that soon enough.
For journalists who write in English, ChatGPT is a very good sub-editor for checking articles before they are sent to the editor. It is still not good for non-English articles, but I am convinced that this, too, will be solved soon.
The second step, keeping in mind that ChatGPT is still in its infancy, is to check everything it gives as a reply. It can easily be a fabrication or plagiarism.
The most serious problem with ChatGPT and other large AI language models is that they cannot be trusted. These models are built to make predictions and give the most likely answer. If ChatGPT does not know the answer, it will not say “I don’t know”. It will give the most likely answer, or simply make one up. It sometimes generates an answer that is purely wrong, and this especially applies to regions with less data online. When you ask it for quotes or references for an article it generated, it can make everything up. This is a serious problem, known as hallucination, and even though the creators of large language models are working hard on it, they still cannot solve the issue. OpenAI CEO Sam Altman admitted that despite the anticipation, GPT-4 "is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it."
Another problem: ChatGPT is trained on massive amounts of data, but it does not know what is true, and it has no grasp of morals, ethics, or even different points of view. It cannot recognize which sources of information can be trusted and which cannot. It replicates the biases it is trained on. Stephen L. Carter, a Bloomberg Opinion columnist, explains it well: “Any Large Language Model is in a sense the child of the texts on which it is trained. If the bot learns to lie, it’s because it has come to understand from those texts that human beings often use lies to get their way. The sins of the bots are coming to resemble the sins of their creators.”
As with every new technology, the beginnings are beginnings for everybody: from inventors, founders, and engineers, to ordinary people and the media, and of course institutions and governments. We do not have the luxury of fearing yet another dystopian scenario, and we do not get to complain or deny. It is up to us to stay informed and to learn to use it as best we can. And the best way to learn is to start using it.