ChatGPT – will it help content professionals?
24 Mar 2023
4 mins
Now that the initial dust on ChatGPT has settled, I wanted to share my reflections on this technology in the context of Tridion.
How does ChatGPT work?
ChatGPT stands for "Chat Generative Pre-Trained Transformer". It is a language model that can generate natural language when presented with a prompt and is pre-trained on a massive amount of data scraped from the Internet.
It is trained using an approach that combines supervised and reinforcement learning techniques. After being fed a substantial amount of text, the model can answer questions, but its accuracy alone is not sufficient. The model is therefore further improved by human trainers, who provide feedback on its responses.
ChatGPT can find and repackage information at a speed no human could match, and it always sounds as if a real human being is answering you. It remembers earlier interactions and questions within a conversation and refines its responses accordingly.
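To make that conversational "memory" concrete, here is a minimal Python sketch under one common assumption: the model itself is stateless, and the application resends the full conversation history with every new prompt. The function and message structure are illustrative, not OpenAI's actual implementation.

```python
# Minimal sketch of how a chat model's "memory" can work: the model itself
# is stateless, so the application resends the whole conversation history
# with every new prompt. Illustrative simplification, not OpenAI's code.

def build_prompt(history, new_question):
    """Append the new question and return the full context sent to the model."""
    history.append({"role": "user", "content": new_question})
    return history

history = [{"role": "system", "content": "You are a helpful assistant."}]

build_prompt(history, "What does GPT stand for?")
# In a real application the model's reply would be appended as well:
history.append({"role": "assistant",
                "content": "Generative Pre-trained Transformer."})
build_prompt(history, "And what was my previous question?")

# The model can "remember" only because earlier turns are part of the prompt.
```

Because the entire history travels with each request, earlier questions remain visible to the model, which is what lets it refine its responses over a conversation.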
Will AI replace subject matter experts?
AI is real, and it raises tough questions about the future role of humans in an increasingly automated world. There is nuance here, however. AI excels at pattern recognition: it detects patterns and applies more of the same. Humans, by contrast, can add nuance to any situation and excel at applying knowledge in new and creative ways.
ChatGPT will therefore not easily replace subject matter experts. It can certainly generate software code, but it won't take the place of software developers, who will continue to be needed to apply their expertise in novel ways to solve specific problems. It can certainly generate a narrative, but it will not replace the technical writer describing how to operate a complex machine or piece of equipment.
Will it help content professionals?
Now and in the immediate future, Generative AI will definitely be able to assist content professionals and make their lives easier. Since Generative AI relies heavily on the quality of its training data, providing high-quality and diverse training data can improve the accuracy of the tools built on the language model.
To give a few examples, Generative AI will:
- Help writers to improve the quality of the content
- Help writers to overcome writer's block and inspire them by generating some boilerplate narrative, in which certain placeholders might already be filled in
- Help writers to find content suited for reuse and eventually automate that completely
- Help writers to write in the same style throughout the organization, so that everything seems to be written by the same person
- Help writers to structure and tag content, which will remove some of the adoption constraints of structured content
- Convert unstructured content into structured content
- Generate the complete narrative in purely data-driven documents, like financial reports
- Supply recommendations on how to improve content based on user behavior and patterns in the different channels
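The boilerplate and data-driven items in the list above can be sketched in a few lines of Python. The template and field names below are invented for illustration; a real system would pull both from a CMS or a dataset, and a language model would typically smooth the resulting draft.

```python
# Illustrative sketch of data-driven boilerplate generation, as in a
# financial report: the narrative template and field names are invented
# for this example; a real pipeline would draw them from a CMS or dataset.

TEMPLATE = ("In {quarter}, revenue reached {revenue} EUR, "
            "a change of {growth} compared to the previous quarter.")

def draft_paragraph(data):
    """Fill the boilerplate template with values from a data record."""
    return TEMPLATE.format(**data)

record = {"quarter": "Q4 2022", "revenue": "1.2M", "growth": "+8%"}
print(draft_paragraph(record))
# prints: In Q4 2022, revenue reached 1.2M EUR, a change of +8% compared to the previous quarter.
```

The writer starts from a pre-filled draft rather than a blank page, which is exactly the kind of assistance the list describes.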
Generative AI can also help people find that one “right” answer if the model is trained on the “right” content. I do not believe that simply throwing substantial amounts of unstructured and uncurated content at AI is enough to produce the “right” answer. The quality and correctness of the responses will only increase if you can train ChatGPT on structured and curated content, ideally content that is semantically enriched in the way we provide with Tridion. Think about the potential this has! What if ChatGPT were trained on the millions of articles published by the 30,000 medical journals in the world? The only downside today is that optimizing a model as large as ChatGPT requires a large amount of data and computational resources.
There are also concerns…
The spread of "fake" information is a genuine concern, especially because ChatGPT makes wrong answers sound convincingly right. As a result, people may start to believe that every response ChatGPT provides is accurate.
For example, Stack Overflow, the coding website that has long served as the internet’s go-to Q&A forum for programming advice, temporarily banned OpenAI’s new chatbot. The forum’s moderators say the site saw an influx of ChatGPT-generated responses that sound convincing but are often incorrect.
I decided to ask ChatGPT itself about potential concerns and got the following answer: "One concern is the potential for these models to perpetuate and amplify biases that are present in the data they were trained on. Additionally, the high quality of text generation capabilities of these models could be used to create convincing misinformation or fake news. Another concern is the risk of over-reliance on these models to make decisions or take actions without proper human oversight. Finally, the computational power required to train and run these models is significant, which could have a significant environmental impact if not managed properly."
Of course, with the recently released GPT-4 model, the accuracy will continue to improve, and it will push the boundaries of what is possible with Generative AI.
To summarize, the potential is enormous. For now, use the technology to your advantage, but always check the accuracy of responses. This is not that different from when you get advice from a real human.
One thing is very clear. The popularity of ChatGPT and its accessibility to the general public will cause an explosion of investment and accelerated development in AI in general. As a result, GPT will be just one of the tools in the AI box for the next generation of intelligent authoring.
If you would like to learn more about how Tridion can help you, click here.