ChatGPT: Only good for fairy tales or your new content guru?

Jason Gerrard, Senior Director of International Systems Engineering at Commvault, discusses the benefits and downsides of ChatGPT and why organisations should proceed with caution when using it.

We have just passed the first anniversary of ChatGPT, and it is arguably now a household name. Most people are familiar with its uses: generating content, writing software code, and answering almost any question you ask.

It produces responses at lightning speed, generating text from patterns learned during training rather than looking answers up in a database. Depending on how you frame your request for content, the output will come back in the form of an article, story, CV, limerick, fairy tale, or whatever you choose – the list is (almost) endless.

However, beneath its remarkable capabilities are major limitations that must be considered before getting carried away with its benefits. One of the biggest problems is its tendency to produce responses that are factually incorrect or misleading.

Although ChatGPT has been trained on vast amounts of data, it lacks the ability to verify the accuracy of information or to grasp the subtleties that give content its context. Nor can it incorporate real-time events to provide the most up-to-date information.

Fake news and fairy tales

The reality is that ChatGPT does not look facts up in a verified source; it predicts the most plausible-sounding sequence of words from patterns in its training data – effectively sophisticated guesswork. What’s more, its answers are written in a polished, articulate way, which gives them an innate sense of credibility.

Therefore, if a user doesn’t verify that the answers are correct, the tool could quickly become a highly effective way of distributing a false story with about as much integrity as a fairy tale.

With fake news estimated to make up 62% of all information on the internet, the likelihood that ChatGPT is frequently supplying incorrect information is high. If 100 people ask the same question and receive the same wrong answer, there are potentially another 100 instances of false information published online – and because it has been repeated so many times, the majority are likely to take it as fact.

An added danger is that ChatGPT can only learn from the data it has been fed, which can include the biases of the original material, whether racial, gender-based, or any other form of discrimination. Its potential to perpetuate prejudice through fake news is therefore considerable.


Transparency is needed when distributing AI-generated content, but it is difficult to see how this could be regulated and enforced effectively.

ChatGPT as the content guru

Despite the downsides, ChatGPT has plenty of positives for businesses when used with care. The important caveat is that the content it generates must always be reviewed and verified by humans, for both accuracy and bias.

If there’s any doubt about the legitimacy of the material, it should never be used or published without further scrutiny. Individuals and organisations must ensure that information generated by AI tools is managed responsibly and doesn’t cause damage, either to their own business or to others.

With caution still in mind, ChatGPT is particularly useful for companies with limited resources to create marketing and social content. From blog posts to website copy, it can save time and effort by drafting compelling content across various topics and industries to help draw in new customers. What might have taken hours to compose, or required the skills of an external agency, can be written by the software in a consistent and engaging style in just a few minutes.

Another useful application could be the integration of ChatGPT into chatbots or virtual assistants to provide real-time responses to frequently asked questions, troubleshoot common problems, and recommend products. With its ability to understand and respond to queries, businesses could alleviate some of the burden on human customer service teams and respond to simple queries around the clock.
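As a rough illustration, a customer-service integration of this kind might look like the sketch below. It assumes the official OpenAI Python SDK and an API key set in the environment; the model name, system prompt, and example question are illustrative assumptions rather than a recommended configuration.

# A minimal sketch of a support chatbot backed by ChatGPT.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a customer support assistant. Answer frequently asked "
    "questions briefly and accurately. If you are unsure, say so and "
    "offer to escalate to a human agent rather than guessing."
)

def answer_query(question: str) -> str:
    """Send one customer question to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep support answers conservative
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_query("How do I reset my password?"))

Note that the system prompt explicitly instructs the model to escalate rather than guess – precisely because, as discussed above, the software will otherwise produce confident-sounding answers whether or not they are correct. Human agents should remain available for anything beyond simple queries.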

Moving on to potentially riskier ground, ChatGPT might be a useful aid in preliminary market research for product development. By leveraging its vast knowledge set, this software could uncover pain points and identify trends to help create new offerings.

But again, this information would need to be verified by human experts to avoid investing in new ideas that have been conceived based on inaccurate research. At worst, this could result in significant financial loss and long-term reputational damage.

The moral of the story

While AI tools can be powerful, time-saving resources, they should always be complemented by human understanding and expertise to deliver successful outcomes. It’s better to think of ChatGPT as an AI assistant that can handle certain tasks, provide inspiration for content, and serve up a steady supply of interesting concepts.

Relying on it to replace human intellect is not an option, or at least not yet. Admittedly, ChatGPT is still in its infancy and, with future enhancements, will no doubt become a much more reliable source. Until then, the moral is that organisations must be aware of its limitations and ensure that internal procedures are in place to guide its use.
