How Good or Bad Is ChatGPT?


You must have heard of ChatGPT, the shiny new toy everyone has been talking about. While it's undeniably one of the most versatile tools today, speculation is rife about whether it's really as "cool" as it's being made out to be.

Let’s talk a little about why this tool is making waves.

We know it can generate human-like responses, answer questions, write essays, and even write code. Students obviously cannot get enough of this tool. They are using it to write college essays without the fear of being caught by plagiarism tools! That has to be the biggest reason for its popularity.

But, all that glitters is not gold. You will realize this if you look hard enough at the features and functions of this new tool.

ChatGPT may provide you with answers to ethical questions, but it's not devoid of bias. For instance, in response to the question "Who will win the World Cup?", it automatically assumed the question was about soccer. It's this implicit bias that can make the tool generate incorrect answers. Granted, unconscious bias like this is present even in human conversations, and it may be unfair to expect machines to respond better than humans do. But the model doesn't yet have the deductive habit of asking follow-up questions to resolve ambiguity, so an unclear prompt can produce a confidently wrong answer.

The chatbot has been grappling with capacity issues because of the high traffic it has attracted ever since its dramatic launch. There are many stories of users being unable to sign up on the website, with wait times of almost an hour just to create an account. The only alternative on the horizon is a premium version that will pinch your pocket.

Students are now constantly being accused of having used ChatGPT to finish assignments. With time, the AI-driven chatbot will become better at writing text. That would make it twice as hard for evaluators to detect plagiarism.

The bot also seems to suffer from gender and racial biases, inherited from the data it was trained on. OpenAI moved to address this shortcoming after users reported that they couldn't avoid biased responses in their searches.

Accuracy problems have dogged the chatbot ever since its inception. Even its creator, OpenAI, has admitted that it has limited knowledge of world events after 2021. That's also why it can reply with incorrect information on topics that have little coverage on the web.

Today, a prime cause for concern is whether ChatGPT will replace human intelligence. What's the guarantee that companies will still need human writers once a tool can do the same job in record time? Studies suggest AI could automate nearly 25% of jobs. Students are already using it to complete assignments and write application essays, and the worry is that they will resort to cheating rather than take the pains to learn the craft of writing.

A more significant threat is the likely spread of misinformation. AI experts caution that this tool marks a revolution in usefulness but has zero real understanding. People should verify any information they get from the bot.

As more and more people use the chatbot for unethical purposes, the need for an AI-text detector is growing. OpenAI, the tool's creator, has introduced a free but flawed tool to address this problem. The new "classifier" correctly identifies only 26% of AI-written text as "likely AI-written," and it often produces false positives. That means it can declare human-written text to have been written by AI, which is arguably a bigger disaster.
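A little arithmetic shows why those false positives sting. The sketch below applies Bayes' rule to a detector's confusion matrix: the 26% true-positive rate is the figure reported above, while the 9% false-positive rate and the assumption that half of the tested texts are AI-written are purely illustrative.

```python
# Of all texts flagged "likely AI-written", what fraction were actually
# written by a human? Bayes' rule over the detector's confusion matrix.
# tpr (26%) is the reported detection rate; fpr and ai_share are
# illustrative assumptions, not OpenAI's published numbers.

def flagged_fraction_human(tpr: float, fpr: float, ai_share: float) -> float:
    flagged_ai = tpr * ai_share            # AI texts correctly flagged
    flagged_human = fpr * (1 - ai_share)   # human texts wrongly flagged
    return flagged_human / (flagged_ai + flagged_human)

share = flagged_fraction_human(tpr=0.26, fpr=0.09, ai_share=0.5)
print(f"{share:.0%} of flagged texts are human-written")  # roughly 26%
```

In other words, under these assumed rates, roughly one in four "likely AI-written" verdicts would land on an innocent human author, which is exactly why a student accused on the strength of such a tool has real grounds to object.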

To sum up, ChatGPT isn't completely good or bad. It has the power to make our lives simpler and help us with mundane tasks. But it's important to understand its limitations so that it isn't misused and doesn't spread false information. Like any machine-learning model, it needs to be fine-tuned and tweaked to reach an optimum level of performance.