
ChatGPT, or when the AI starts telling significant lies

Article author:

François Genette

News addict, geek-culture fan, digital-tech aficionado and hardcore gamer, François Genette is passionate about everything digital. A journalist for nearly 15 years in major national and local media, he now uses his pen to share discoveries from the worlds he loves.


It is praised to the skies in every domain, it impresses even the most seasoned experts of the digital world, and it is hailed as an incredible evolution that will change the way we work forever. And yet AI is not flawless. On the contrary: in certain cases it has a regrettable tendency to err, and that can have far-reaching consequences.

The most recent example comes from the United States, and more specifically from Texas A&M University, where a professor blocked 15 of his students from receiving their degrees, accusing them of having used ChatGPT to produce their written work.

To detect the supposed cheating, the professor had used … this very same AI, following a very simple process: for each submitted assignment, he pasted the text into ChatGPT and asked whether it was the source. For 15 of them, the answer was yes.

ChatGPT, the AI with a slight case of mythomania…

The problem is that the OpenAI chatbot is not only incapable of determining whether a text was written by a human or generated by an algorithm; it also has a slight tendency toward mythomania.

A concrete example was provided by our counterparts at PCMag, who submitted an article written by one of the site's editors to ChatGPT. The AI claimed authorship of the piece, even though it did not yet exist when the article was written.

In the end, several of the students were able to clear up their situation and were awarded their coveted degrees. For one of them, however, things ended less well: he confessed to having used ChatGPT to produce his end-of-year assignment.

… and dragged into court

Another example of fabulation with far-reaching consequences: this recent case concerns the American radio host Mark Walters, who has decided to sue OpenAI for libel. His complaint is scathing, accusing ChatGPT of nothing less than defaming him.

It all began when another journalist, Fred Riehl, asked ChatGPT to summarize a legal case, using a PDF containing all of the information related to it.

In the summary it generated, ChatGPT named Mark Walters, claiming he was involved in the case and had embezzled more than five million dollars from the funds of a non-profit association that supports firearms owners. Crucially, the AI identified Mark Walters as the association's treasurer.

And therein lies the whole problem: Mark Walters has never held that post, has never worked for the association, and the complaint filed in the case makes no mention of him. In short, he has nothing whatsoever to do with the affair.

Following this incident, Walters took the matter to court. His complaint denounces ChatGPT's allegations as 'false and malicious,' and the journalist is claiming damages. A world first that OpenAI's executives could well have done without.


