What ChatGPT Tells Us About Misinformation: The Black Mirror Effect

Since the introduction of ChatGPT (built on GPT-3.5) last November and its successor, GPT-4, last month, there has been frenzied anticipation about how this form of AI will transform all of us in nearly unimaginable ways. While ChatGPT has been impressive in its capabilities, it has also demonstrated the limits of its knowledge by providing incorrect or misleading information. For this, we only have ourselves to blame.

The technology behind ChatGPT is built on the premise that by studying the vast amount of information on the internet, it can generate human-like written responses to questions. It is a brilliant approach, yet it also exposes the flaws, and perhaps the pervasiveness, of misinformation.

Mirror, Mirror

Misinformation comes in many forms. Some of it is genuine misunderstanding or speculation about things we do not yet know, like the limits of space travel or how to cure certain diseases. Other misinformation is purposely deceptive, created to mislead, divide, or corrupt established ideas (consider flat-earth theory or divisive online personalities).


ChatGPT shares information, accurate or not, in coherent and authoritative responses. It can tell you something completely wrong, written in a way that may make you doubt your own knowledge. Which is a bit troubling. It is unaware of the accuracy of its own information because, again, it relies on what it has "learned" from humans. Elegantly, ChatGPT is an engineered sculpture of human knowledge, with all of its perfect imperfections.

From foreign states disrupting their adversaries to late-night TV promotions, deceptive data and information are abundant. Yet what really gives misinformation "legs" to spread around the globe is us. Staring into the black mirror of our phones, clicking shares, likes, and up-votes, we (unknowingly?) spread misinformation. What others see and believe is the same as what ChatGPT sees…and believes. For it, the mirror is the truth.

Misinformation Oversight

Who. Will. Be. Responsible. (Enough periods to make a point?)  It raises, of course, an AI ethics question. Will there be someone (something?) that regulates and monitors AI and its various chatbot minions? Who will do this, and how will it be done? As we have seen, there are many political interests at play when it comes to misinformation. And wasn't the great promise of the all-knowing internet that, with more information freely or nearly freely available to everyone, and the answer to almost anything just an instant away, we would all become better-informed data citizens? That we would be smarter, and less often led astray from the truth? How have we not gotten as far as we thought possible decades ago?

Data Duped Defense

AI oversight for ChatGPT and other applications is a good approach, but it only goes so far. The best approach is for each of us to develop our own version of a "data defense superpower," learning to be data skeptics and, when appropriate, data believers. We need to refine our abilities to recognize the improbable, the deceptive, and the too-good-to-be-true claims. Some misinformation leans on data and attention-grabbing headlines, while other misinformation avoids data and facts altogether. The latter should prompt doubt and lead us to question what we are seeing and why.

Can AI chatbots learn to avoid, or at least detect, misinformation? Can they discern fact from opinion? The answer is yes, but it requires a deliberate effort by their engineers and data scientists.

ChatGPT is more than a clever parlor game; it is a notable advancement in AI. Bill Gates has called the technology as important as the invention of the PC itself. While conversational AI can be transformative and arguably good, we need to remain skeptical of misinformation. Past advancements that brought more information to people's fingertips, such as Google searches and online resources like Wikipedia, did not reduce the amount of misinformation. We are all vulnerable to misinformation and need at times to practice our data defenses to avoid being data duped.

Note: My opinions are clearly my own.

Derek W. Gibson coauthored Data Duped: How to Avoid Being Hoodwinked by Misinformation along with Jeffrey D. Camm. For more, read our blog at DataDuped.org.
