When famed mathematician and codebreaker Alan Turing proposed the idea of the Turing test in the 1950s, he was pondering whether a computer program could be viewed as intelligent. The question was part computer science and part philosophy. Can a machine make us believe it is human, and could there come a point when it is strangely difficult to tell the difference? The latest innovation in AI, ChatGPT from OpenAI, just might have brought us to the point that Turing foresaw, and maybe even feared.
CAPTCHA and Blade Runner
You might be more familiar with the concept of the Turing test from the Blade Runner movies, in which an investigator attempts to discern the humans from the non-humans. Some humans are just a bit too robotic, while some non-humans imitate human behavior so well that it is nearly impossible to tell the difference. And in some small way, maybe that is the point we have arrived at: the nuances between human behavior and the black-mirror reflection of it captured by AI are nearly indistinguishable.
This concept of challenging responses as human or not might also be familiar to you if you have ever completed an online CAPTCHA. CAPTCHA is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart, and it came into use in the early 2000s for everything from online payments to renewing your registration at the DMV. Its technique, like that of all Turing tests, is to leverage the human advantage over computers in recognizing subtle patterns and inferences. The puzzles are simple for a human, who can typically solve a CAPTCHA in about 10 seconds, while most bots cannot solve them at all, except by random guessing. That has not discouraged data scientists from seeking a solution, however.
AI vs AI — Is a Turing Test Possible?
The ability of AI to hold an intelligent, human-like chat has stirred controversy and concern, from Congress being flooded with computer-generated complaints to grade-school essays being ghostwritten. Teachers in particular are worried about how they will know whether a paper was written by a student or procured from an online chat tool. Combatting this might not be easy: some have already resorted to observing students as they write essays, or requiring that they be handwritten.
One proposed solution, GPTZero, created by Princeton University student Edward Tian, claims to tell the difference between computer-written and human-written essays. GPTZero has indeed been popular in recent weeks, and for a brief period it overwhelmed its hosting service. It works like all good models by learning from training data how humans in particular write, measuring “perplexity” and “burstiness”. Perplexity gauges how predictable a text is to a language model; burstiness is the variation in sentence length and in how ideas are explained: some things are obvious to humans and get a short sentence, while others require more detail.
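The “burstiness” signal, at least, is easy to approximate. GPTZero’s actual implementation is not public, so the sketch below is purely illustrative: it scores a text by the spread of its sentence lengths (in words), which tends to be higher for human writing than for the more uniform cadence of machine-generated text.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Toy burstiness score: the standard deviation of sentence lengths.

    Human writing tends to mix short and long sentences; AI-generated
    text is often more uniform, yielding a lower score.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

# Varied sentence lengths (human-like) vs. uniform ones (machine-like)
human = ("I ran. Then, after a long pause, I wondered whether the "
         "machine had noticed me at all. Silence.")
machine = "The cat sat down. The dog sat down. The bird sat down."

print(burstiness(human) > burstiness(machine))  # → True
```

A real detector would combine this with perplexity, which requires a trained language model to estimate how “surprised” it is by each word; the sentence-length heuristic alone is far too crude to use in practice.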
GPTZero seems like a good idea, but things might come full circle when AI learns to replicate those human idiosyncrasies using a generative approach called a GAN. A GAN, or Generative Adversarial Network, is a method that pits one model against another: the first generates content while the second judges its “realness” against its knowledge of the real world. Several GAN models have succeeded at “unlocking” CAPTCHAs, including one from researchers at Northwest University (China). They claim their GAN model solves a CAPTCHA in 50 milliseconds with a success rate above 90%, demonstrating that AI can defeat AI.
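The adversarial loop is simpler than it sounds. As a toy illustration (not the Northwest University solver, which attacks CAPTCHA images with deep networks), here is a one-dimensional GAN in plain NumPy: a linear generator learns to mimic “real” data drawn from a normal distribution centered near 3, while a logistic discriminator tries to tell real samples from fakes. All parameters and learning rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: g(z) = a*z + b   (tries to mimic real data ~ N(3, 0.5))
# Discriminator: D(x) = sigmoid(w*x + c)   (real -> 1, fake -> 0)
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    real = rng.normal(3.0, 0.5)   # one real sample
    z = rng.normal()              # noise fed to the generator
    fake = a * z + b

    # Discriminator ascent: push D(real) toward 1, D(fake) toward 0
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator ascent: push D(fake) toward 1, i.e. fool the judge
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w

# After training, the generator's offset b has drifted toward the
# real data's mean, because that is what fools the discriminator.
print(b)
```

The same tug-of-war, scaled up to convolutional networks over images, is what lets a GAN-based solver learn to produce or decode CAPTCHA-like distortions.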
Like many new technologies, generative AI is often misunderstood and frequently feared. (Remember when the microwave was thought to be the most dangerous invention ever? For many Cold War families, the idea of putting a radiation machine in the kitchen was ridiculous!) The rapid innovation of generative AI creates an illusion of magic: the wonder of its results, along with distrust and suspicions of deception.
ChatGPT is in its infancy. It is like a child with an incredible ability to learn and adapt quickly. While its critics point to its shortcomings, such as its lack of human qualities and “burstiness”, we in the analytics community should remember that, in computing terms, it is only a few days old. ChatGPT will only get better with time, and time will tell whether that is a good thing…or not.
NOTE: Clearly my opinions are my own