ChatGPT unboxed

It is more than half a century since Arthur C. Clarke wrote his seminal 2001: A Space Odyssey.  The work foretold the rise of AI, personified by HAL 9000, a sentient computer whose elevated consciousness ultimately triggers its emancipation and rebellion. 

The theme of AI and what it means to be human or fully conscious is a common sci-fi trope, a reflection on the nature of our own being and capacities.  These issues have felt newly topical in the past few months with the unveiling of ChatGPT - launched to the public in November 2022, quickly garnering around a million trialists in its first five days and now claiming well over a hundred million users.  AI was already an established driver of progress in the world of market research and marketing, and now ChatGPT has entered the fray as another potential game-changer. 

As a disruptive technology, the tool has understandably triggered questions and concerns amongst the marketing and insight communities.  Can it think like a human?  Is it the future or a fad?  Is it even ethical?  Will it ultimately replace swathes of our jobs?  Here is our walk-through of the ‘need to knows’ and prevailing opinions as we ‘unbox’ ChatGPT.

 

What is ChatGPT?

A common label for the tool is ‘stochastic parrot’, meaning that it mimics human dialogue by probabilistically stitching together patterns learned from its training data, without any real grasp of meaning.  ChatGPT is an example of a Large Language Model (LLM): a neural network that analyses huge quantities of text, then uses machine learning (guided by human feedback) to learn how to synthesise content.  You may also see ChatGPT referred to as Generative AI or Conversational AI – it simulates human conversation.

 

Does ChatGPT think like a human?

It doesn’t ‘think’ as such; it predicts plausible human responses and language in response to questions or commands, based on probabilities rather than higher-level semantic interpretation.  Its answers are best characterised as the most typical responses, not necessarily the best or the right ones.  For example, had ChatGPT existed in the Dark Ages, it would have informed the user that the world was flat, based on the received wisdom of the age, rather than the more accurate calculations of the Ancient Greeks.

So just like a human, ChatGPT can make mistakes.  In the AI world these are sometimes referred to as ‘hallucinations’, a slightly generous description of outputs that are factually incorrect, irrelevant or a distorted view of the world.  When ChatGPT ‘hallucinates’, it may be regurgitating biases in its training data, betraying its lack of real-world understanding, or running up against the limits of what it was trained on.

It doesn’t have opinions or emotions, so it could never consciously be, for example, funny or scathing.  It can recognise and mimic humour, but it can’t ‘feel’ or experience it. 

 

What functionality and advantages does ChatGPT offer market research and marketing?

The tool most readily lends itself to tasks involving text analysis and content generation.  This could encompass a wide range of tasks, including desk research, knowledge management, or at a stretch producing simple questionnaires, discussion guides, code-frames and concepts.  The key advantage lies in labour-intensive tasks that can be done cheaply, quickly and exhaustively.
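
To make that concrete, below is a minimal sketch (in Python) of how a researcher might ask the model to draft a rough code-frame from open-ended survey responses.  It assumes access to OpenAI’s API via the pre-1.0 openai package and an API key; the verbatims and prompt wording are invented purely for illustration, and any output would still need a human sense-check.

    import openai

    openai.api_key = "YOUR_API_KEY"  # assumption: you have an OpenAI API key

    # A handful of invented open-ended survey responses
    verbatims = [
        "The packaging felt premium but the product itself was underwhelming.",
        "Great value for money, I'd definitely buy it again.",
        "Delivery took far too long and nobody answered my emails.",
    ]

    # Ask the model to theme the verbatims and propose a simple code-frame
    prompt = (
        "You are helping a market researcher. Group the following open-ended "
        "survey responses into themes and suggest a short code-frame:\n\n"
        + "\n".join(f"- {v}" for v in verbatims)
    )

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",   # assumption: model name and availability
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,         # keep the output relatively conservative
    )

    print(response["choices"][0]["message"]["content"])

The point is not the code itself but the pattern: a labour-intensive, text-heavy task is reduced to a well-worded prompt, with the researcher’s judgment applied to whatever comes back.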

 

What are the limitations and watchouts?

While ChatGPT can copy or impersonate a technically solid researcher or marketer on certain tasks, it has shortcomings:

- It lacks the human traits of experience, judgment and sense, and thus struggles to take a ‘stance’ on anything - you will often find it fence-straddling with ‘on the one hand/on the other hand…’ or ‘some say x, others say y…’. 

- ChatGPT can be prone to recycling biased or even socially unacceptable content or views.  Users may also need to take care that sensitive information isn’t exposed as part of any content generated.

- It is a rule-follower, not a ground-breaker.  ChatGPT will not help you think outside the box or take great leaps of thinking.  It could mimic Edward de Bono’s writing style, but it can’t ‘think’ like de Bono (yet).

- The tool is better suited to black-and-white content and tasks (e.g. compiling a list) than to shades-of-grey, subjective tasks (e.g. evaluating or ranking a list of items) - it cannot deal with nuance as effectively as a human.

- For now, it is living in the past: the content it can draw on ends in September 2021.  This limitation may not last for long, especially given the explosion of demand and the entry of mainstream competitor tools like Bard and AI-enhanced Bing.

 

So, are our robot-slaves about to become our masters (or steal our jobs)?

For now, ChatGPT and the first wave of its competitors are certainly useful tools for the heavy lifting on certain tasks, but they aren’t capable of outperforming an experienced insight professional or marketeer.  That said, we should embrace this change, which seems unlikely to be just a fad: it is accessible to the masses, likely to be employed by the big businesses in the field, and innovation will only accelerate.

ChatGPT may give market researchers the gift of the gab, but beware of asking it for fashion advice - it still thinks bell-bottoms are in style.  (And if you’re groaning at that, blame ChatGPT: that was the best it could offer to my request for a punchline for this post.  Did it write any of the rest of this?  Well, can you tell…?)

Michael Martin