Writing Under the Sword of ChatGPT

About two months ago, social media, including LinkedIn, was suddenly full of posts about the promise and potential of ChatGPT, the generative AI platform from OpenAI. Much of the excitement stemmed from reports that Microsoft was in talks to invest billions of dollars in OpenAI, the tech company behind ChatGPT, on top of the investments it has reportedly been making in the firm for the past few years. Even if nobody fully understands AI or ChatGPT yet, everyone seems clear that with Microsoft's backing we can expect the technology to become ubiquitous.

There is excitement and concern. The world has mixed feelings about a technology that promises nothing short of a miracle but also has the potential to axe jobs. It's another matter that we have had AI for a while now; algorithms are perhaps a lower form of it. But this new creature called ChatGPT is the first of its generative kind, the kind that can create content. And because it has creative powers, it poses a clear and present danger. Lots of people seem to have visited the site already and tested it out; I haven't, telling myself it involves sign-ups and some complicated techno mumbo-jumbo. The real reason I haven't checked it out is that I am not that keen on knowing more about something that might come for my job someday.

I have been reading about ChatGPT and generative AI, though, to try and understand it better, and have also shared some articles from the Web with subscribers of my blog's newsletter, The Whistle. From what little I have read, generative AI too is based on deep or machine learning, except that in this case it can use all that learning to actually create content. However, it is early days, and I cannot yet claim to have an informed opinion about it.

I do have questions and concerns, though. Since ChatGPT can write, and since I have been a writer for several decades in the advertising and brand communications industry in India, I wonder if it will someday replace me and others like me in the profession. As it is, advertising agencies are having a hard time keeping competition at bay in the form of management consultants, digital agencies, and even client organisations, many of whom think they have the expertise in communications and marketing, especially now that so much of it is digital. I have been writing to take away some of that digital gloss, so that more experienced people like me in the industry can see digital marketing for what it is, and for what its true potential ought to be.

Back to this creature called ChatGPT, here are some of my queries.

Does generative AI understand thought processes?

Writing necessarily involves thinking, and each individual has his or her own way of thinking. We all have our own thought processes, our own way of processing information in our minds in order to come up with ideas. For example, in an advertising agency like Ogilvy, in the days before there was account planning, the client organisation would send account management an advertising brief, based on which the latter would write a creative brief for the creative team on the business.

How I or any of my colleagues would respond to that very same brief would differ so much that it would evoke different sets of ideas from each of us. In fact, quite often in advertising the same person is expected to generate several alternative ideas from a single creative brief. This is done to ensure that we have explored the entire gamut of ideas possible from the briefing given to us. In the process, we stretch our thinking, push the boundaries of what's possible, imagine everything from the most obvious to the most bizarre, and distill it all down in a way that fits the brief and can be expressed pithily and memorably as an idea. Most importantly, if there is a different creative brief tomorrow, we would start afresh and approach it completely differently from the previous one.

Does ChatGPT respond to briefs (or prompts, in its case) with the same thought process it has been taught to apply, irrespective of what the prompt is? If it is generative AI and if it can write, surely it can think. But it doesn't have a mind of its own with which to tell the qualitative difference between one prompt and another, and my hunch is that it would respond to two different prompts using the same thought process on auto-pilot. That's because it has been taught and trained to think in a certain way. We humans too are taught and trained to think in a certain way, and we too are conditioned by our cumulative learning and experience. The difference is that we can discriminate between one prompt and another, as well as between one response to a prompt and another, because we apply different responses to different stimuli.

Writing calls for thinking and imagination; Image: Pixabay

Does ChatGPT have anything called imagination?

To write, one has to think. And a large part of that thinking, especially in advertising and brand communications is using one’s power of imagination. A writer is required to dream up ideas and conceive of new possibilities. To that extent writing isn’t merely about knowing how to string sentences together in order to make sense. It is about imagining, about thinking up new stories, about new ways of communicating a benefit, about daydreaming the possible as well as the impossible. About constantly asking, “What if…?”

This brings us back to the subject of ideas. ChatGPT can write, in the sense that it can communicate a thought in grammatically correct language and in a precise manner. But can it actually think up an idea in expressing that thought? A connection on LinkedIn shared the headlines ChatGPT generated in response to his prompt for a detergent advertisement. The headlines were to the point, factually precise and grammatically correct. There was no way you could argue that they didn't answer the prompt. But they were so devoid of any idea or deeper thought behind what was being said that they read more like a client's or account manager's brief. In fact, I am certain that had anyone given me such a creative or advertising brief, I would have sent it back to be rethought and rewritten.

Here too, I think that because the capabilities of generative AI bots like ChatGPT are entirely based on what they have been taught and trained to do, they will be limited in their capacity for imaginative or inspirational thinking and writing. These qualities come from our lived experiences, from our everyday observations of life, from what we read and see around us.

Does ChatGPT have good analytical skills?

Since it is a computer-trained bot, ChatGPT must surely be good at computing and analysis. Writing requires imagination as well as cognitive and analytical skills. And for someone like me, who was never a planner in advertising but who as a writer had good strategic thinking skills, work involves a fair amount of strategy formulation and writing of that kind as well. In the 16 years since I stopped working in the industry, I have been honing my strategy skills even further and writing on brands for my blog, as my readers know.

Full-fledged strategy and brand planning requires working with the actual data and information that client organisations share with their agency teams. In this respect, it would be a great help if ChatGPT could actually work with numbers and data to:

  • Analyse and interpret research
  • Make correlations, identify patterns, cross-reference, etc.
  • Glean insights from the information
  • Write strategy documents and reports when required.

In other words, can generative AI combine logic and magic to provide a coherent, and distinctive brand strategy and campaign idea?

Does ChatGPT understand work processes?

In an organization, especially in advertising and brand communications, work processes are of two kinds:

  • Task-oriented work processes, which involve applying brand-building tools to arrive at strategy, writing a strategy or creative brief, translating briefs into ideas, executing ideas into media-specific units of communication, etc.
  • Organisation-oriented work processes, which are more about the systems and processes that enable the smooth functioning of a company. These would typically include inter-departmental interaction and correspondence, client-facing systems, as well as supplier-related work processes.

You will note the difference between the two: the first set of task-based processes requires knowledge, skill sets and experience, while the second comprises work processes meant to streamline the functioning of an organisation.

Using AI to write is like asking someone to do your thinking for you; Image: Pixabay

It just occurred to me that if generative AI really does take off, and companies start to build their own ChatGPT-like bots, what could they make them learn? Would such a bot then be a threat to employees, or a co-worker assisting them in their day-to-day tasks? In this context, I must mention reading about the first AI-based tool in the advertising industry. Developed by Publicis Groupe along with Microsoft, the AI app Marcel (named after the agency's founder and not the pantomime artist) was meant to bring all employees of Publicis Groupe across the world together on a single platform. When I first read about it in The Drum, it was envisioned as a great repository of information on every brand, strategy and campaign the agency had ever devised globally, and would therefore be a great resource for all employees. Further, it was meant to connect employees to each other across the world whenever they needed help or information specific to a particular business or industry. After years of investment and delays, Marcel was finally launched in 2021, though it is said to have fallen short of its promises.

Think of the biggest of the AI bots, Watson from IBM. The wizard is apparently helping IBM deliver customised solutions to its clients across a range of industries. What would happen to companies' employees and their work if a Watson or a ChatGPT were to be housed in every organisation? This could mean automation of work on a scale never seen before, and would perhaps require people to be trained to work with AI tools, if they are lucky enough to keep their jobs in the first place. I guess it all depends on what powers we endow AI bots and apps with.

Does ChatGPT have emotional sensibilities?

Writing requires us to be highly sensitive, perceptive and attuned to our surroundings as human beings. It requires a level of empathy and emotional sensibility that enables us to relate and respond to a human story, in a humane way. And finally, it needs us to be able to articulate and express our thoughts and feelings appropriate to each circumstance and situation. How does ChatGPT fare on all these counts?

At one level, I suppose it is down to the AI developers and what they machine-teach and train AI to do. It might only be a matter of time before generative AI develops all these capabilities to think, respond and act like a human being. But an equally important question is: why would we humans endow a machine with all of our own intelligence and skills, or even superior ones, when it is clearly not in our interest to do so? When, in many cases, it might even be dangerous to do so?

For example, I can marvel at Coke's new advert created by WPP, which has been developed using advanced AI technology and tools. But when that same technology is also used to create deepfake videos, putting words into people's mouths in their own voices, then it is certainly a slippery slope. Imagine watching the news on TV and seeing a politician, world leader or even a business leader saying something that he or she never actually said. What is great technology in a creative endeavour suddenly turns dangerous and menacing in another field, journalism. There is no stopping anyone from using AI's magical powers to create whatever content they wish in whatever manner possible.

It is here that AI developers and programmers have to rein in their ambition and their desire to achieve the unattainable, and apply more thought and better judgement to every great innovation they make, ensuring that we don't backslide as a human race. It is here too that regulators and lawmakers need to step in and create guardrails for AI, so that it remains a tool helpful to mankind and does not endanger people and society.

It is all finally down to our better judgement, then. Something that ChatGPT might not have right now, but that its developers and others of their tribe surely ought to have, and exercise in plenty. ChatGPT is coming for my job. There is no way to prevent it; I can only put it off to some later date in the future. Then again, as I mentioned in my Whistle Library post for my newsletter subscribers, since unprofessional bosses in the PR and advertising industry have already wrecked my career, I suppose AI can't be much worse!

On a serious note, ChatGPT has plenty of catching up to do. And until it does, the pen will continue to be mightier than the sword hanging over our heads.