AI Entrepreneurs in Serious Need of Child Protection Lock

It’s hard to believe that half of 2025 has just flown by. At the start of the year the two most anticipated developments of 2025 were going to be Trump tariffs and a wave of AI innovations. You could say that these phenomena are opposed to each other, with the first slowing down the economy and the second promising to boost productivity and growth manifold. I am not sure they will cancel each other out, but it appears that both have been progressing in fits and starts.

Trump has been announcing tariffs and then pausing them or rolling some of them back, so it’s hard to tell where the American economy and the global economy are headed. The raised tariffs’ effect on consumption and on inflation cannot be deciphered yet, since we are still in the 90-day pause on reciprocal tariffs, and only the 25% tariff on aluminum and steel imports into the US – now raised to 50% – and the 25% tariff on automobile imports have taken effect. That said, the 145% tariff on Chinese imports was ridiculous, and renegotiations have lowered it somewhat. The problem is with advanced chips and semiconductors, as well as the rare earth minerals needed in large quantities by the consumer electronics industry and by carmakers, especially of EVs, worldwide. This is where the US and China hold the keys, and as the world’s two largest economies they ought to come to an agreement soon.

The tariff uncertainty has an effect on AI as well. As Nvidia chief Jensen Huang has pointed out to the US administration, the exorbitant tariffs, as well as the chip bans, affect Nvidia’s business in China and elsewhere. Apple’s Tim Cook too has been trying to impress upon the US administration the importance of China as well as India to iPhone manufacturing, and the importance of these markets themselves. Coming to AI, there have been reports of disappointment over Apple not releasing any major new AI feature, while Meta, it appears, is unstoppable. Mark Zuckerberg is said to have invested hugely in Scale AI, an investment believed to be aimed at developing general intelligence, or superintelligent AI. I am sure Google, Microsoft, Amazon and OpenAI are not far behind in the race to develop the most capable AI on the planet. At the same time, it is being reported that AI adoption by companies and organisations is slower than earlier thought.

So, it seems slow going on AI this year, perhaps. Which might be just what the doctor ordered, since AI is threatening to make some industries, such as advertising, extinct! Already, 2025 has seen a spate of massive layoffs in the US and elsewhere, mostly but not only in the tech industry itself. Mark Zuckerberg’s new superintelligent AI is said to be capable of making advertising agencies redundant. Global advertising agencies, meanwhile, have been so spooked by the AI phenomenon that their FOMO is leading them to go overboard in their AI investments, if news reports are to be believed. They are also merging and consolidating agencies to manage costs, thinking AI to be the holy grail that will transform their business and their relationships with clients – not realizing that if clients want AI so badly, they will invest in it on their own, hastening the demise of the advertising and brand communications business.

I am not actively working in the advertising and brand communications industry anymore, having been unemployed for almost two decades in India, but just reading these media reports and watching advertising agencies go overboard on AI, it strikes me that our business, and perhaps many others, are being led up the garden path. The advertising industry was already in a whole lot of trouble – most of it self-inflicted – and AI is not to blame for that, but AI is not a solution or panacea either!

I have been writing about AI now and then on my blog, especially about generative AI, which has become hugely popular. It is the worst form and application of AI in my opinion, and although I am not a techie, I do know that all of it is designed to give us assistants galore! The virtual assistant form of AI is not new, considering it has been around in Apple’s Siri and Amazon’s Alexa. It is just that ChatGPT and whatever follows it are more capable assistants that can do all our work for us. Then we are told AI is not here to take away our jobs, it is here to “collaborate” (the newest buzzword doing the rounds) with humans. The latest invention is AI “agents”, and I have yet to read more about what exactly these new forms of intelligence do.

Assistants, collaborators and agents galore; Image: Pixabay

We are having new inventions foisted on us every day by AI pioneers, innovators, engineers, etc. as if they are the new masters of the universe. The future is the world they have ordained through their superintelligent AI powers and we humans are simply minions in this brave new world. It is time we humans exercised our powers of thinking critically and questioning just how we allowed things to get so out of hand.

As I said, I am not a techie and am certainly not in awe of AI the way many people in my industry seem to be. I did some reading just to understand the evolution of AI as a technology, and it might surprise you to know that the first attempt at anything like AI was by the British mathematician and scientist Alan Turing, who simply wanted to know whether machines can think the way humans can. This was way back in 1950. I did read Turing’s paper (believe it or not!) after reading Ian McEwan’s book, Machines Like Me, which I have written about on my blog. I can never be sure whether what I downloaded from the internet and read is the original article, but it used the concept of an imitation game to explore whether machines can think.

The big milestone in the development of AI was the 1956 summer workshop at Dartmouth College in the US, proposed in 1955 by four scientists, two from the corporate sector – Nathaniel Rochester of IBM and Claude Shannon of Bell Labs – and two from academia, John McCarthy of Dartmouth College and Marvin Minsky of Harvard University. They wished to meet and work together for two months during the summer of 1956 on areas they thought were critical to the understanding of machine intelligence. They and other computer scientists working in this specialized area appear to have made good progress through the 1960s in the US. There seems to have been a slowdown of sorts during the 1970s and 1980s, as many experiments with AI didn’t yield the desired results, and there was a funding winter during these two decades as well.

However, I think Japan continued to make progress on AI in the area of robotics even through the 1980s, and we know that most of these robots were to be used in manufacturing, in order to compensate for a fast-ageing society. Most of the AI, machine-learning and deep-learning research projects in the US were being funded by the US Federal Government, especially the US Department of Defense. And from what I have read, it appears that the aim through most of those four decades of AI research was to make computers think like us humans. Whether in cognition, understanding of environment and context, reasoning, intuitive thinking or similar aspects of human thought processes, the goal seemed to be to achieve an intelligence as similar as possible to the one we humans possess.

The question to ask is: when and how did AI research in the US and elsewhere overstep the line dividing humans and machines? When all AI-related research in the early decades was about matching human intelligence and thinking, when and why was it allowed to cross the bounds of what we humans can “humanly” think and do? What was the tipping point? I believe – and I could be wrong – that researchers and companies got so carried away by the scope of their inventions and discoveries that, in trying to showcase their achievements and their AI prowess, they ended up ushering in a phase of AI that diminished human intellect and prized machine intelligence more.

This must have happened sometime in the 1990s, as AI experiments and their funding gathered momentum once again in the US. This is when IBM announced its big advance in AI with Deep Blue, which played chess against the world champion Garry Kasparov and, in their 1997 rematch, won. This cemented IBM’s leadership in the development of AI technology, and its next project, Watson, demonstrated its capabilities by winning the prestigious quiz show Jeopardy! in 2011. From there the company developed Watson into its biggest AI project yet, and we are all aware of its capabilities. If you recollect, Steven Spielberg too made a film called A.I.

If you notice, these were all moves at popularising AI, attempts to bring AI closer to us and to the human experience. However, if you also think about it, you will realise that these machines or AI bots were all trained on material created and developed by us humans over centuries and decades. And despite all the effort AI researchers made in the early decades trying to see whether machines could think the way humans do, I suspect the methods used to train and teach machines have little in common with the way we humans learn.

What’s more, the attempt at making AI superior to us clearly came also from training it through games such as chess. AI models have likewise been trained on the East Asian game of Go, for which international competitions are held – DeepMind’s AlphaGo famously beat the top Korean and Chinese players – according to the book The Coming Wave by Mustafa Suleyman, co-founder of DeepMind and now with Microsoft; I have written about this book as well on my blog. Training machines through games such as chess or Go injects a competitive element, not just between AI bots but primarily between AI and humans.

IBM’s Watson was popularised and sold to us as the machine that knows it all. You can ask it just about any question, and the AI bot knows more than we do, or can ever hope to know. Again, it establishes AI’s superiority over humans. And now we have autonomy in AI, which is where general intelligence AI is headed: machines that will no longer need or depend on human intervention, fully equipped to make decisions on their own.

IBM’s AI juggernaut, Watson; Image: Wikimedia Commons

This is where the lack of any regulation of the industry in the US comes into sharp focus. We have allowed an important and critical technology such as AI to become a social media content-creation tool and plaything, when it ought to have remained the closely guarded preserve of highly specialized and trained researchers, scientists and engineers trying to solve some of the world’s most challenging problems. As I have written before, AI has a huge and important role to play in data analysis, drug discovery, the invention of new technologies, clean energy, cybersecurity, and the like.

We are all aware of the harmful effects of AI as well. I have written in previous blog posts about AI being open to misuse, just like any other highly advanced and sophisticated technology, including in defence and cyber-warfare. And I have written that there needs to be a global framework setting the rules and regulations governing AI and its use in various fields, just as the world created global rules for nuclear technology and for telecommunications. There is something called the Hiroshima AI Process, which I first heard about in a Davos Summit 2024 panel discussion, that has been drafted to regulate AI. An idea first mooted by the G7 group of countries, it is not only a vague document, I think it is just eyewash; merely borrowing a name such as Hiroshima doesn’t do the task of regulation any justice. In fact, I think it is in poor taste.

But staying with the comparison with nuclear technology: the atom bomb too was invented through the coming together of several scientists in the US, in what has come to be known as the Manhattan Project, formally launched in 1942. It was prompted by a 1939 letter from Albert Einstein to the US President warning that the Nazi regime in Germany could be close to developing an atom bomb. As it turned out, not only did the US develop the atom bomb before the Nazi government did, it remains the first and only country ever to have used one – against Japan, which had attacked Pearl Harbour in 1941, at Hiroshima and Nagasaki in 1945.

The horrors of nuclear technology and the atom bomb still haunt us, even though plenty of advances have been made in the use of this technology for peaceful purposes such as energy and medicine. Which is why the international community under the UN created a set of rules governing its use. Surely, we are not waiting for a Pearl Harbour moment to strike us in the field of artificial intelligence, even though its dangers are not so apparent as those of the atom bomb were, decades ago.

The need to rein in AI entrepreneurs with a child-protection lock is here. AI is not a toy or plaything that a few digital wizards and geniuses, and even CEOs of tech companies, can afford to get carried away by, thinking they now hold the strings by which AI technology will benefit us human puppets, now and in the future. Even though the birth of artificial intelligence in the US was funded by the federal government and the US Department of Defense for several decades before private investors saw its commercial value, it is now private capital that is being poured into it like there is no tomorrow. That said, these AI startups and companies are all benefitting from large defence contracts handed out by the US government, the latest one going to OpenAI. Incidentally, OpenAI’s revenue is said to have doubled over the previous year.

AI is the new frontier in the economic, technological and military competition taking place around the world – the new arms race, if you like. This needs to be checked and regulated, as I think it is already going out of control. AI is where the economic competition between the US and China manifests itself most and puts the rest of the world at risk, whether through chip bans and supply-chain issues, trade in rare earth minerals, or indeed the defence capabilities of AI. Even in my industry of advertising and brand communications, AI is posing existential threats while at the same time promising magical transformation!

I think advertising company heads and heads of client organisations and CMOs need to step back, think hard about what their business needs really are, and whether AI is the solution. I am quite clear that the advertising and brand communications business is about providing brand communication, marketing and business solutions to clients, and it doesn’t matter whether AI is part of it or not. It also shouldn’t matter whether Meta is automating its advertising or not; that is not why clients seek the help of advertising agencies. If clients think AI is the magic wand solution, they are free to invest in it in-house and create their own communication.

It is clear that AI entrepreneurs and investors need to be protected from their own inventions and discoveries; they are in serious need of a child-protection lock. The world doesn’t need any more silly AI assistants, tech collaborators or agents before these become Frankenstein’s monsters in our lives. What we need is for governments around the world to start doing their job of putting the AI genie back into its bottle and handing it to a closed group of responsible scientists and engineers who will put it to better and more sensible use in fields such as technology, clean-tech and the energy transition, drug discovery, and endeavours of that kind.

Dare I say the global telecommunications framework too is in need of an update, what with Elon Musk’s Starlink satellite communications going global.               
