AI capabilities are on the rise, but so are its perils, such as deepfakes. These technologies are difficult to contain, easily accessible, and capable of causing great harm to individuals, societies, and nations. The proverbial genie is out of the bottle; taming it will take more than government regulations.
The Good and the Bad
If you have not been living under a rock, you have probably heard the words Artificial Intelligence (AI), chatbots, Generative AI, and deepfake in the recent past. AI has been the flavour of the season, with corporate behemoths like Alphabet Inc. (the parent company of Google), Microsoft, the most press-attracting entrant in the race, OpenAI, led by Sam Altman, and multiple companies backed by the maverick billionaire Elon Musk all vying for a piece of this lucrative emerging technology. In this mad scramble to gain pole position in what may become the defining technology of the 21st century, and in the process create the world’s first trillionaire, companies backed by deep pockets and ambitious businesspersons are going all out to develop newer and more advanced algorithms.
These fast-paced technological advancements have created incredible opportunities for businesses to reshape their customer service using technologies like generative AI, and they have opened up entirely new sectors and industries. Think about self-driving cars! AI-based platforms have tremendously improved efficiency and productivity and helped streamline and automate run-of-the-mill administrative tasks, injecting greater competitiveness and agility into the corporate ecosystem. It is an exciting development, and many believe AI will be as consequential as the advent of electricity; some have even compared it to the invention of the wheel or fire! Only time will tell.
Much good has come with the mainstreaming of AI, and its impact is all too visible, from the social to the economic sphere, from data analysis to deep research, from customer service to managing complex projects. But like anything else, AI too has a darker side that has worried governments and institutions, and its all-pervasiveness has left no one untouched. Literally!
Perils of AI: Modi, Taylor Swift, Putin, Zelenskyy!
All hell broke loose when explicit morphed images of pop star Taylor Swift went viral on social media in January 2024. Millions saw those images online; one was viewed a whopping 47 million times before it was taken down, causing irreparable psychological and reputational damage to the pop icon and prompting the White House to step in and nudge online platforms to exercise caution and self-restraint. After the US government’s terse words, several images were promptly taken down, but the damage was done. “While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation, and non-consensual, intimate imagery of real people,” White House Press Secretary Karine Jean-Pierre was quoted as saying by a leading English news platform.
The ugly face of AI, the deepfake, reared its head when the Russia-Ukraine conflict commenced in the early months of 2022. Both warring sides deployed an array of warfare tactics, misinformation among them, including the liberal use of deepfake videos to create panic and confusion on the other side. In a roughly minute-long video, obviously doctored, Ukrainian President Zelenskyy was seen telling his soldiers to lay down their weapons and surrender. Oddly enough, this video not only found ample traction on Ukrainian social media but also made its way to the website of the country’s national television station, Ukraine 24, which later reported that hackers had pushed the fake Zelenskyy message onto its live television ticker before it was taken down. Again, the damage was done.
The almighty Russian President Vladimir Putin was not spared either! Deepfake videos of him announcing a peace deal with Ukraine did the rounds on social media, while doctored video game footage claimed that a Ukrainian fighter pilot was downing Russian jets with deadly, ‘Top Gun’-style accuracy. These were just some of the many instances where deepfake videos manufactured truth and created panic among the masses.
Closer to home, our beloved Prime Minister was seen celebrating Navaratri and dancing to the beats of some good old Garba music, much to the amusement of netizens. It was later discovered that the video was a deepfake, and the rebuttal came from the very top! PM Modi expressed concern over the misuse of technologies like AI and deepfake and the lack of verification measures for the people consuming them on social media. “There is a challenge arising because of Artificial Intelligence and deepfake…a big section of our country has no parallel option for verification…people often end up believing in deepfakes and this will go into a direction of a big challenge…we need to educate people with our programmes about Artificial Intelligence and deepfakes, how it works, what it can do, what all challenges it can bring and whatever can be made out of it…I even saw a video of me doing ‘Garba.’ One line taken out of context can cause tumult,” he said.
AI and Deepfake: A Plethora of Challenges for the Regulators
It’s like a can of worms for regulators. Think about the challenges that AI and deepfakes pose. Bias and the moulding of public opinion are among the key issues. Try running a write-up critical of Ukraine through Grammarly – a premier AI-powered tool that helps improve language accuracy and consistency – and it promptly tells you which side it stands on, even nudging you to check some more ‘information’ on the subject. Whether you support or deplore Russia is beside the point. A deft attempt to mould your opinion is underway!
Users have voiced similar experiences where US-based AI platforms have churned out not-so-favourable information about countries and subjects that do not suit their strategic narratives. Case in point: militancy in Jammu and Kashmir, or the Israel-Palestine conflict.
Then there are challenges like the lack of transparency, where end users seldom know what goes into creating the information they consume. AI systems are simply too complex and opaque, which undermines accountability and trust. Aside from ethical considerations like privacy and autonomy, AI technologies like deepfakes can manipulate public opinion, with serious repercussions for individuals, organizations, governments, and even nations.
We often come across cases of blackmail and extortion of individuals and organizations using deepfakes to create compromising or embarrassing content. There are also grey areas: who owns the content that comes out of the ChatGPTs of the world, and who holds the copyright?
Addressing these challenges is beyond the capabilities of any single organization or even a government. It will take a more concerted, multi-layered approach, one that involves all stakeholders and builds a consensus on the way forward for AI and related technologies for the greater good.
What’s Already There and What Could Be!
Federal and state governments across the world have enacted diverse laws around data privacy – such as the General Data Protection Regulation (GDPR) in the EU and the California Consumer Privacy Act (CCPA) in the US – that offer frameworks for protecting personal data. Intellectual property laws covering copyright, patents, and trademarks are also relevant to the creation of AI and deepfake content.
There is a consensus, though, that more needs to be done. Initiatives like the AI Safety Summit at Bletchley Park in late 2023, led by then UK Prime Minister Rishi Sunak and attended by government representatives of 28 countries, including rivals China and the USA, indicate that a coordinated effort to find a lasting solution to the AI problem is finally emerging. Sunak explained the idea behind the Summit, suggesting that “a serious strategy for AI safety has to begin with engaging all the world’s leading AI powers.” He added: “We must ensure that our shared understanding keeps pace with the rapid deployment and development of AI. That’s why, last week I proposed a truly global expert panel to publish a State of AI Science report.”
Drawing parallels with global efforts to counter the perils of climate change, he said: “This idea is inspired by the way the Intergovernmental Panel on Climate Change was set up to reach international science consensus. With the support of the UN Secretary-General, every country has committed to nominate experts. For the first time ever, we have brought together CEOs of world-leading AI companies, with countries most advanced in using it, and representatives from across academia and civil society. And while this was only the beginning of the conversation, I believe the achievements of this summit will tip the balance in favour of humanity. Because they show we have both the political will and the capability to control this technology and secure its benefits for the long term. Our first step was to have an open and inclusive conversation to seek that shared understanding.”
One important way to address the AI conundrum is international cooperation in developing global standards and best practices for AI and deepfake governance. Governments need to promulgate and enforce ethical guidelines to regulate the development and use of AI and deepfakes. There is also a pressing need to democratize data to improve data quality and diversity and address information bias. Ultimately, the more diverse the data AI algorithms are exposed to, the better the chances of them offering accurate, unbiased information.
Technology Will Always Outpace Legislation: Then What?
AI and computer technologies have come a long way since their early days in the 1940s. Remember Alan Turing, brilliantly portrayed by Benedict Cumberbatch in the movie ‘The Imitation Game,’ who broke the Enigma code and helped the Allied Forces defeat Nazi Germany, with some help from Polish scientists? Since then, governments have played catch-up with technology, and it will likely remain so. But a knee-jerk reaction to such issues will do more harm than good. It is important that governments take a holistic approach, seek advice from technology experts, and make them equal partners rather than looking for solutions from within.
New capabilities are emerging at a frantic pace. We will likely witness many more ground-breaking developments in the AI domain in the next few years, given the quantum of resources that companies are pouring into creating them. The question is: will they do more good than harm? The answer lies in how we cooperate internationally as a community to find amicable and lasting solutions that ensure such technologies empower our citizens rather than lead to some dystopian future like the one George Orwell envisaged in his seminal work ‘1984.’ The clock is ticking!
ABOUT THE AUTHOR
Shashank Shekhar is a young journalist and a regular columnist with DI Conversations who writes on politics and current affairs.