The RIGSS Blog

To stimulate analysis, innovation, and forward thinking, and generate new ideas and insight
on subjects that matter in 21st Century Bhutan.
A humble tribute to celebrate learning, leadership and service that His Majesty The King continues to champion.

Launched on 21st February 2021 to commemorate the 41st Birthday of His Majesty The King

DISCLAIMER:
The views and opinions expressed in the articles on the RIGSS Blog are those of the authors and do not represent the views of the institute.

DEMOCRACY IN THE AGE OF AI

POSTED ON July 25, 2024
Yeshey Ohm Dhendup
Research Officer, RIGSS

We are living in an age where Artificial Intelligence (AI) is revolutionising our everyday lives, transforming our economies and reshaping our social interactions. But what really is this AI that has become a topic of conversation everywhere, from dinner tables to public forums? Stanford Professor John McCarthy, who coined the term in 1955, defined AI as “the science and engineering of making intelligent machines”. AI, in its most basic sense, is a machine’s ability to think and act like a human being. From IBM’s Deep Blue defeating world champion Garry Kasparov at chess in 1997 to the launch of OpenAI’s ChatGPT in 2022, AI has become the “most powerful tool that humans have created”. The advent of AI has brought innumerable opportunities, allowing us not just to automate repetitive tasks but also to devise innovative solutions to global problems such as climate change. However, AI also poses critical challenges to humanity, most of which stem from our limited understanding of it and our inability to keep pace with its rapid advancement. And while the proliferation of AI has implications for a wide spectrum of social institutions, the risk it poses to democracy, an institution that fundamentally relies on the principles of equality and access to information, is arguably the gravest.

AI is redefining democracy, for better or for worse. Hence, it has become essential for democracies worldwide to put in place mechanisms to address these changes. Bhutan officially became a Democratic Constitutional Monarchy in 2008, with 79.38 per cent of voters participating in the first National Assembly elections in March, followed by the enactment of the Constitution of Bhutan in July of the same year. Bhutan underwent a unique transition to democracy; unlike many nations where democracy was demanded by the people, in Bhutan it was introduced by our far-sighted and visionary monarchs. Democracy has run smoothly in Bhutan for the past 16 years, with the fourth democratically elected government assuming office just last January. However, with more and more Bhutanese going online, “fake news on social media has already become a major challenge” for our political system. According to a Kuensel article, as of 2023, around 90 per cent of Bhutanese were active on at least one social media platform.

Disinformation, fake news and hate speech that build on “fear, insecurity, societal divisions, and ideological polarisation” are among the greatest threats to democracy, as they cloud people’s judgement and thereby impair their ability to make well-informed decisions. The most common application of AI in spreading misinformation is the deepfake, synthetic media that mimics a person’s voice and facial expressions. Deepfake technology has been used in several countries to misinform voters by making politicians appear to say things they never actually said. India, the world’s largest democracy, which recently went to the polls, grappled with the widespread use of deepfakes. For instance, videos of two popular Bollywood stars, Ranveer Singh and Aamir Khan, apparently campaigning for the Congress party were widely circulated and later claimed to be AI-generated. Similarly, a video of Home Minister Amit Shah advocating the abolition of quotas for Scheduled Castes and Tribes was also deemed fabricated. Other content, such as pornographic images, audio files and video recordings created to defame politicians, was also widely disseminated.

AI-powered automated social media accounts that are programmed to act and converse like humans, often referred to as bots, are another example of how AI is being used for malice. While these bots are typically designed to spread harmless information, such as weather forecasts and advertisements, they are increasingly being created to deceive people online. According to Freedom House, bots were employed in at least 16 countries to “sow doubt, smear opponents, or influence public debate”. Such widespread use of AI to manipulate political narratives has given rise to a phenomenon called the “liar’s dividend,” whereby the ubiquity of false news undermines public confidence in actual facts. In April 2023, the circulation of audio clips purportedly of the then finance minister of Tamil Nadu levelling accusations against his own party members sparked controversy in the state. Though he denounced the recordings as AI-generated, independent deepfake experts reported that portions of the tapes were real. Such practices can erode public trust in democracy and its institutions. AI has also enabled governments to tighten their online censorship. Governments and political parties alike have been employing AI to “remove disfavoured political, social and religious speech,” thereby leaving the public with only partial, filtered information.

AI-powered tools are also increasingly being deployed by governments to conduct mass surveillance without public consent, raising privacy and human rights concerns. Though such monitoring and tracking systems can deter crime and disorder in society, they also allow governments to exert excessive power over their citizens, thus risking the rise of authoritarian regimes. Advances in AI and computing power have also made it easier for attackers to breach cyber defences, fuelling cybercrime. Such tools not only pose the risk of election hacking but also expose government databases to potential misuse and exploitation. Cyberattacks and malware can likewise be employed to spread political and ideological propaganda. Apart from misinformation, the two biggest perils to democracy in the age of AI are perhaps the digital divide and the concentration of power. Billions of people around the world still cannot afford digital devices, let alone possess the know-how to use AI tools. In addition, only a handful of companies and governments have the resources to develop, train and deploy these systems. Microsoft, Google and Amazon together control over two-thirds of global AI systems, giving them extensive power to steer the global AI landscape.

That being said, AI also holds immense potential to strengthen democratic institutions by ensuring inclusivity, enhancing government services and promoting transparency. An example of how AI can be adopted by governments to promote inclusivity and improve service delivery is Bhashini, an Indian government-owned AI-powered translation tool. Launched in 2022, Bhashini offers translation services across 22 scheduled languages to ensure seamless digital service delivery by “bridging the language and digital divide”. Prime Minister Modi also used the tool for real-time language translation during his campaigns in the recent elections. AI tools can likewise be leveraged to educate and update citizens on policy issues, helping them make informed political arguments and decisions. In addition, citizen engagement platforms such as custom chatbots powered by Large Language Models (LLMs) can be developed to address citizens’ queries and concerns, be it guidance on electoral procedures or clarifications from candidates.
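To make that last point a little more concrete, the following is a minimal sketch, in Python, of how a citizen-engagement assistant might match electoral queries against a small set of prepared answers. It is purely illustrative: the questions, answers and function names are hypothetical, and a real deployment would sit on top of a properly governed LLM rather than the simple keyword matching used here to keep the example self-contained.

import string

# Hypothetical prepared answers, keyed by the keywords that signal each topic.
ELECTORAL_FAQ = {
    ("register", "voter"): "To register as a voter, contact your local electoral office with proof of citizenship.",
    ("polling", "station", "where"): "Your polling station is listed on your voter registration card.",
    ("postal", "ballot"): "Postal ballot facilities are available to eligible voters who apply before the deadline.",
}

def answer_query(query: str) -> str:
    """Return the prepared answer whose keywords best match the citizen's query."""
    cleaned = query.lower().translate(str.maketrans("", "", string.punctuation))
    words = set(cleaned.split())
    best_answer, best_score = None, 0
    for keywords, answer in ELECTORAL_FAQ.items():
        score = len(words.intersection(keywords))
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer or "Please contact the election commission helpline for this query."

if __name__ == "__main__":
    print(answer_query("Where is my polling station?"))
    print(answer_query("How do I apply for a postal ballot?"))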

Moreover, AI can be used by election administrators to ensure transparency and fair play during elections. In 2022, an AI-driven online hate speech detector was launched in Kenya to counter misinformation and hate speech ahead of the country’s general elections. The tool summarised social media content in English, Kiswahili and Sheng, providing real-time data for timely interventions. According to an EU report, AI will also play a crucial role in policymaking by enabling policymakers to identify societal issues through rapid data analysis while also providing information that deepens their expertise on these topics. AI can further enhance people’s well-being by offering innovative solutions to social problems, for instance by tracking illicit financial flows to counter corruption or by predicting climate threats to mitigate natural disasters.
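As a simple illustration of the kind of screening such a detector performs, and not a description of the Kenyan tool itself, the sketch below flags social media posts that contain terms from a small, invented watchlist so that human moderators can review them. Real systems rely on trained language models across multiple languages rather than a fixed word list; the watchlist and sample posts here are made up for the example.

# Hypothetical watchlist standing in for a trained hate-speech model,
# so that the sketch stays self-contained and runnable.
WATCHLIST = {"traitor", "vermin", "eliminate"}

def flag_posts(posts):
    """Return the posts containing at least one watchlist term, for human review."""
    flagged = []
    for post in posts:
        tokens = set(post.lower().replace(".", " ").replace(",", " ").split())
        if tokens & WATCHLIST:
            flagged.append(post)
    return flagged

if __name__ == "__main__":
    sample_posts = [
        "Join us at the rally this Saturday.",
        "These people are vermin and must be driven out.",
    ]
    for post in flag_posts(sample_posts):
        print("Flagged for review:", post)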

The ever-evolving AI sector presents a promising future for humankind; however, it has become imperative to make our democratic systems future-proof and AI-proof. As Bruce Schneier, a public policy lecturer at the Harvard Kennedy School, aptly states, “Our democratic systems have not evolved at the same pace that technologies have”. Hence, to minimise the potential risks that AI presents while maximising its benefits, governments should put in place robust and adaptive regulations to govern its use, especially within the democratic context. This need is even more urgent in Bhutan, where democracy is still in its nascent stage, making us particularly vulnerable. Although not yet in the political sphere, Bhutanese have fallen victim to online scams using deepfake technologies, and it may not be long before such tactics are replicated for political purposes. Moreover, as the technology advances, it is reportedly becoming ever more challenging to distinguish authentic content from AI-generated content. Therefore, instituting clear guidelines for the use of AI to ensure transparency, accountability and the protection of privacy has become imperative.

An effective way to curb the use of AI for misinformation, especially during the electoral process, would be to put in place AI detection tools and to ban the use of AI in political campaigns. For instance, the Federal Communications Commission of the United States outlawed robocalls using AI-generated voices earlier this year, a measure enacted to curb the dissemination of fake news in the lead-up to the upcoming elections. Further, providing AI and digital education to citizens can empower them to engage responsibly and critically with these technologies. To curb the negative implications of AI, Taiwan applied a pre-bunking method to educate its people, widely circulating videos that showed politicians being deepfaked along with explanations of how such fakes are made. This, according to Audrey Tang, Taiwan’s Minister of Digital Affairs, helped build ‘inoculations’ among the people, making them more resistant to fake news during the 2024 elections. In addition, UNESCO’s report on Artificial Intelligence and Democracy highlights the need for a comprehensive national strategy on digitalisation and artificial intelligence. Such policies, according to the report, should be formulated and implemented through a multi-stakeholder approach to ensure that AI tools are applied to enhance rather than undermine democracy.

Countries must find innovative mechanisms to govern AI so as to harness its transformative advantages while safeguarding democratic principles. While mapping out clear and stringent regulations is a step in that direction, governments must also ensure that these rules do not hinder creativity and leave enough room for innovation to flourish. Helen Toner, a former board member of OpenAI, in her TED talk “How to Govern AI”, underscores three key strategies for achieving a balanced approach. Firstly, governments must prioritise adaptability over certainty when designing policies, enabling them to anticipate and respond to AI developments. Secondly, companies should be mandated to disclose information about the technology they are building and its use cases; these companies, she highlights, should also allow external auditors to scrutinise and give feedback on their work. Lastly, incident reporting mechanisms should be established to allow governments to collect data on AI-related issues for informed problem-solving and decision-making. By implementing adaptive regulations, fostering transparency and encouraging the responsible and ethical use of AI, countries can harness its potential to make their democratic institutions resilient and ready for an AI-driven world.
