On December 23, 2024, the Center for Media Research – Nepal (CMR-Nepal) organized a panel discussion on “Combating Misinformation in the Age of Artificial Intelligence (AI)” during the Kathmandu Conference on Communication and Media. The summary report follows:
Moderator: Ms. Namrata Sharma
Panelists: Mr. Rewati Sapkota (Communication Registrar, Bagmati Province), Mr. Deepak Adhikari (Editor, Nepal Check), and Ms. Prema Thapa (International Republican Institute, Nepal)
Panel Summary
The panel examined the growing problem of disinformation in the digital era and AI’s dual role in both propagating and countering false narratives. AI provides powerful resources for fact-checking and debunking manipulated content, yet the same technology accelerates the production and spread of misleading information. Tools like ChatGPT have transformed how information is consumed and transmitted, but they can also be misused to generate falsehoods at scale. AI-generated content can be difficult to distinguish from authentic material, although visual errors sometimes give it away, and the resulting disinformation has affected politics, health, and local communities alike.

Panelists agreed that media literacy and professional journalism are crucial for verifying and contextualizing information, and that public education on media literacy is essential. Since social media platforms are the main conduits for false information, reducing its spread requires user awareness campaigns, ethical content-sharing norms, and regulatory action, alongside policy-level measures such as regulating content distribution and incorporating media literacy into educational institutions.
Panel Recommendations
The panel recommended:
- Investing in AI-driven fact-checking tools
- Developing comprehensive media literacy programs
- Regulating content sharing on social media platforms
- Encouraging ethical content sharing practices
- Strengthening collaboration between journalists, policymakers, and tech developers to combat misinformation effectively
Summary of Individual Panelist Statements
Deepak Adhikari: “Let me talk about my professional experience. Early in my career, I worked for two years as a member of South Asia Check. Due to financial constraints, our group later established NepalCheck.org, where we verified facts in both English and Nepali. Plane crashes, which were noteworthy occurrences at the time, were one of the main subjects we covered. AI wasn’t available to help us back then, but it is now, and it has revolutionized our work. Our fact-checking has moved from the TV era to the social media era, and from false information about COVID-19 to the present day.

An AI-generated image from Jajarkot that went viral was one of our first significant fact-checks involving AI. We must treat new technology as a double-edged sword, with both advantages and disadvantages. Although AI has simplified the creation and monetization of content, we still need to apply our own judgment, since AI is not perfect, particularly with details like faces and colors. Human intelligence remains essential to combating AI-generated misinformation.
A power imbalance exists: whoever is strong always targets the weak. Social media merely magnifies discrimination such as untouchability, yet when we examine the structure, we see only the symptoms, not the underlying causes.”
Prema Thapa: “Yes, AI presents both challenges and opportunities. Talking about information integrity, COVID-19 brought a lot of challenges. The provincial-level lawmakers have their own understanding of any changes.”

“The IRI report, Democracy in the Age of AI, clearly outlines the opportunities and challenges we face. In the past, disinformation was a major problem, and during our policy dialogues we discovered that both disinformation and misinformation were consumed locally. Disinformation is not limited to one sector; we have seen its impact in health and politics.”
“To address the situation, we need a group-specific approach to tackling disinformation. At the policy level, there is a significant gap between debate and the actual implementation of proposed initiatives. That said, political parties are interested, as they are both users and victims.”
Rewati Sapkota: “As consumers, we must emphasize media literacy because disinformation has grown to concerning levels in this era of artificial intelligence. Public awareness is necessary to help us, the audience, recognize and counteract false information. It’s important to keep in mind that we are technology users rather than necessarily inventors, and that distinction calls for vigilance.

For policymaking to address these issues successfully, a long-term perspective is required. Additionally, journalists need training to ensure credibility and fact-checking, as their role is more important than ever. Although social media has emerged as a major information source, it is also a place where accurate information can generate ad revenue. By taking advantage of this, we can build sustainable media ecosystems that address economic pressures while also combating disinformation.
Together, let’s create a future in which media literacy becomes a pillar of our society and the truth prevails.”
Question and Answer Session
Questions Asked
Sherman Sharma (Herd International): “Misinformation has grown alongside the use of AI. How can we stop it? And looking across the past ten years, what level of digital literacy have we reached in the digital age?”
Baburam Acharya (CDJMC): “Why are social media and AI perceived negatively, and how can the government change the way people view social media? How can we use social media to reach audiences effectively, as foreign leaders such as Trump have done?”
Rudra Khadka (Journalist): “Is AI beneficial, and how can we use it for the better?”
Aditi Sharma (Madanbhandari Memorial College): “How do AI-generated images become viral? How can people, such as women celebrities, LGBTIQ individuals, or men whose photos are used in viral images on social media, be warned in advance? How can we ensure these images are checked by the authorities?”
Bibek Pageni (Butwal): “How has false information affected mainstream media? Why don’t laws on misinformation include specific provisions for children and the elderly? Why isn’t this handled by a dedicated department?”
Unnamed Participant: “Why have we left access to false information unchecked while neglecting digital literacy? Where are the control mechanisms?”

Panelist Responses
Rewati Sapkota’s Reply
“I appreciate you bringing up these important topics. Indeed, control is a global and local problem that affects more than just the media. At the provincial level, there are currently no effective mechanisms in place to address digital literacy and misinformation, and priorities have shifted.
To address this, we must prioritize raising digital literacy through public service announcements and the development of easily accessible educational materials, such as plain-language books. We must also change the way we think about digital devices, emphasizing their advantages over their drawbacks.
Regarding the establishment of a specialized department, I admit that we are just in the planning stages, but things are moving forward. To ensure a more efficient and successful approach, we are actively exploring the integration of data systems, much like how citizenship data is managed. We’re working on developing a framework to help with the curation and public release of some data. The department will be established, but it will take rigorous preparation and teamwork. Let’s continue this conversation and collaborate to create a system that empowers people, fights false information, and promotes a digitally aware culture.”
Deepak Adhikari’s Reply
“Technology has a dual nature, with both advantages and disadvantages. You are entirely correct: to give students the tools they need to navigate the digital world responsibly, we must intervene in education at all levels, beginning with primary school. This includes teaching children to evaluate information, develop critical thinking, and use technology as a tool, an extension of their minds, rather than becoming overly reliant on it.
You make a valid point when you say it’s important to know how and where our content gets spread. People need to be taught the value of creating secure passwords, understanding privacy settings, and exercising caution when disclosing personal information online. We require public awareness campaigns, cybersecurity workshops, and comprehensive educational initiatives that incorporate digital literacy into the curriculum in order to solve these issues. By doing this, we can reduce the dangers associated with technology while enabling people to use it as a tool for innovation and growth. Together, let’s build a future in which everyone is prepared to use technology responsibly and it becomes a friend rather than a threat.”
Prema Thapa’s Reply
“It is crucial to address AI’s role in combating misinformation. Through low-cost, mass communication, artificial intelligence (AI) can be a potent weapon in the fight against disinformation, allowing us to quickly spread factual information and disprove myths. To prevent unforeseen consequences, we must, however, also ensure AI is applied ethically and responsibly.
It’s crucial that traditional media be strengthened, as you mentioned. Since traditional media outlets have a long history of being reliable, we should support them in order to maintain their position as trustworthy information providers. At the same time, we must focus on policy frameworks that tackle the problems that AI and digital media present. This involves drafting laws that support accountability, transparency, and fact-checking.
Fact-checking is an essential component of this task. To ensure that information is accurate before it spreads, we need to invest in robust human and AI-driven fact-checking systems. We can build a more knowledgeable and resilient society by combining the advantages of AI tools, governmental initiatives, and traditional media. While being cautious about its possible misuse, let’s embrace AI as a force for good.”
Moderator’s Closing Remarks
In her concluding remarks for the conference, moderator Namrata Sharma summarized the day’s topics and underlined the importance of journalism in an age of rapidly evolving technology. She emphasized that technology is here to stay, whether we like it or not, and that it is our duty to use it for the benefit of society.

She emphasized how journalism can be used as a means of bridging the divide between the “haves” and the “have-nots.” From far-flung locations like Humla and Jumla to global concerns like climate change, journalists must prioritize increasing investigative journalism, holding politicians accountable, and boosting productivity. She reminded the audience that accuracy and honesty are at the core of journalism. By incorporating technology into journalism, we can raise important concerns like social inequality and climate change while simultaneously challenging established power structures. She asked the audience a number of challenging questions that made them consider how journalism might remain true to its core values while embracing the digital age.
Ms. Sharma thanked the speakers, organizers, and attendees for their contributions to the stimulating conversation. She expressed hope that the conference’s ideas and solutions would spur practical action toward a society that is more educated, compassionate, and digitally aware.