As Papua New Guinea prepares for the 2027 national elections, concerns are mounting over the role of artificial intelligence in shaping public opinion.
At a panel discussion at the PNG Media Summit 2026 in Port Moresby today, specialists warned that AI-generated misinformation and deepfakes could pose serious risks to both electoral integrity and public trust.
The panel, moderated by veteran journalist Scott Waide, included Steven Matainaho, Secretary for the PNG Department of Information and Communications Technology; Michael Hseah, Non-Resident Fellow at Stanford University’s Center for International Security and Cooperation; and Craig McCosker, Product Strategy Manager at the Australian Broadcasting Corporation. The specialists spoke on both the dangers and potential benefits of AI for media and governance.
Waide opened the discussion by noting growing fears that AI-generated content could flood digital platforms during the election cycle, potentially influencing voter perception.
Hseah noted that this challenge is not unique to Papua New Guinea or the Pacific, but a global concern affecting democracies worldwide.
He also stressed that AI could be used to enhance quality journalism, enabling a single well-researched story to be adapted across multiple channels.
“AI can take one well-researched story, verified and edited carefully, and share it across radio, TV, print, online, and multiple languages; thus, making the work of one journalist reach as many people as a thousand,” he said.
Yet the same technology could also be exploited to generate large volumes of misleading content in minutes.
The panel also discussed the global rise of AI-driven propaganda videos, including widely circulated material from conflict zones such as Iran.
Hseah described this shift as part of “a thousand industrial revolutions,” where the cost of producing content “made by the mind” has fallen nearly to zero.
“We cannot pretend we live in the old world anymore.”
Matainaho stressed the need for strong governance around AI and data. He called for regulations to ensure that data protection, accountability, and cultural context are considered in how AI is deployed.
“When you input information into platforms like ChatGPT, whose data are you pushing in, where is it being processed, and how is it being used?”
He emphasized that Pacific nations often rely on AI models developed overseas, which may poorly represent local cultures and knowledge.
Closing the session, the panel agreed that AI is both a powerful tool and a serious threat.
McCosker added that media organizations must leverage AI’s speed and creativity to counter misinformation and understand colloquial speech, while maintaining honesty in their journalism.
Hseah concluded with a cautionary note: “The deeper question is not just what this technology does, but what values go into how we use it.”