Generative AI and disinformation - Assessing EU policies and Taiwan policy recommendations

Source: Shutterstock

Analyzing the 2024 presidential election, this op-ed illuminates Taiwan's susceptibility to AI-driven disinformation campaigns and the challenges in crafting effective regulatory frameworks. Drawing parallels with the European Union's landmark policies, it underscores the broader geopolitical significance of addressing the dynamics between technology, misinformation, and democratic resilience.

By Irene Chou and Tatiana Van den Haute
web only

Back in March 2023, Center for Humane Technology co-founders Tristan Harris and Aza Raskin presented the “AI Dilemma,” a talk on the existential risks posed by the rapid dissemination of artificial intelligence (AI) in our society. One of their proclaimed “three rules of technology” was that “when you invent a new piece of technology, you uncover a new class of responsibilities.”

So far, the frenzied pace at which generative AI has spread through society has left far too much of it unregulated, raising alarming questions of safety and oversight at national and global scales. For all its potential for good, AI’s capacity for adverse effects is not lost on researchers: the World Economic Forum, for instance, has placed AI, along with misinformation, disinformation, and cyber insecurity, among the top 10 global risks ranked by severity over the next decade.

Generative AI and Disinformation

As generative AI becomes more sophisticated and accessible to the public, the delicate balance between a healthy democracy and the protection of the public interest against disinformation must be addressed. A key concern lies in the “liar’s dividend,” a phenomenon in which the anticipation of prevailing disinformation breeds public distrust of reliable news sources and skepticism toward facts. In a democratic system where civic participation through opinion-sharing is essential, disinformation that corrodes public discourse must be sufficiently regulated.

Taiwan is a prime target of disinformation attacks, more and more of which are perpetrated using AI. Taiwan AI Labs, a pioneering non-profit that analyzes Chinese AI-generated election interference, released the “2024 Taiwan Presidential Election Online Information Manipulation Analysis Report,” the first-ever analysis of election manipulation conducted using large language models and multiple AI models.

The report highlighted the severity of information manipulation throughout Taiwan’s election season, which produced false dramatizations and deliberately polarized Taiwanese civil society. One finding showed that one in three messages on the presidential website came from non-native troll accounts, which entrenched themselves in social media groups and news channels and disseminated false information in both English and Mandarin.

As such, legal regulatory frameworks should be human-centered, with an emphasis on ethical practices that protect society against AI-generated disinformation.

In an exclusive interview, Taiwan AI Labs founder Ethan Tu (杜奕瑾) highlighted the rationale behind AI and digital media regulation: countries should “think about protecting humans, instead of protecting human manipulation.” In the face of global information manipulation, the European Union’s landmark series of legislation on AI and digital media provides a blueprint against disinformation and misinformation that is invaluable to future Taiwanese policies.

The EU’s Digital Services Act: A Primary Step in Protecting Citizens’ Fundamental Rights and Balancing the Market

While the rest of the world falls progressively behind in forming strategies to address the risks posed by AI, the EU has made a decidedly robust effort to create legislation that addresses generative AI and all the technological advancements that came before it. The first of these efforts, the Digital Services Act (DSA), together with the Digital Markets Act (DMA), was set out to secure digital spaces, protect the fundamental rights of users, and establish a regulated, level playing field in the European Single Market and worldwide. While the DSA deals with online intermediaries (e.g. online marketplaces, social networks, app stores, and content-sharing platforms), the DMA designates “gatekeeper” online platforms, namely Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft, which must follow its respective obligations. Both the DSA and DMA emerged in the context of a globally evolving and increasingly influential digital landscape that lacked a corresponding legal framework with the updated protections necessary to keep digital services in line with fundamental human rights.

The DSA establishes a set of due diligence obligations, including advertising transparency, clear terms and conditions that respect fundamental rights, and “know your business customer” requirements. Notably, it also includes asymmetric measures: Very Large Online Platforms (VLOPs), those with over 45 million users per month, carry enhanced responsibilities to mitigate systemic risks, since they reach the biggest audiences and can cause the most severe harms, and they are supervised by an EU board. Although the DSA officially becomes applicable starting next month, platforms have already started altering their systems to adhere to the legislation, with certain impact stories already noticeable. The DSA is thus instrumental in ensuring a consistent, proportionate, and effective regulatory framework for digital services across the EU.
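To make the asymmetry concrete, here is a minimal illustrative sketch of how the DSA’s tiered obligations scale with audience size. The 45-million-user threshold is the DSA’s own; the data structure, function name, and obligation labels below are hypothetical simplifications, not any official API or an exhaustive legal list.

```python
# Illustrative sketch only. The 45-million-user threshold comes from the DSA;
# the types, names, and obligation labels here are hypothetical simplifications.
from dataclasses import dataclass

VLOP_THRESHOLD = 45_000_000  # average monthly active EU users

@dataclass
class Platform:
    name: str
    monthly_active_eu_users: int

def dsa_obligations(platform: Platform) -> list[str]:
    """Return an (illustrative) list of due diligence obligations."""
    # Baseline duties apply to all online intermediaries.
    duties = [
        "advertising transparency",
        "clear terms and conditions respecting fundamental rights",
        "know your business customer",
    ]
    if platform.monthly_active_eu_users >= VLOP_THRESHOLD:
        # VLOPs face enhanced, asymmetric responsibilities under EU supervision.
        duties += [
            "systemic risk assessment and mitigation",
            "independent audits and EU-level supervision",
        ]
    return duties

print(dsa_obligations(Platform("ExampleSocial", 60_000_000)))
```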

For instance, the DSA mandates that VLOPs and Very Large Online Search Engines (VLOSEs) must identify, analyze, and mitigate risks related to electoral processes and civic discourse, while safeguarding freedom of expression. The implementation of the DSA was put to the test during the Slovak parliamentary elections on September 30, 2023, as VLOPs and VLOSEs were required to comply with the new regulations. Due to the DSA, there has been a notable shift in how these providers address electoral integrity. They have improved their response times to flagged content, established clearer escalation processes for handling disinformation and misinformation, enhanced fact-checking capabilities, and allocated more resources to address these issues.

Beyond helping to safeguard democracy, the DSA is also expected to curb targeted advertising aimed at minors, ensure greater transparency and user control, and enable more efficient, enforceable reporting of illegal content and illegal goods in online marketplaces.

The EU’s AI Act: A Novel and Future-Facing Legal Framework

While the benefits of such regulation are far-reaching, the DSA and DMA only really address the digital landscape as it was before the advent of generative, multi-modal AI. The European Commission first released a full AI Act (AIA) proposal in April 2021, and it is expected to be finally adopted in the coming weeks. Most obligations will become enforceable within 24 months, by early 2026; however, the ban on prohibited use cases will take effect within six months, and the obligations related to foundation models and general-purpose AI will become binding within 12 months. The final deal, reached in December 2023 after long negotiations spanning the EU Commission, Parliament, and Council, purports “to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high risk AI, while boosting innovation and making Europe a leader in the field.”

To give a brief overview, the AIA adopts a risk-based approach to AI regulation, classifying AI systems into four tiers, each carrying its own set of regulations: ‘unacceptable’ risk, ‘high’ risk, ‘limited’ risk, and ‘minimal’ risk. AI systems in the first category are viewed as contravening EU values and posing a clear threat to fundamental rights. These include, but are not limited to, biometric identification systems that use sensitive characteristics (e.g. political, religious, or philosophical beliefs, sexual orientation, and race), facial recognition databases, and AI systems that manipulate human behavior or exploit people’s situational vulnerabilities. The only possible exception to the ban is the use of biometric identification systems in publicly accessible spaces for law enforcement, and even then only with prior judicial authorization.

‘High-risk’ systems are defined as posing “significant potential harm to health, safety, fundamental rights, environment, democracy, and the rule of law,” and include medical devices, critical gas, water, and electricity infrastructure, and educational or professional recruitment systems. These carry a comprehensive set of mandatory compliance obligations, such as conformity assessments and fundamental rights impact assessments, and grant citizens the right to file complaints and receive explanations. This makes the EU one of the first jurisdictions in the world where citizens can lodge complaints about AI systems in use and have a right to an explanation of a system’s conclusions. ‘Limited risk’ systems, in turn, carry only lighter transparency obligations, while systems classified as ‘minimal risk’ are allowed free use under voluntary codes of conduct.
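As a rough illustration of this risk-based approach, the sketch below encodes the four tiers and maps a few example systems to them, following the description above. The lookup table is a readability device and an assumption on our part; actual classification under the AIA turns on detailed legal criteria, and the tier summaries are simplified.

```python
# Illustrative sketch only. The four tiers come from the AIA as described
# above; this lookup table is a simplification, not real legal logic.
from enum import Enum

class AIARiskTier(Enum):
    UNACCEPTABLE = "banned outright (contravenes EU values and fundamental rights)"
    HIGH = "conformity and fundamental rights impact assessments required"
    LIMITED = "lighter transparency obligations"
    MINIMAL = "free use; voluntary codes of conduct"

# Hypothetical example systems mapped to tiers per the article's description.
EXAMPLES = {
    "biometric ID using sensitive characteristics": AIARiskTier.UNACCEPTABLE,
    "medical device decision support": AIARiskTier.HIGH,
    "critical water/electricity infrastructure control": AIARiskTier.HIGH,
    "general-purpose chatbot": AIARiskTier.LIMITED,
    "spam filter": AIARiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system} -> {tier.name}: {tier.value}")
```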

The AIA is most notable for providing a new legal paradigm for dealing with the rapid advancement of artificial intelligence. It puts a heavy onus on companies themselves to conduct their own risk analyses: for instance, all organizations offering essential services (such as insurers and banks) must conduct impact assessments on how their use of AI systems will affect users’ fundamental rights. Like the DSA, the legislation scales with the AI model used; the more powerful models carry additional requirements to disclose how secure and energy-efficient they are.

The AIA’s governance mechanism will also include an independent panel of experts to offer guidance on the wider, systemic risks that AI poses. Working in tandem with the company-level obligations, this may help keep a realistic check on the AI models being disseminated, even as the technology inevitably progresses at an exponential pace. One only has to look at the steep fines for noncompliance, which range from 1.5% to 7% of a firm’s global sales turnover, to understand the gravity and significance of this legislation. While the EU will have to maintain a sharp and progressive eye on AI developments, the AIA is a pioneering attempt to cover the ‘new class of responsibilities,’ as Harris and Raskin put it, that come with this new technology.

Assessing Taiwan’s Regulatory Culture 

The EU’s regulatory sandbox, in which providers and deployers of AI can test innovative products, services, or business models in a controlled environment, is especially pertinent to Taiwan’s digital media space. As this election exemplified, when it comes to cognitive warfare and election interference, Taiwan is ground zero for China’s fast-improving disinformation tactics. Following the classic Russian disinformation playbook, China ultimately seeks to weaken Taiwanese people’s trust in their democratic institutions by sowing discord and uncertainty. As generative AI becomes smarter and cheaper, Taiwan’s vulnerability will only be further exploited. AI’s exponential growth calls for regulations that evolve in tandem with technological advancement. Still, the tension between democratic governance and free speech remains a challenge that Taiwan must overcome to establish a much-needed regulatory framework centered on generative AI.

Notably, Taiwan’s regulatory culture shows a well-founded hesitance toward any form of regulation of online content, which makes drafting comprehensive legislation to regulate AI even more difficult. Historically, Taiwan’s political evolution was marred by autocracy, martial law, and the specter of the White Terror. As a result, the island’s current democratic system firmly embraces freedom of speech, freedom of the press, and freedom of association.

This highly deregulated, commercialized media ecosystem has allowed Chinese influence to take the form of financial control over more traditional mass media outlets, co-opting Taiwanese media groups to create what is known as “red media.” Backed by Chinese funding, the Want Want China Times Media Group, for instance, acquired the China Times, Want Daily, CTi TV, and CTV, whose news coverage came with instructions on narrative framing and self-censorship to ensure a positive portrayal of China.

The passage of the 2019 Anti-Infiltration Act shows Taiwan’s determination to combat red media, but it also underscores how deregulated online digital media remains. The law states that any person or entity receiving support from “overseas hostile forces” faces up to five years of imprisonment and sizable fines. Remedial effects against Chinese influence were immediate. Master Chain, the only Taiwanese media outlet with China-based offices, terminated its operations in Taiwan. The widely watched CTi TV was fined NT$1 million (US$32,409) for broadcasting unverified news and shut down after its broadcast license renewal was denied. Yet while television and radio broadcast news media are governed for content accuracy by Taiwan’s National Communications Commission (NCC), there is little regulation of online journalism. When CTi TV shut down, the outlet simply moved online.

In 2022, the NCC proposed the draft Digital Intermediary Service Act (DISA) to regulate the online digital media space. Modeled after the EU’s DSA, the draft bill would regulate digital intermediary service providers, with the central aims of transparency, protection of user rights, and removal of disinformation. “Digital intermediary service providers,” meaning providers of Internet connection, caching, and data storage services, would be required to display the names and contact information of their representatives in Taiwan, or of a local agent representing overseas service providers. Service providers would also submit an annual transparency report, including consumer complaints and other essential data, and remove disinformation or illegal content upon receiving a court-issued restraining order.

Similar to the DSA’s asymmetric due diligence requirements, the draft DISA identified “designated online platform operators,” meaning online platforms with more than 2.3 million active domestic users. For these operators, which include YouTube, Facebook, Yahoo Auctions, and DCard, additional requirements include disclosing recommendation algorithms in user service agreements, removing illegal content upon notification, and alerting users when their content has been taken down. Platforms that contravened these requirements would face fines of between NT$1 million and NT$10 million (US$33,690 and US$336,904).
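For concreteness, here is a minimal sketch of the draft DISA’s two-tier scheme as described above. The 2.3-million-user threshold and the fine range come from the draft bill; the function and the duty labels are our hypothetical shorthand, not the statutory text.

```python
# Illustrative sketch only. The user threshold and fine range come from the
# draft DISA as described above; names and duty labels are hypothetical.
DESIGNATION_THRESHOLD = 2_300_000          # active domestic users
FINE_RANGE_NTD = (1_000_000, 10_000_000)   # fines for contravening operators

def disa_duties(active_domestic_users: int) -> list[str]:
    """Return an (illustrative) list of duties under the draft DISA."""
    duties = [
        "display local representative or agent contact information",
        "submit an annual transparency report",
        "remove content upon a court-issued restraining order",
    ]
    if active_domestic_users > DESIGNATION_THRESHOLD:
        # "Designated online platform operators" carry extra duties.
        duties += [
            "disclose recommendation algorithms in user service agreements",
            "remove illegal content upon notification",
            "alert users when their content is taken down",
        ]
    return duties

print(disa_duties(3_000_000))
```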

However, while the proposal would have streamlined the removal of disinformation and illegal content, opponents argued that the draft act amounted to administrative overreach. DISA would have allowed government ministries themselves to determine what counts as invalid content, such as medical or economic disinformation, an administrative mechanism that government offices have little experience with. Once the Taiwanese public came to see such content moderation as government censorship, the draft legislation came to a standstill; DISA was eventually abandoned amid criticism that it was unconstitutional and impinged on freedom of speech. In short, content moderation cuts against Taiwan’s regulatory culture, reminding wary citizens of an authoritarian past the island has worked hard to put behind it.

A Regulatory Framework that Addresses Generative AI

Nonetheless, according to Taiwan AI Labs, China’s barrage of disinformation has intensified significantly on social media platforms such as TikTok and YouTube. Critically, as Tu points out, debunking misleading narratives built by large language models is futile: every attempt at debunking only triggers a counter-debunking of the debunking statement, a cyclical process that helps Chinese influence dominate Taiwanese news media. Moreover, AI has become the perfect tool for quickly and cheaply acquiring the foreign-language capabilities and cross-cultural understanding that Chinese disinformation campaigns have often lacked. Leaving the digital media space without legislative supervision would only enable this rapidly advancing information manipulation.

According to the government policy nicknamed the “AI Taiwan Action Plan 2.0,” which delineates action items from 2023 to 2026, Taiwan seeks to prioritize AI ethics and legislation and to foster trustworthy AI development. To do so, digital media policy should address how generative AI enables and disseminates disinformation. Given Taiwan’s regulatory culture and the EU’s precedents in the DSA and the AIA, a revised successor to the draft DISA needs to include asymmetric due diligence, decentralization of authority over digital media, and online platform transparency.

Firstly, as in the DSA, asymmetric due diligence would prevent regulation from stymieing AI innovation among smaller digital intermediary service providers. The AIA’s risk-based categorization, accompanied by continuous, periodic audits by third parties, should be adopted to maintain online platforms’ accountability.

Secondly, unlike under the rescinded DISA, the power to define disinformation must be shifted from government ministries to independent third-party review boards overseen by judicial authorities and the public. Illegal content and disinformation could then be removed expeditiously, but with precautions in place against administrative overreach.

Thirdly, in democratic AI governance, transparency is key to ensuring the fundamental rights of online users. Users must be informed of platforms’ recommendation algorithms and of how advertisements are placed, and citizens’ complaints against digital online platforms should be directly heard and assessed.

Lastly, mediating online disinformation requires cross-sector collaboration. Much can be learned from the DSA’s lengthy deliberation process, during which the EU conducted in-depth impact analyses of multiple policy options, weighing economic impact and the protection of fundamental rights, while taking input from stakeholders at every level.

As Digital Minister Audrey Tang noted in her speech “Digital Democracy in the Age of AI” at the Concordia Annual Summit last September, a democratic deliberative process requires collaborative diversity. Teamwork among government, civil society, and the private sector is essential against disinformation, a battle of cognitive warfare that will continue to put Taiwan’s democratic values and regulatory culture to the test.

(This piece reflects the authors’ opinion, and does not represent the opinion of CommonWealth Magazine.)


About the authors:

Irene Chou is a policy analyst at Safe Spaces, focusing on strategic business and technological developments in the U.S., China, and Taiwan. She has experience in policy research on American politics and digital privacy across East Asia. She holds an honors B.A. in Political Science and English Literature from Brown University.

Tatiana Van den Haute is a policy analyst at Safe Spaces, a consulting firm focused on Taiwan’s new relationships in a time of strategic competition, where she focuses on Europe’s China policies and Middle Eastern affairs. She holds a bachelor’s degree from Sciences Po Paris, where she focused on political science and Europe-Asia relations. She can be reached at [email protected].

