Constanze S. Albrecht
Emerging Tech Expert

From AI Policies to Practices: Insights from Thailand’s AI Governance Clinic (AIGC)

Countries outside the Western hemisphere might become leading AI-guardrail makers, crafting norms that protect and empower individuals and help safeguard societies and our planet.

The debates in Europe and the US about the need for and modalities of regulating Artificial Intelligence (AI) have recently received much public attention. In contrast, efforts by majority world countries that are currently refining and implementing national AI policies tailored to their specific economic, cultural, and legal contexts have received far less coverage, if any.

Taking this knowledge gap seriously, this short article offers first-hand insights into some noteworthy AI policy developments in Thailand. It focuses on an innovative approach to advancing AI ethics and governance: building institutional capacity to bridge the gulf between a rapidly evolving set of abstract AI policies and their practical, on-the-ground implementation within an ecosystem that is less well resourced than those of many Western countries.

Starting Point: Thailand’s AI Policy and Ethics Frameworks

Digital technology and AI have long been recognized as important policy domains in Thailand. The Kingdom’s 20-Year Digital Economy and Society Development Plan (2017-2036), for instance, outlines a roadmap in areas such as infrastructure, digital economy, inclusion and equality, digital government, skills and the future of work, and trust, offering a digital vision to implement its 20-Year National Strategy (2018-2037). 

More recently, the Prime Minister’s Cabinet Office approved Thailand’s National AI Strategy and Action Plan (2022-2027), which aims to create an effective ecosystem that promotes AI development and application and thereby enhances the economy and quality of life by 2027. Among the top strategies and work plans outlined in the national AI framework is the objective of preparing Thailand’s social, ethical, legal, and regulatory readiness for AI applications.

Based on these policies and frameworks, the Ministry of Digital Economy and Society teamed up with Mahidol University and Microsoft Thailand to draft and recently issue an AI Ethics Guideline aimed at guiding researchers, designers, developers, service providers, and users – including government agencies – in the ethical design, development, deployment, and use of AI-based technologies to the benefit of humanity, society, and the environment. After reviewing various AI ethics principles from across the world (including leading frameworks by the OECD and UNESCO, among others) and examining efforts from both the public and private sector, the Ministry of Digital Economy and Society outlined six core ethics principles for Thailand’s evolving AI ecosystem: 

  • Competitiveness and sustainable development
  • Compliance with laws, ethics, and international standards
  • Transparency and accountability 
  • Security and privacy
  • Equality, diversity, inclusiveness, and fairness
  • Reliability 

At least on the surface, most of these principles sound familiar in light of the numerous AI ethics initiatives from across the globe and seem to a large extent inspired by the OECD AI Principles, which have become a gold standard in AI ethics. From a comparative perspective, and as input for future reform, the guidelines have nevertheless been criticized on procedural and substantive grounds: critics point to a lack of participation in the drafting process and a bias towards promoting AI for the country’s economic competitiveness, while deemphasizing awareness raising and literacy-building as well as the long-term risks of AI to the public.

Another point of criticism relates to the challenge of operationalizing the high-level ethics guidelines in practice – a concern voiced particularly among members of the Thai start-up community. While Thailand’s AI Ethics Principles offer some implementation guidance, the problem of translating policies into practice in the field of technology and society is not unfamiliar in the Thai context (nor, to be fair, in other countries) and was previously diagnosed by the OECD with respect to Thailand’s digital policies.

From Principles to Practices: ETDA’s AI Governance Clinic

Recognizing the challenges associated with translating a growing number of high-level policies and guidelines into the nuts and bolts of Responsible AI design, development, and use, the Electronic Transactions Development Agency (ETDA) recently launched an innovative platform to close the translational gap between policies and practices.

Established under the Ministry of Information and Communication Technology, ETDA seeks “to promote and drive Thailand’s economy and society to become a digital economy and society in which all sectors can conduct reliable transactions online with confidence, security and safety.” 

In collaboration with the Digital Asia Hub Thailand – a non-profit think tank incubated by Harvard’s Berkman Klein Center for Internet & Society – and in partnership with the TUM Think Tank of the Munich School of Politics and Public Policy at the Technical University of Munich, ETDA piloted an AI Governance Clinic (AIGC) in 2022. The AIGC is an entrepreneurial capacity-building effort that brings together a diverse community of local academics, start-ups, members of civil society, and civil servants, as well as international experts from various fields, to create a novel forum for knowledge exchange when translating AI policies into practice.

The 2022-2023 activities of the AIGC (disclosure: I helped incubate the organization as a junior consultant to the Digital Asia Hub Thailand) included research and development of practical AI guidelines and toolkits, the launch of educational programs, the incubation of a fellows program, and the convening of an interdisciplinary international policy advisory panel, among other efforts. For instance, the AIGC’s “AI Governance Guidelines for Executives” build upon the national policies and principles mentioned above by providing decision-makers in the private and public sectors with a toolkit that supports the operationalization of principles of good governance. 

The Guidelines were rolled out at a flagship event in August 2023 with representatives from the healthcare, financial, and government sectors and are currently being piloted, with the support of a local university, in the medical industry. The AiX AI Executive Program, to highlight another effort representative of the AIGC’s mode of operation, offers healthcare administrators a tailored introduction to the guidelines and an accompanying toolkit, which includes tools such as an AI readiness assessment framework, a framework for AI use cases in health organizations, and practical guidance on risk assessment and mitigation for AI use in healthcare.

The most innovative part of the AIGC is arguably its interdisciplinary “clinical” arm, which offers participants from the private and public sectors – ranging from small start-ups to other governmental agencies – a space to discuss the implementation challenges of Responsible AI governance, get access to advice from both local and international domain experts, and share insights gained when translating high-level policies, principles, and guidelines into action in diverse contexts such as healthcare, finance, and government. 

While still at an early stage and a work in progress, Thailand’s AIGC – notably modeled after a policy practice piloted at the AI Ethics and Governance Initiative co-hosted by the Berkman Klein Center for Internet & Society at Harvard University and the MIT Media Lab – promises to serve as an effective multi-stakeholder platform for learning and knowledge sharing across industries at a time when both technology and policy development are in flux. The rise of Generative AI, particularly ChatGPT, has been a case in point: the AIGC expert network has already been able to offer rapid advice and initial guidance, also by connecting local and international communities of research and practice. 

Work Ahead and Innovative Governance 

Even for the best-resourced governments, AI is a challenging policy area with tricky trade-offs and many unknown unknowns – attributes characteristic of what is often called a “wicked” public policy domain. While the EU seeks to establish itself as a global leader in AI regulation through the AI Act, the Trustworthy AI paradigm, and supplementing measures, and while the debates about legislative interventions are heating up in the US as well, it is important to recognize the truly global scale of the AI governance challenge, with many regions and countries still in the early stages of creating a robust AI governance ecosystem.

Nonetheless, several countries outside the Western hemisphere have already established themselves as leading voices in the international debates about the ethics and governance of AI – often balancing (perceived and real) tensions between innovation and regulation differently than their European counterparts. 

Singapore, for instance, established itself as a regional leader with its Model AI Governance Framework and AI Verify, an AI governance testing framework and toolkit that provides verifiability by allowing AI system developers and owners to demonstrate their claims about the performance of their AI systems. Other closely watched countries in Asia include China – with its recent comprehensive AI regulation – and Japan with a focus on agile governance of AI, among other nations. 

Thailand offers a less visible, yet similarly noteworthy case study at a point in time when the global community is not only doubling down on the quest to come up with appropriate guardrails for AI, but also faces the serious challenge of translating typically abstract policies, principles, and guidelines into actual practice, which makes all the difference on the ground. 

Through the AIGC and related initiatives, ETDA has taken a novel and agile approach to policy implementation and capacity building. It represents an early use case from a majority world country that combines top-down policy guidance with bottom-up multi-stakeholder collaboration, fosters knowledge exchange between local communities, and facilitates international engagement through a series of channels, including an international advisory panel and participation in UNESCO’s global AI efforts, to mention just a few of the ongoing endeavors. 

At a pivotal moment when AI technologies advance so quickly and various guardrail-makers seek to keep pace with what comes out of the labs – whether in the context of UNESCO’s Recommendation on the Ethics of Artificial Intelligence, the EU AI Act, or perhaps in the near future under the auspices of the United Nations – governments and other key AI governance stakeholders need to beef up their capacities to translate norms into lived realities that protect and empower individuals and help safeguard societies and our planet.

Countries outside the Western hemisphere, including Thailand, might be among the places where novel approaches to this challenge emerge and evolve. It is in this context that products that embed AI principles and ethics into the development and integration of AI systems, such as those developed by Daiki, will play a critical role in closing the translational gap by operationalizing policy guidelines into practice, accelerating the deployment of reliable and trustworthy AI, and fostering the interplay of regulation and innovation.
