Singapore’s AI Governance Model is Setting an Example for the Rest of the World

Overview of Singapore’s AI Governance Framework: Why this is the ideal model for other governments

Singapore released the first edition of its Model AI Governance Framework on 23rd January 2019 for wider consultation, adoption, and feedback. This edition serves as a detailed, ready-to-implement guide for private sector organizations addressing the key ethical and governance issues that arise when deploying AI solutions. The framework aims to foster public understanding of and trust in AI technologies by explaining how AI systems work, encouraging open and clear communication, and building good data accountability practices.

Within a year of the first edition, Singapore released the second edition of the framework on 21st January 2020. This edition adds considerations that enhance the original framework's relevance and usability. It continues to take a technology-neutral approach that complements sector-specific guidelines and requirements.

Through this evolution, Singapore has strengthened its digital economy. The framework has created a foundation of trust in which organizations can benefit from technological innovation and consumers gain the confidence to adopt and use AI. This balanced approach facilitates innovation while safeguarding consumer interests, and it has become a major global point of reference. Singapore is one of the early movers in establishing guidelines to govern the use of AI.

 

The fundamental principles underlying the Singapore Model Artificial Intelligence Governance Framework are:

  • Decision-making processes should be explainable, transparent, and fair
  • AI solutions should be human-centric (strengthening human capabilities, including safety and well-being)

 

Key pillars of the Singapore Framework

  • Operations Management

Organizations must ensure that the datasets used for building models are unbiased and accurate in order to avoid unfair decisions. Organizations should also take measures to improve the explainability, repeatability, and traceability of their AI algorithms. In addition, datasets will need to be reviewed and updated periodically on an ongoing basis.

  • Internal Governance Structures

 Clear roles and responsibilities should be allocated within the organization's governance structure. Responsibilities should include:

  1. Applying risk-control measures
  2. Monitoring, maintaining, and reviewing AI models
  3. Selecting an appropriate AI decision-making model
  4. Providing training to staff who deal with AI systems

  • Stakeholder Communications

 Accurate communication builds trust. Organizations should therefore provide accurate information and be clear about whether Artificial Intelligence is used in their products or services. Organizations should also put in place communication channels for feedback and decision reviews, and should use easily understandable language.

  • Determining AI Decision-Making Model

 Before deploying AI solutions, an organization should weigh its objectives against the risks of using AI, taking into account differences in social norms and values across countries and jurisdictions. Based on this evaluation, the organization can determine the appropriate degree of human involvement in the decision-making model.
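The framework's degrees of human involvement (human-in-the-loop, human-over-the-loop, and human-out-of-the-loop) can be sketched as a simple mapping from assessed risk to an oversight level. This is an illustrative sketch only: the `choose_oversight` function and its low/high severity and probability inputs are hypothetical simplifications, not part of the framework itself.

```python
from enum import Enum


class Oversight(Enum):
    """Degrees of human involvement described in the framework."""
    HUMAN_IN_THE_LOOP = "a human approves each AI-assisted decision"
    HUMAN_OVER_THE_LOOP = "a human monitors and can intervene"
    HUMAN_OUT_OF_THE_LOOP = "the AI system decides autonomously"


def choose_oversight(severity: str, probability: str) -> Oversight:
    """Hypothetical mapping from assessed harm ('low'/'high' severity
    and probability of harm) to a degree of human involvement."""
    if severity == "high":
        # Severe potential harm warrants human sign-off on each decision.
        return Oversight.HUMAN_IN_THE_LOOP
    if probability == "high":
        # Frequent but low-severity harm: humans supervise and intervene.
        return Oversight.HUMAN_OVER_THE_LOOP
    # Low severity and low probability: autonomous operation is acceptable.
    return Oversight.HUMAN_OUT_OF_THE_LOOP
```

In practice, an organization would replace the coarse low/high inputs with its own risk-assessment criteria, informed by the social norms of the jurisdictions it operates in.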

Singapore's Model AI Governance Framework can be adopted by any organization that develops or uses AI. In fact, 75 organizations have signed up for the model, according to SCS research. According to Achim Granzen, principal analyst at the research firm Forrester, Singapore's model is already in use at large companies and corporations, including global banks and tech companies, to validate their technology and risk management frameworks.

The Singapore model recommends a risk-based management approach to address the technology-related risks associated with AI. The framework was designed collaboratively and sets out practical steps to control risk factors, including ethical measures and a self-assessment guide. All of this was developed in close collaboration among government agencies, corporations, and major tech companies.

Singapore's framework has been recognized as a secure and steady foundation for the responsible use of Artificial Intelligence and for its future evolution.